Tag Archives: data
Marketers, Are You Ready? The Impending Data Explosion from the New Gizmos and Gadgets Unveiled at CES
This is the first year in a very long time that I wasn’t in Las Vegas during CES. Although it’s not quite as exciting as actually being there, I love that the Twitter-verse and industry news sites kept us all up to date about the latest and greatest announcements. Now that CES2015 is all wrapped up, I find myself thinking about the potential of some very interesting announcements – from the wild to the wonderful to the leave-you-wondering! What strikes me isn’t how useful these new gizmos and gadgets will likely be to me and my consumer counterparts, but instead what incredible new data sources they will offer to my fellow marketers.
One thing is for sure… the connected “Internet of Things” is indeed here. It’s no longer just a vision. Sure, we’re just seeing the early stages, but it’s becoming more and more mainstream by the day. And as marketers, we have so much opportunity ahead of us!
I ran across an interesting video interview on the CES show floor with Jack Smith from GroupM on Adweek.com. Jack says that “data from sensors will have a bigger impact, longer term, than the Internet itself.” That is a lofty statement, and I’m not sure I’ll go quite that far yet, but I absolutely agree with his premise… this new world of connectivity is already shifting marketing, and it will almost certainly radically change the way we market in the near future.
Riding the Data Explosion (Literally)
The Connected Cycle is one of the announcements that I find intriguing as a marketer. In short, it’s a bike pedal equipped with GPS and GPRS sensors that “monitor your movements and act as a basic fitness tracker.” It’s being positioned as a way to track stolen bicycles, which is a massive problem in Europe particularly, with the side benefit of being a powerful fitness tracker. It may not be as sexy as some other announcements, but I think there is buried treasure in devices like these.
Imagine how powerful that data would be to a sporting goods retailer. What if the rider of that bicycle had opted into a program that allowed the retailer to track their activity in exchange for highly targeted offers?
Let’s say that the rider is nearing one of your stores and it’s a colder than usual day. Perhaps you could push an offer to their smart phone for some neoprene booties. Or let’s say that, based on their activity patterns, the rider appears to be stepping up their activity and riding more frequently, suggesting they may be ready for a race you are sponsoring in the area in a few months. Perhaps you could push them an inspirational message about how well they’re progressing and ask whether they’ve thought about signing up for the big race, with a special incentive of course.
The segmentation possibilities are endless, and the analytics that could be done on the data leave the data-driven marketer salivating!
Home Automation Meets Business Automation
There were numerous announcements about the connected “house of the future”, and it’s clear that we are just at the beginning of the home automation wave. Several of the big dogs like Samsung, Google, and Apple are building or buying automation hub platforms, so it’s going to be easier and easier to connect appliances and other home devices to one another, and also to mobile technology and wearables. As marketers, we have incredible potential to really tap into this. Imagine the possibility of interconnecting your customers’ home automation systems with your own marketing automation systems. Marketers will soon be able to literally serve up offers based upon things that are occurring in the home in real time.
Oh no, your teenage son finished off all but the last drop of milk (and put the almost-empty jug back in the fridge without a second thought)! Not to worry, you’ve linked your refrigerator’s sensor data with your favorite grocery store. An alert is sent asking if you want more milk, and oh by the way, your shopping patterns indicate you may be running out of your son’s favorite cereal too, so it offers you a special discount if you add a box to your order. Oh yeah, of course he was complaining about being out just yesterday! And voilà, a gallon of milk and some Cinnamon Toast Crunch magically arrives at your door by the end of the day. Heck, it will probably arrive within an hour via a drone if Amazon has anything to say about it! No manual business processes whatsoever. It’s your appliance’s sensors talking to your customer data warehouse, which is talking to your marketing automation system, which is talking to a mobile app, which is talking to an ordering system, which is talking to a payment system, which is talking to a logistics/delivery system. That is, of course, if your internal processes are ready!
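To make the plumbing a bit more concrete, here’s a minimal Python sketch of that fridge-to-offer hand-off. Everything in it – the event shape, the discount logic, the cross-sell lookup – is a hypothetical illustration, not any real grocery or appliance API:

```python
def handle_sensor_event(event, purchase_history):
    """Turn a raw appliance sensor reading into candidate offers."""
    offers = []
    # Low-inventory trigger: the jug is nearly empty.
    if event["item"] == "milk" and event["level"] < 0.10:
        offers.append({"item": "milk", "discount": 0.0})
        # Cross-sell based on past shopping patterns.
        for related in purchase_history.get("milk", []):
            offers.append({"item": related, "discount": 0.15})
    return offers

history = {"milk": ["cereal"]}           # past baskets: milk usually bought with cereal
event = {"item": "milk", "level": 0.05}  # fridge sensor reading: 5% of a jug left
offers = handle_sensor_event(event, history)
```

The real work, of course, is in the integration behind each of those dictionaries: the sensor feed, the customer data warehouse, and the ordering system all have to speak to one another.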
Some of the Weirder and Wackier, But There May Just Be Something…
Panasonic’s Smart Mirror allows you to analyze your skin and to visualize yourself with different makeup or even a different haircut. Cosmetics and hair care companies should be all over this. Imagine the possibilities of visualizing yourself looking absolutely stunning – if only virtually – with perfect makeup and hair. Who wouldn’t want to rush right out and capture the look for real? What if a store front could virtually put the passer-by in its products, and once the customer is inside the store, point them to the products that were featured? Take it a step further and send them a special offer the next week to come back and buy the hat that goes perfectly with the rest of the outfit. It all sounds a little bit “Minority Report-esque”, but it’s closer to becoming true every day. The power of the interconnected world is endless for the marketer.
And then there’s Belty… it’s definitely garnered a lot of news (and snarky comments too!). Belty is a smart belt that slims or expands based upon your waist size at that very moment – whether you’re sitting, standing, or just had a too-large meal. I don’t see Belty taking off, but you never know! If it does however, can’t you just see Belty sending a message to your Weight Watchers app about needing to get back on your diet? Or better yet, pointing you to the Half Yearly Sale at Nordstrom because you’re getting too skinny for your pants?
The “Internet of Things” is Becoming Reality… Is Your Marketing Team Ready?
The internet of things is already changing the way consumers live, and it’s beginning to change the way marketers market. It is critical that marketers think about how they can leverage these new devices and the data they provide. Connecting the dots between devices can become a marketer’s best friend (if they’re ready), or worst enemy (if they’re not).
Are you ready? Ask yourself these 6 questions:
- Are your existing business applications connected to one another? Do your marketing systems “talk” to your finance systems and your sales systems and your customer support systems?
- Do you have first-class data quality and validation technology and practices in place? Real-time, automated processes will only amplify data quality problems.
- Can you connect easily to any new data source as it becomes available, no matter where it lives and no matter what format it is in? The only constant in this new world is the speed of change, so if you’re not building processes and leveraging technologies that can keep up, you’re already missing the boat!
- Are you building real-time capabilities into your processes and technologies? Your systems are going to have to handle real-time sensor data and make real-time decisions based on the data those sensors provide.
- Are your marketing analytics capabilities leading the pack or just getting out of the gate? Are they harnessing all of the rich data available within your organization today? Are you ready to analyze all of the new data sources to determine trends and segment for maximum effect?
- Are you talking to your counterparts in IT, logistics, finance, etc. about the business processes and technologies you are going to need to harness the data that the interconnected world of today, and of the near future, will generate? If not, don’t wait! Begin that conversation ASAP!
Informatica is ready to help you embark on this new and exciting data journey. For some additional perspectives from Informatica on the technologies announced at CES2015, I encourage you to read some of my colleagues’ recent blog posts:
As I have shared in other posts in this series, businesses are using analytics to improve their internal and external facing business processes and to strengthen their “right to win” in the markets where they operate. For pharmaceutical businesses, strengthening the right to win begins and ends with the drug product development lifecycle. I remember, for example, talking several years ago to the CFO of a major pharmaceutical company and having him tell me that his most important financial metrics had to do with reducing the time to market for a new drug and maximizing the period of patent protection. Clearly, the faster a pharmaceutical company gets a product to market, the faster it can begin earning a return on its investment.
Fragmented data challenged analytical efforts
At Quintiles, what the business needed was a system with the ability to optimize the design, execution, quality, and management of clinical trials. Management’s goal was to dramatically shorten the time to complete each trial, including quickly identifying when a trial should be terminated. At the same time, management wanted to continuously comply with regulatory scrutiny from the Food and Drug Administration and to proactively monitor and manage notable trial events.
The problem was that Quintiles’ data was fragmented across multiple systems, which delayed the ability to make business decisions. Like many organizations, Quintiles had data located in multiple incompatible legacy systems. This meant extensive manual data manipulation before the data could become useful. These incompatible legacy systems also impeded data integration and normalization, and prohibited a holistic view across all sources. Making matters worse, management felt that it lacked the ability to take corrective action in a timely manner.
Infosario launched to manage Quintiles analytical challenges
To address these challenges, Quintiles leadership launched the Infosario Clinical Data Management Platform to power its pharmaceutical product development process. Infosario breaks down the silos of information that had prevented combining the massive quantities of scientific and operational data collected during clinical development with tens of millions of real-world patient records and population data. This empowered researchers and drug developers to unlock a holistic view of the data, improving decision-making and ultimately increasing the probability of success at every step in a product’s lifecycle. Quintiles Chief Information Officer Richard Thomas says, “The drug development process is predicated upon the availability of high quality data with which to collaborate and make informed decisions during the evolution of a product or treatment”.
What Quintiles has succeeded in doing with Infosario is the integration of data and processes associated with a drug’s lifecycle. This includes creating a data engine to collect, clean, and prepare data for analysis. The data is then combined with clinical research data and information from other sources to provide a set of predictive analytics. This of course is aimed at impacting business outcomes.
The Infosario solution consists of several core elements
At its core, Infosario provides the data integration and data quality capabilities for extracting and organizing clinical and operational data. The approach combines and harmonizes data from multiple heterogeneous sources into what is called the Infosario Data Factory repository, with the aim of accelerating reporting. Infosario leverages data federation/virtualization technologies to acquire information from disparate sources in a timely manner without affecting the underlying foundational enterprise data warehouse. It also implements rule-based, real-time intelligent monitoring and alerting so the business can tweak and enhance business processes as needed. A “monitoring and alerting layer” sits on top of the data, rapidly providing intelligent alerts to the appropriate stakeholders regarding trial-related issues and milestone events. Here are some more specifics on the components of the Infosario solution:
• Data Mastering provides the capability to link multi-domains of data. This enables enterprise information assets to be actively managed, with an integrated view of the hierarchies and relationships.
• Data Management provides the high performance, scalable data integration needed to support enterprise data warehouses and critical operational data stores.
• Data Services provides the ability to combine data from multiple heterogeneous data sources into a single virtualized view. This allows Infosario to utilize data services to accelerate delivery of needed information.
• Complex Event Processing manages the critical task of monitoring enterprise data quality events and delivering alerts to key stakeholders to take necessary action.
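To picture how a rule-based monitoring and alerting layer like this might work, here’s a minimal sketch in Python. The rules, thresholds, and stakeholder names are my own illustrative assumptions, not details of the actual Infosario implementation:

```python
# Each rule inspects trial metrics and names the stakeholder to notify.
RULES = [
    ("enrollment_behind", lambda m: m["enrolled"] < 0.5 * m["target"], "trial_manager"),
    ("adverse_event_spike", lambda m: m["adverse_events"] > m["ae_threshold"], "safety_officer"),
]

def evaluate(metrics):
    """Return an alert for every rule the current metrics violate."""
    return [
        {"rule": name, "notify": who}
        for name, predicate, who in RULES
        if predicate(metrics)
    ]

# A trial at 40 of 100 enrolled trips the enrollment rule only.
alerts = evaluate({"enrolled": 40, "target": 100,
                   "adverse_events": 3, "ae_threshold": 5})
```

A production event-processing engine would evaluate streams of such events continuously, but the core idea – declarative rules mapping data conditions to stakeholder alerts – is the same.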
According to Richard Thomas, “the drug development process rests on the high quality data being used to make informed decisions during the evolution of a product or treatment. Quintiles’ Infosario clinical data management platform provides researchers and drug developers with the knowledge needed to improve decision-making and ultimately increase the probability of success at every step in a product’s lifecycle.” This enables enhanced data accuracy, timeliness, and completeness. On the business side, it has enabled Quintiles to establish industry-leading information and insight, which in turn has enabled faster, more informed decisions and the ability to take action based on insights. Importantly, this has led to faster time to market and a lengthening of the period of patent protection.
At the DIA conference in Berlin this month, Frits Stulp of Mesa Arch Consulting suggested that IDMP could get the business asking for MDM. After looking at the requirements for IDMP compliance for approximately a year, his conclusion from a business point of view is that MDM has a key role to play in IDMP compliance. A recent press release by Andrew Marr, an IDMP and XEVMPD expert and specialist consultant, also shows support for MDM being ‘an advantageous thing to do’ for IDMP compliance. A previous blog outlined my thoughts on why MDM can turn regulatory compliance into an opportunity, instead of a cost. It seems that others are now seeing this opportunity too.
So why will IDMP lead the business (primarily regulatory affairs) to the conclusion that they need MDM? At its heart, IDMP is a pharmacovigilance initiative whose goal is to uniquely identify all medicines globally and provide rapid access to the details of each medicine’s attributes. If implemented in its ideal state, IDMP will deliver a single, accurate and trusted version of a medicinal product which can be used for multiple analytical and procedural purposes. This is exactly what MDM is designed to do.
Here is a summary of the key reasons why an MDM-based approach to IDMP is such a good fit.
1. IDMP is a data consolidation effort; MDM enables data discovery and consolidation
- IDMP will probably need to populate between 150 and 300 attributes per medicine
- These attributes will be held in 10 to 13 systems, per product.
- MDM (especially with close coupling to Data Integration) can easily discover and collect this data.
2. IDMP requires cross-referencing; MDM has cross-referencing and cleansing as key process steps.
- Consolidating data from multiple systems normally means dealing with multiple identifiers per product.
- Different entities must be linked to each other to build relationships within the IDMP model.
- MDM allows for complex models catering for multiple identifiers and relationships between entities.
3. IDMP submissions must ensure the correct value of an attribute is submitted; MDM has strong capabilities to resolve different attribute values.
- Many attributes will exist in more than one of the 10 to 13 source systems
- Without strong data governance, these values can be (and probably will be) different.
- MDM can set rules for determining the ‘golden source’ for each attribute, and then track the history of these values used for submission.
4. IDMP is a translation effort; MDM is designed to translate
- Submission will need to be within a defined vocabulary or set of reference data
- Different regulators may opt for different vocabularies, in addition to the internal set of reference data.
- MDM can hold multiple values/vocabularies for entities, depending on context.
5. IDMP is a large co-ordination effort; MDM enables governance and is generally associated with higher data consistency and quality throughout an organisation.
- The IDMP scope is broad, so attributes required by IDMP may also be required for compliance to other regulations.
- Accurate compliance needs tracking and distribution of attribute values. Attribute values submitted for IDMP, other regulations, and supporting internal business should be the same.
- Not only is MDM designed to collect and cleanse data, it is equally comfortable for data dispersion and co-ordination of values across systems.
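The “golden source” idea in point 3 can be sketched as a simple survivorship rule: for each attribute, a designated source system wins, and the chosen value is recorded for submission history. The systems and attributes below are hypothetical illustrations; a real MDM hub applies far richer rules, but the principle is the same:

```python
# Illustrative survivorship rules: which source system is trusted
# as the 'golden source' for each attribute.
GOLDEN_SOURCE = {"strength": "regulatory_db", "pack_size": "erp"}

def resolve(attribute, candidates, history):
    """Pick the golden value for an attribute and record it for audit."""
    source = GOLDEN_SOURCE[attribute]
    value = candidates[source]
    history.setdefault(attribute, []).append((source, value))
    return value

history = {}
# Three systems hold the same attribute with slightly different values.
value = resolve("strength",
                {"regulatory_db": "10 mg", "erp": "10mg", "lims": "10.0 mg"},
                history)
```

The audit trail in `history` is what lets you later prove which value was submitted, and from where – exactly the tracking that ongoing IDMP compliance demands.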
Once business users assess the data management requirements and consider the breadth of the IDMP scope, it is no surprise that some of them could be asking for an MDM solution. Even if they do not use the acronym ‘MDM’, they could actually be asking for MDM by capabilities rather than by name.
Given the good technical fit of an MDM approach to IDMP compliance, I would like to put forward three arguments as to why the approach makes sense. There may be others, but these are the ones I feel are most compelling:
1. Better chance to meet tight submission time
There are slightly over 18 months left before the EMA requires IDMP compliance; waiting for final guidance will not provide enough time. With MDM you have a tool to begin the most time-consuming tasks now: data discovery, collection and consolidation. The required XEVMPD data and the draft guidance can serve as a guide for where to focus your efforts.
2. Reduced risk of non-compliance
With fines of up to 5% of revenue at stake in Europe, risking non-compliance could be expensive. Not only will MDM increase your chance of compliance on July 1, 2016, it will also give you a tool to manage your data to ensure ongoing compliance in terms of meeting deadlines for delivering new data and data changes.
3. Your company will have a ready source of clean, multi-purpose product data
Unlike some Regulatory Information Management tools, MDM is not a single-purpose tool. It is specifically designed to provide consolidated, high-quality master data to multiple systems and business processes. This data source could be used to deliver high-quality data to multiple other initiatives, in particular compliance to other regulations, and projects addressing topics such as Traceability, Health Economics & Outcomes, Continuous Process Verification, Inventory Reduction.
So back to the original question – will the introduction of IDMP regulation in Europe result in the business asking IT to implement MDM? Perhaps they will, but not by name. It is still possible that they won’t. However, if you have been struggling to get buy-in for MDM within your organisation and you need to comply with IDMP, then you may be able to find some more allies (potentially with an approved budget) to support you in your MDM efforts.
The other day I ran across an article on CMO.com from a few months ago entitled “Total Customer Value Trumps Simple Loyalty in Digital World”. It’s a great article, so I encourage you to go take a look, but the basic premise is that loyalty does not necessarily equal value in today’s complicated consumer environment.
Customers can be loyal for a variety of reasons as the author Samuel Greengard points out. One of which may be that they are stuck with a certain product or service because they believe there is no better alternative available. I know I can relate to this after a recent series of less-than-pleasant experiences with my bank. I’d like to change banks, but frankly they’re all about the same and it just isn’t worth the hassle. Therefore, I’m loyal to my unnamed bank, but definitely not an advocate.
The proverbial big fish in today’s digital world, according to the author, are customers who truly identify with the brand and who will buy the company’s products eagerly, even when viable alternatives exist. These are the customers who sing the brand’s praises to their friends and family online and in person. These are the customers who write reviews on Amazon and give your product 5 stars. These are the customers who will pay markedly more just because it sports your logo. And these are the customers whose voices hold weight with their peers because they are knowledgeable and passionate about the product. I’m sure we all have a brand or two that we’re truly passionate about.
Total Customer Value in the Pool
My 13-year-old son is a competitive swimmer and will only use Speedo goggles – ever – hands down – no matter what. He wears Speedo t-shirts to show his support. He talks about how great his goggles are and encourages his teammates to try on his personal pair to show them how much better they are. He is a leader on his team, so when newbies come in and see him wearing these goggles and singing their praises, and finishing first, his advocacy holds weight. I’m sure we have owned well over 30 pairs of Speedo goggles over the past 4 years at $20 a pop – and add in the T-shirts and of course swimsuits – we probably have a historical value of over $1000 and a potential lifetime value of tens of thousands (ridiculous, I know!). But if you add in the influence he’s had over others, his value is tremendously more – at least 5X.
This is why data is king!
I couldn’t agree more that total customer value, or even total partner or total supplier value, is absolutely the right approach, and is a much better indicator of value. But in this digital world of incredible data volumes and disparate data sources & systems, how can you really know what a customer’s value is?
The marketing applications you probably already use are great – there are so many great automation, web analytics, and CRM systems around. But what fuels these applications? Your data.
Most marketers think that data is the stuff that applications generate or consume. As if all data is pretty much the same. In truth, data is a raw ingredient. Data-driven marketers don’t just manage their marketing applications, they actively manage their data as a strategic asset.
How are you using data to analyze and identify your influential customers? Can you tell that a customer bought their fourth product from your website, and then promptly tweeted about the great deal they got on it? Even more interesting, can you tell that five of their friends followed the link, one bought the same item, one looked at it but ended up buying a similar item, and one put it in their cart but didn’t buy it because it was cheaper on another website? More importantly, how can you keep this person engaged so they continue their brand preference – so somebody else with a similar brand and product doesn’t swoop in and do it first? And the ultimate question… how can you scale this so that you’re doing it automatically within your marketing processes, with confidence, every time?
All marketers need to understand their data – what exists in your information ecosystem, whether internal or external. Can you even get to the systems that hold the richest data? Do you leverage your internal customer support/call center records? Is your billing/financial system utilized as a key location for customer data? And the elephant in the room… can you incorporate the invaluable social media data that is ripe for marketers to leverage as an automated component of their marketing campaigns?
This is why marketers need to care about data integration…
Even if you do have access to all of the rich customer data that exists within and outside of your firewalls, how can you make sense of it? How can you pull it together to truly understand your customers… what they really buy, who they associate with, and who they influence? If you don’t, then you’re leaving dollars, and more importantly, potential advocacy and true customer value, on the table.
This is why marketers need to care about achieving a total view of their customers and prospects…
And none of this matters if the data you are leveraging is plain incorrect or incomplete. How often have you seen some analysis on an important topic, had that gut feeling that something must be wrong, and questioned the data that was used to pull the report? The obvious data quality errors are really only the tip of the iceberg. Most of the data quality issues that marketers face are either not glaringly obvious enough to catch and correct on the spot, or are baked into an automated process that nobody has the opportunity to catch. Making decisions based upon flawed data inevitably leads to poor decisions.
This is why marketers need to care about data quality.
So, as the article points out, don’t just look at loyalty, look at total customer value. But realize that this is easier said than done without focusing in on your data and ensuring you have all of the right data, at the right place, in the right format, right away.
Now… Brand advocates, step up! Share with us your favorite story. What brands do you love? Why? What makes you so loyal?
From this analysis in “What’s Reasonable Security? A Moving Target,” IAPP extrapolated the best practices from the FTC’s enforcement actions.
While the white paper and article indicate that “reasonable security” is a moving target, they do provide recommendations that will help organizations assess and baseline their current data security efforts. Of particular interest is the focus on data-centric security, from overall enterprise assessment to the careful control of access by employees and third parties. Here are some of the recommendations derived from the FTC’s enforcement actions that call for data-centric security:
- Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
- Limited access policies curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
- Limit employee access to (and copying of) personal information, based on employee’s role.
- Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not practicably be read or reconstructed.
- Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.
How does Data Centric Security help organizations achieve this inferred baseline?
- Data Security Intelligence (Secure@Source, coming Q2 2015) provides the ability to “…identify reasonably foreseeable risks.”
- Data Masking (Dynamic and Persistent Data Masking) provides the controls to limit access of information to employees and 3rd parties.
- Data Archiving provides the means for the secure disposal of information.
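As a rough illustration of the dynamic masking idea above, here’s a minimal role-based sketch in Python. The roles, field names, and masking format are my own illustrative assumptions, not how any particular product behaves:

```python
def mask_ssn(ssn):
    """Hide all but the last four digits of a Social Security number."""
    return "***-**-" + ssn[-4:]

def read_record(record, role):
    """Return a copy of the record, masking fields the role may not see."""
    visible = dict(record)
    if role != "compliance":  # only the compliance role sees raw SSNs
        visible["ssn"] = mask_ssn(record["ssn"])
    return visible

rec = {"name": "Jane Doe", "ssn": "123-45-6789"}
support_view = read_record(rec, "support")        # ssn comes back masked
compliance_view = read_record(rec, "compliance")  # ssn comes back in the clear
```

The point of dynamic (as opposed to persistent) masking is that the stored record is untouched; the mask is applied per request, based on who is asking.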
Other data-centric security controls include encryption for data at rest and in motion, and tokenization for securing payment card data. All of these controls help organizations secure their data, whether a threat originates internally or externally. And based on the never-ending news of data breaches and attacks this year, it is a matter of when, not if, your organization will be significantly breached.
For 2015, “Reasonable Security” will require ongoing analysis of sensitive data and the deployment of reciprocal data centric security controls to ensure that the organizations keep pace with this “Moving Target.”
I ended my previous blog wondering whether awareness of Data Gravity should change our behavior. While Data Gravity adds Value to Big Data, I find that the application of that Value is underexplained.
Exponential growth of data has naturally led us to want to categorize it into facts, relationships, entities, etc. This sounds very elementary. While this happens so quickly in our subconscious minds as humans, it takes significant effort to teach this to a machine.
A friend tweeted this to me last week: I paddled out today, now I look like a lobster. Since this tweet, Twitter has inundated my friend and me with promotions from Red Lobster. It is because the machine deconstructed the tweet: paddled <PROPULSION>, today <TIME>, like <PREFERENCE> and lobster <CRUSTACEANS>. While putting these together, the machine decided that the keyword was lobster. You and I both know that my friend was not talking about lobsters.
You may think that this is just a funny edge case. You can confuse any computer system if you try hard enough, right? Unfortunately, this isn’t an edge case. The 140-character limit has not just changed people’s tweets, it has changed how people talk on the web. More and more information is communicated in smaller and smaller amounts of language, and this trend is only going to continue.
When will the machine understand that “I look like a lobster” means I am sunburned?
I believe the reason that there are not hundreds of companies exploiting machine-learning techniques to generate a truly semantic web is the lack of weighted edges in publicly available ontologies. Keep reading, it will all make sense in about 5 sentences. Lobster and Sunscreen are 7 hops away from each other in dbPedia – way too many to draw any correlation between the two. For that matter, any article in Wikipedia is connected to any other article within about 14 hops, and that’s the extreme. Completely unrelated concepts are often just a few hops from each other.
But by analyzing massive amounts of both written and spoken English text from articles, books, social media, and television, it is possible for a machine to automatically draw a correlation and create a weighted edge between the Lobster and Sunscreen nodes that effectively short-circuits the 7 hops between them. Many organizations are dumping massive amounts of facts without weights into our repositories of total human knowledge, naïvely attempting to categorize everything without realizing that repositories of human knowledge need to mimic how humans use knowledge.
For example – if you hear the name Babe Ruth, what is the first thing that pops to mind? Roman Catholics from Maryland born in the 1800s or Famous Baseball Player?
If you look in Wikipedia today, he is categorized under 28 categories in Wikipedia, each of them with the same level of attachment. 1895 births | 1948 deaths | American League All-Stars | American League batting champions | American League ERA champions | American League home run champions | American League RBI champions | American people of German descent | American Roman Catholics | Babe Ruth | Baltimore Orioles (IL) players | Baseball players from Maryland | Boston Braves players | Boston Red Sox players | Brooklyn Dodgers coaches | Burials at Gate of Heaven Cemetery | Cancer deaths in New York | Deaths from esophageal cancer | Major League Baseball first base coaches | Major League Baseball left fielders | Major League Baseball pitchers | Major League Baseball players with retired numbers | Major League Baseball right fielders | National Baseball Hall of Fame inductees | New York Yankees players | Providence Grays (minor league) players | Sportspeople from Baltimore, Maryland | Vaudeville performers.
Now imagine how confused a machine would get when the distance of unweighted edges between nodes is used as a scoring mechanism for relevancy.
If I were to design an algorithm that uses weighted edges (on a scale of 1-5, with 5 being the highest), the same search would yield a much more obvious result:
1895 births (1) | 1948 deaths (1) | American League All-Stars (5) | American League batting champions (5) | American League ERA champions (5) | American League home run champions (5) | American League RBI champions (5) | American people of German descent (1) | American Roman Catholics (1) | Babe Ruth (5) | Baltimore Orioles (IL) players (4) | Baseball players from Maryland (3) | Boston Braves players (4) | Boston Red Sox players (5) | Brooklyn Dodgers coaches (3) | Burials at Gate of Heaven Cemetery (1) | Cancer deaths in New York (1) | Deaths from esophageal cancer (1) | Major League Baseball first base coaches (2) | Major League Baseball left fielders (4) | Major League Baseball pitchers (4) | Major League Baseball players with retired numbers (4) | Major League Baseball right fielders (4) | National Baseball Hall of Fame inductees (5) | New York Yankees players (5) | Providence Grays (minor league) players (2) | Sportspeople from Baltimore, Maryland (2) | Vaudeville performers (1).
Now the machine starts to think more like a human. The above example forces us to ask ourselves about the relevancy (that is, the value) of the response. This is where I think Data Gravity becomes relevant.
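The weighted-edge idea can be made concrete with a toy graph search. This is a minimal illustration only: the node names, the 1-5 edge weights, and the scoring rule (multiplying normalized weights along a path) are invented for the example, not taken from any real ontology.

```python
# Toy sketch of relevancy scoring over weighted edges.
# All node names and 1-5 weights are illustrative, not from a real ontology.
from heapq import heappush, heappop

def strongest_path(graph, start, goal):
    """Best relevancy score between two nodes, where a path's score is the
    product of its normalized edge weights (weight / 5). Higher is better;
    weakly related concepts decay toward zero, however few hops apart."""
    best = {start: 1.0}
    queue = [(-1.0, start)]
    while queue:
        neg_score, node = heappop(queue)
        score = -neg_score
        if node == goal:
            return score
        for neighbor, weight in graph.get(node, []):
            candidate = score * (weight / 5.0)
            if candidate > best.get(neighbor, 0.0):
                best[neighbor] = candidate
                heappush(queue, (-candidate, neighbor))
    return 0.0

graph = {
    "Babe Ruth": [("Famous Baseball Player", 5), ("Roman Catholics from Maryland", 1)],
    "Famous Baseball Player": [("New York Yankees", 5)],
    "Roman Catholics from Maryland": [("Maryland", 2)],
}

print(round(strongest_path(graph, "Babe Ruth", "New York Yankees"), 2))  # 1.0
print(round(strongest_path(graph, "Babe Ruth", "Maryland"), 2))          # 0.08
```

With unweighted edges both targets sit exactly two hops away and would score identically; the weights are what let the machine prefer the baseball reading over the demographic one.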
You can contact me on twitter @bigdatabeat with your comments.
Citrix: You may not realize you know them, but chances are pretty good that you do. And chances are also good that we marketers can learn something about achieving fortune teller-like marketing from them!
Citrix is the company that brought you GoToMeeting and a whole host of other mobile workspace solutions that provide virtualization, networking and cloud services. Their goal is to give their 100 million users in 260,000 organizations across the globe “new ways to work better with seamless and secure access to the apps, files and services they need on any device, wherever they go.”
Citrix is a company that has been imagining and innovating for over 25 years and, over that time, has seen a complete transformation in their market – virtual solutions and cloud services didn’t even exist when they were founded, and now those services are the backbone of their business. Their corporate video proudly states that the only constant in this world is change, and that they strive to embrace the “yet to be discovered.”
Having worked with them quite a bit over the past few years, we have seen first-hand how Citrix has demonstrated their ability to embrace change.
Back in 2011, it became clear to Citrix that they had a data problem, and that they would have to make some changes to stay ahead in their hyper-competitive market. Sales & Marketing had identified data as their #1 concern – their data was incomplete, inaccurate, and duplicated in their CRM system. And with so many different applications in the organization, it was quite difficult to know which application or data source had the most accurate and up-to-date information. They realized they needed a single source of truth – one system of reference where all of their global data management practices could be centralized and consistent.
The marketing team realized that they needed to take control of the solution to their data concerns, as their success truly depended upon it. They brought together their IT department and their systems integration partner, Cognizant, to determine a course of action. Together they forged an overall data governance strategy that would empower the marketing team to manage data centrally – to be responsible for their own success.
As a key element of that data governance and management strategy, they determined that they needed a Master Data Management (MDM) solution to serve as their Single Trusted Source of Customer & Prospect Data. They did a great deal of research into industry best practices and technology solutions, and selected Informatica as their MDM partner. Citrix’s environment is not unlike that of most marketing organizations: they leverage internal data sources and systems like CRM (Salesforce) and marketing automation (Marketo); their systems live all over the enterprise, both on premises and in the cloud; and they use analytical tools to analyze and dashboard their results. The difference is that they are now able to capture and distribute better customer and prospect data to and from these systems to achieve even better results.
Citrix strategized and implemented their Single Trusted Source of Customer & Prospect Data solution in a phased approach throughout 2013 and 2014, and we believe that what they’ve been able to accomplish during that short period of time has been nothing short of phenomenal. Here are the highlights:
- Used Informatica MDM to provide clean, consistent and connected channel partner, customer and prospect data and the relationships between them for use in operational applications (SFDC, BI Reporting and Predictive Analytics)
- Recognized 20% increase in lead-to-opportunity conversion rates
- Realized 20% increase in marketing team’s operational efficiency
- Achieved 50% increase in quality of data at the point of entry, and a 50% reduction in the rate of junk and duplicate data for prospects, existing accounts and contacts
- Delivered a better channel partner and customer experience by renewing all of a customer’s user licenses across product lines at one time and making it easy to identify whitespace opportunities to up-sell more user licenses
That is huge! Can you imagine the impact on your own marketing organization of a 20% increase in lead-to-opportunity conversion? Can you imagine the impact of spending 20% less time questioning and manually massaging data to get the information you need? That’s game changing!
Because Citrix now has great data and great resulting insight, they have been able to take the next step and embark on new fortune teller-like marketing strategies. As Citrix’s Dagmar Garcia discussed during a recent webinar, “We monitor implicit and explicit behavior of transactional leads and accounts, and then we leverage these insights and previous behaviors to offer net new offers and campaigns to our customers and prospects… And it’s all based on the quality of data we have within our database.”
I encourage you to take a few minutes to listen to Dagmar discuss Citrix’s project on a recent webinar. In it, she dives deeper into the project, its scope and timeline, and what she means by “fortune telling abilities”. Also, take a look at the customer story section of the Informatica.com website for the PDF case study. And, if you’re in the mood to learn more, you can download a complimentary copy of the 2014 Gartner Magic Quadrant for MDM of Customer Data Solutions.
Hats off to you, Citrix, and we look forward to working with you to continue to change the game even more in the coming months and years!
With the increasing importance of enterprise analytics, the question becomes who should own the analytics and data agenda. This question really matters today because, according to Thomas Davenport, “business processes are among the last remaining points of differentiation.” For this reason, Davenport even suggests that businesses that create a sustainable right to win use analytics to “wring every last drop of value from their processes”.
Is the CFO the logical choice?
In talking with CIOs about both enterprise analytics and data, they are clear that they do not want to become their company’s data steward. They insist instead that they want to be an enabler of the analytics and data function. So what business function, then, should own enterprise analytics and data? Last week an interesting answer came from a CFO Magazine article by Frank Friedman. Frank contends that CFOs are “the logical choice to own analytics and put them to work to serve the organization’s needs”.
To justify his position, Frank made the following claims:
- CFOs own most of the unprecedented quantities of data that businesses create from supply chains, product processes, and customer interactions
- Many CFOs already use analytics to address their organization’s strategic issues
- CFOs uniquely can act as a steward of value and an impartial guardian of truth across the organization. This gives them the credibility and trust needed when analytics produce insights that debunk currently accepted wisdom
Frank contends as well that owning the analytics agenda is a good thing because it allows CFOs to expand their strategic leadership role in doing the following:
- Growing top line revenue
- Strengthening their business ties
- Expanding the CFO’s influence outside the finance function
Frank suggests as well that analytics empowers the CFO to exercise more centralized control of operational business decision making. The question is what do other CFOs think about Frank’s position?
CFOs clearly have an opinion about enterprise analytics and data
A major Retail CFO says that finance needs to own “the facts for the organization” – the metrics and KPIs. And while he honestly admits that finance organizations in the past have not used data well, he claims finance departments need to make the time to become truly data centric. He said, “I do not consider myself a data expert, but finance needs to own enterprise data and the integrity of this data.” This CFO claims as well that “finance needs to use data to make sure that resources are focused on the right things; decisions are based on facts; and metrics are simple and understandable.” A Food and Beverage CFO agrees, saying that almost every piece of data is financial in one way or another: CFOs need to manage all of this data since they own operational performance for the enterprise, and they should own the key performance indicators of the business.
CIOs should own data, data interconnect, and system selection
A Healthcare CFO, however, said he wants the CIO to own data systems, data interconnect, and system selection, while the finance organization is the recipient of data. “CFOs have a major stake in data. CFOs need to dig into operational data to be able to relate operations to internal accounting and to analyze things like costs versus price.” He said that “CFOs can’t function without good operational data”.
An Accounting Firm CFO agreed with the Healthcare CFO by saying that CIOs are a means to get data. She said that CFOs need to make sense out of data in their performance management role. CFOs, therefore, are big consumers of both business intelligence and analytics. An Insurance CFO concurred by saying CIOs should own how data is delivered.
CFOs should be data validators
The Insurance CFO said, however, that CFOs need to be validators of data and reports. They should, as a result, in his opinion be very knowledgeable about BI and analytics. In other words, CFOs need to be the Underwriters Laboratories (UL) of corporate data.
Now it is your chance
So the question is what do you believe? Does the CFO own analytics, data, and data quality as a part of their operational performance role? Or is it a group of people within the organization? Please share your opinions below.
California reported a total of 167 data breaches in 2013, up 28 percent from 2012. Two major data breaches caused most of this uptick: the Target attack reported in December 2013 and the LivingSocial attack that occurred in April 2013. This year, you can add the Home Depot data breach to that list, as well as the recent breach at the US Post Office.
So, what the heck is going on? And how does this impact data integration? Should we be concerned as we place more and more data on public clouds, or within big data systems?
Almost all of these breaches were made possible by traditional systems whose security technology and security operations fell far enough behind that outside attackers found a way in. You can count on many more of these attacks, as enterprises and governments don’t look at security as what it is: an ongoing activity that may require massive and systemic changes to make sure the data is properly protected.
As enterprises and government agencies stand up cloud-based systems, and new big data systems, either inside (private) or outside (public) of the enterprise, there are some emerging best practices around security that those who deploy data integration should understand. Here are a few that should be on the top of your list:
First, start with Identity and Access Management (IAM) and work your way backward. These days, most cloud and non-cloud systems are complex distributed systems, which makes IAM clearly the best security model and best practice to follow with the emerging use of cloud computing.
The concept is simple: provide a security approach and technology that enables the right individuals to access the right resources, at the right times, for the right reasons. The concept follows the principle that everything and everyone gets an identity. This includes humans, servers, APIs, applications, data, etc. Once identities are established and verified, it’s just a matter of defining which identities can access other identities, and creating policies that define the limits of that relationship.
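As a sketch of that principle, here is a minimal default-deny policy check. The identity names and the policy shape are invented for illustration; this is not any particular IAM product’s API.

```python
# Minimal sketch of the "everything gets an identity" model described above.
# Identity names and the policy structure are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    subject: str   # identity doing the accessing (human, server, API, app)
    resource: str  # identity being accessed (e.g., a data set)
    action: str    # what this relationship permits

POLICIES = [
    Policy("etl-server", "customer-data", "read"),
    Policy("analyst-jane", "customer-data", "read"),
    Policy("etl-server", "staging-data", "write"),
]

def is_allowed(subject: str, resource: str, action: str) -> bool:
    """Default-deny: access exists only if an explicit policy grants it."""
    return any(p.subject == subject and p.resource == resource and p.action == action
               for p in POLICIES)

print(is_allowed("etl-server", "customer-data", "read"))    # True
print(is_allowed("analyst-jane", "customer-data", "write")) # False
```

The design choice worth noticing is the default-deny stance: every relationship between identities must be granted explicitly, which is exactly what makes the model auditable.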
Second, work with your data integration provider to identify solutions that work best with their technology. Most data integration solutions address security in one way, shape, or form. Understanding those solutions is important to secure data at rest and in flight.
Finally, splurge on monitoring and governance. Many of the issues behind this growing number of breaches stem from system managers’ inability to spot and stop attacks. Creative approaches to monitoring system and network utilization, as well as data access, will allow those in IT to spot most attacks and correct the issues before they ‘go nuclear.’ Typically, an increasing number of breach attempts leads up to the complete breach.
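That monitoring idea can be illustrated with a simple sliding-window alert on failed access attempts. The window size, threshold, and identity names here are invented for the sketch; real monitoring would of course be far richer.

```python
# Illustrative sketch: flag an identity whose failed-access attempts spike
# inside a time window, catching the build-up before the complete breach.
from collections import deque

class BreachAttemptMonitor:
    def __init__(self, window_seconds=60, threshold=5):
        self.window = window_seconds
        self.threshold = threshold
        self.failures = {}  # identity -> timestamps of recent failures

    def record_failure(self, identity, timestamp):
        """Record one failed access; return True when an alert should fire."""
        events = self.failures.setdefault(identity, deque())
        events.append(timestamp)
        # Drop failures that have aged out of the window.
        while events and timestamp - events[0] > self.window:
            events.popleft()
        return len(events) >= self.threshold

monitor = BreachAttemptMonitor(window_seconds=60, threshold=5)
alerts = [monitor.record_failure("unknown-host", t) for t in (0, 5, 10, 15, 20)]
print(alerts)  # the fifth failure inside the window trips the alert
```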
The issue and burden of security won’t go away. Systems will continue to move to public and private clouds, and data will continue to migrate to distributed big data environments. And that means the need for data integration and data security will continue to explode.
If you’ve wondered why so many companies are eager to control data storage, the answer can be summed up in a simple term: data gravity. Ultimately, where data is determines where the money is. Services and applications are nothing without it.
Dave McCrory introduced his idea of Data Gravity with a blog post back in 2010. The core idea was – and is – interesting. More recently, Data Gravity featured in this year’s EMC World keynote. But, beyond the observation that large or valuable agglomerations of data exert a pull that tends to see them grow in size or value, what is a recognition of Data Gravity actually good for?
As a concept, Data Gravity seems closely associated with the current enthusiasm for Big Data. And, like Big Data, the term’s real-world connotations can be unhelpful almost as often as they are helpful. Big Data exhibits at least three characteristics: Volume, Velocity, and Variety. Various other V’s, including Value, are mentioned from time to time, but with less consistency. Yet Big Data’s name says it’s all about size, as if the speed with which data must be ingested, processed, or excreted mattered less, and the complexity and diversity of the data didn’t matter at all.
On its own, the size of a data set is unimportant. Coping with lots of data certainly raises some not-insignificant technical challenges, but the community is actually doing a good job of coming up with technically impressive solutions. The interesting aspect of a huge data set isn’t its size, but the very different modes of working that become possible when you begin to unpick the complex interrelationships between data elements.
Sometimes, Big Data is the vehicle by which enough data is gathered about enough aspects of enough things from enough places for those interrelationships to become observable against the background noise. Other times, Big Data is the background noise, and any hope of insight is drowned beneath the unending stream of petabytes.
To a degree, Data Gravity falls into the same trap. More gravity must be good, right? And more mass leads to more gravity. Mass must be connected to volume, in some vague way that was explained when I was 11, and which involves STP. Therefore, bigger data sets have more gravity. This means that bigger data sets are better data sets. That assertion is clearly nonsense, but luckily, it’s not actually what McCrory is suggesting. His arguments are more nuanced than that, and potentially far more useful.
Instinctively, I like that the equation attempts to move attention away from ‘the application’ toward the pools of data that support many, many applications at once. The data is where the potential lies. Applications are merely the means to unlock that potential in various ways. So maybe notions of Potential Energy from elsewhere in Physics need to figure here.
But I’m wary of the emphasis given to real numbers that are simply the underlying technology’s vital statistics: network latency, bandwidth, request sizes, numbers of requests, and the rest. I realize that these are the measurable things we have, but I feel that more abstract notions of value need to figure just as prominently.
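To make the worry concrete, here is one speculative, Newton-style reading built from those vital statistics. To be clear, this is my own sketch, not McCrory’s published formulation: the “distance” stand-in (latency over bandwidth) and the units are assumptions.

```python
# Speculative Newton-style sketch of Data Gravity from measurable statistics.
# NOT McCrory's published formula; the 'distance' stand-in is an assumption.
def data_gravity(data_mass_gb, app_mass_gb, latency_ms, bandwidth_mbps):
    """Attraction grows with the 'masses' of data and application, and
    shrinks as the network between them gets slower."""
    distance = latency_ms / bandwidth_mbps  # assumed proxy for separation
    return (data_mass_gb * app_mass_gb) / (distance ** 2)

# Same data set and application, but one network link is 10x slower:
near = data_gravity(1000, 5, latency_ms=1, bandwidth_mbps=100)
far = data_gravity(1000, 5, latency_ms=10, bandwidth_mbps=100)
print(round(near / far))  # a 10x slower link cuts the pull 100-fold
```

Even in this toy form, the sketch illustrates the complaint above: every input is a vital statistic of the plumbing, and nothing in the formula captures what the data is actually worth.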
So I’m left reaffirming my original impression that Data Gravity is “interesting”. It’s also intriguing, and I keep feeling that it should be insightful. I’m just not — yet — sure exactly how. Is a resource with a Data Gravity of 6 twice as good as a resource with a Data Gravity of 3? Does a data set with a Data Gravity of 15 require three times as much investment/infrastructure/love as a data set scoring a humble 5? It’s unlikely to be that simple, but I do look forward to seeing what happens as McCrory begins to work with the parts of our industry that can lend empirical credibility to his initial dabbling in mathematics.
If real numbers show that the equations stand up, all we then need to do is work out what the numbers mean. Should an awareness of Data Gravity change our behavior, should it validate what gut feel led us to do already, or is it just another ‘interesting’ and ultimately self-evident number that doesn’t take us anywhere?
I don’t know, but I will continue to explore. You can contact me on twitter @bigdatabeat