Tag Archives: data

The Super Bowl of Data-Driven Marketing Potential

I absolutely love football, so when the Super Bowl came to our hometown Phoenix, it was my paradise!  Football on every.single.channel.  Current and former NFL players were everywhere – I ate breakfast next to Howie Long and pumped gas next to Tony Romo.  ESPN & NFL Network analysts were commentating from blocks away.  Even our downtown was transformed into a giant celebration of football.

People often talk about the “Super Bowl of Marketing”, referring to the advertising extravaganza and the millions of dollars spent on hilarious (and sometimes not) commercials.  But spending so much time immersed in the Super Bowl festivities got me thinking about one of my other fascinations… data!  It was the Super Bowl of data too!

On Sunday morning, before the big game (or the Superb Owl, as Stephen Colbert would say), I got to witness first-hand the data-driven marketing potential at the NFL Experience in Downtown Phoenix.  The NFL did an amazing job putting on this event – it was truly exceptional, with something for everyone.

Once we purchased our tickets, we decided to take the kids to do some Play 60 activities.  Before they could participate, we were shuttled to a bank of computers to “get a wristband” and to sign a waiver.  I’m sure the lawyers made sure that everyone participating in anything physical wouldn’t sue the NFL or the sponsors if they got a hangnail or twisted ankle. But the data ready marketer in me realized that these wristbands were much more than a liability waiver.  They were also a data treasure map!

To get the wristband, you had to provide the NFL (and their sponsors) with your demographic & contact information, your favorite teams, your children’s names and ages, and give them permission to contact you.  You also received an emailed QR code that you could use to unlock certain activities throughout the Experience.

As we moved around the Experience, they scanned our wristband or QR code at each activity.  So now the NFL knows that we have 3 children and their names and ages.  They now know our two youngest love to play football (because they participated in a flag football Play 60 clinic).  They now know that we are huge Denver Broncos fans and purchased a few new jerseys of our favorite players at their shop (where they again scanned our QR code for a small discount).  They now know we use AT&T wireless and our phone numbers.  They know that our boys really want to improve the speed of their throws because they went through the Peyton Manning Nationwide arm speed and throw accuracy activity five different times… and that nobody ever got over 35MPH.  And they also now know that none of us will ever become great kickers because we all seriously shanked our field goal tries!  And we happily gave them all our data because they provided us with a meaningful service – a really fun, family experience.

Even better for them, for the first time, I actually logged into the NFL Mobile app and turned on location permissions so that I could get real-time alerts about what was going on in the area.  Since I use the app all the time, that’s a lot of future data that I’ve now given them.

GMC sponsored the Experience and had a huge space in the main area to show off their new car lineup, and they definitely took full advantage of the data provided.  They held a car giveaway that required you to scan your NFL QR code to start the process, and then answer several questions about your vehicle likes and future purchase plans.  You then had to go around to your favorite three vehicles and answer questions about their amazing features (D, all of the above, was the answer of course!).  After you visited your favorite vehicles, you took your QR code back to see if you won.  My 13-year-old was hopeful that we were going to win him a new Denali, but sadly, we did not!  And sadly for him, had we been fortunate enough to win, he wouldn’t be driving it anyway!

I waited a few days to write this blog because I was hopeful that I would receive some sort of personalized experience from the NFL that would blow my socks off.  I’m not sure what technology the NFL & GMC marketing teams use, or whether they are data ready.  If they were, though, I would have hoped they would already have engaged me with a personalized experience based on the data I have given them.

GMC has sent me a few emails, one with a photo that was taken green-screen style of my kids.  And yes, I’ve downloaded it and have a photo of them with the GMC logo loud and proud on my desktop.

But other than that, nothing very exciting as of yet, and definitely nothing innovative or engaging.  But I truly hope that the NFL & GMC use this data to provide me with a better, personalized experience.  Isn’t that why our consumers freely offer their information?  To receive something of value back.

Here are a few ideas for you NFL:

  • Special discounts on Denver Broncos apparel
  • Alert from the NFL ticket exchange the next time the Broncos play the Cardinals in Arizona, and 5 tickets become available
  • Information about how to sign up for NFL kids clinics
  • Sorry GMC, I’m not quite sure what to suggest because we just bought a new Toyota a few months ago  (but you know that I’m not in the market for a new car right now because I gave you that information too).

Thank you for a really wonderful experience NFL & GMC!  In this age of data-driven personalization, I am anxiously awaiting your next move!  Now, are you ready for some football (sorry, couldn’t resist!)?  But in all seriousness, are you ready to reach your data-driven marketing potential?

Will this be the beginning of the Super Bowl of Data Ready Marketing?  As an NFL fan and consumer, I know I’m ready!

Are you ready?  Please tell us in our survey about data ready marketing. The results are coming out soon so don’t miss your chance to be a part. You can find the link here.

Also, follow me on Twitter – The Data Ready Marketer (@StephanieABest) for some of the latest & greatest news and insights on the world of data ready marketing.

And stay tuned because we have several new Data Ready Marketing pieces coming out soon – InfoGraphics, eBooks, SlideShares, and more!

Posted in CMO, Data First

Is it the CDO or CAO or Someone Else?

A month ago, I shared that Frank Friedman believes CFOs are “the logical choice to own analytics and put them to work to serve the organization’s needs”. Even though many CFOs are increasingly taking on what could be considered an internal CEO or COO role, many readers protested my post reviewing Frank Friedman’s argument. At the same time, CIOs have been very clear with me that they do not want to personally become their company’s data steward. So the question becomes: should companies create a CDO or CAO role to lead this important function? And if so, how common are these two roles anyway?

Regardless of eventual ownership, extracting value out of data is becoming a critical business capability. It is clear that data scientists should not be shoehorned into the traditional business analyst role. Data scientists have the unique ability to derive mathematical models “for the extraction of knowledge from data” (Data Science for Business, Foster Provost, 2013, pg 2). For this reason, Thomas Davenport claims that data scientists need to be able to network across an entire business and work at the intersection of business goals, constraints, processes, available data, and analytical possibilities. Given this, many organizations today are starting to experiment with the notion of having either a chief data officer (CDO) or a chief analytics officer (CAO). The open question is: should an enterprise have a CDO, a CAO, or both? And just as important, where should each of these roles report in the organization?

Data policy versus business questions

In my opinion, it is critical to first look into the substance of each role before answering this question. The CDO should be about ensuring that information is properly secured, stored, transmitted, or destroyed.  This includes, according to COBIT 5, ensuring that there are effective security and controls over information systems. To do this, procedures need to be defined and implemented to ensure the integrity and consistency of information stored in databases, data warehouses, and data archives. According to COBIT 5, data governance requires the following four elements:

  • Clear information ownership
  • Timely, correct information
  • Clear enterprise architecture and efficiency
  • Compliance and security

To me, these four elements should be the essence of the CDO role. Having said this, the CAO is related but very different in terms of the nature of the role and the business skills required. The CRISP-DM model points out just how different the two roles are. According to CRISP-DM, the CAO role should be focused upon business understanding, data understanding, data preparation, data modeling, and data evaluation. As such, the CAO is focused upon using data to solve business problems, while the CDO is about protecting data as a business-critical asset. I was living in Silicon Valley during the “Internet Bust”. I remember seeing very few job postings, and the few that existed said they wanted a developer who could also act as a product manager and do some marketing as a part-time activity. This of course made no sense. I feel the same way about the idea of combining the CDO and CAO. One is about compliance and protecting data; the other is about solving business problems with data. Peanut butter and chocolate may work in a Reese’s cup, but it will not work here—the orientations are too different.

So which business leader should own the CDO and CAO?

Clearly, having two more C’s in the C-suite creates a more crowded list of corporate officers. Some have even said that this will extend what is called senior executive bloat. And, of course, how do these new roles work with and impact the CIO? The answer depends on the organization’s culture, of course. However, where there isn’t an executive staff office, I suggest that these roles go to different places. Many companies already have their CIO function reporting to finance. Where this is the case, it is important to determine whether a COO function is in place. The COO could clearly own the CDO and CAO functions because they have a significant role in improving processes and capabilities. Where there isn’t a COO function and the CIO reports to the CEO, I think you could have the CDO report to the CIO even though CIOs say they do not want to be data stewards. This could be a third function in parallel to the VP of Ops and VP of Apps. In this case, I would have the CAO report to one of the following: the CFO, Strategy, or IT. Again, this all depends on current organizational structure and corporate culture. Regardless of where it reports, the important thing is to focus the CAO on an enterprise analytics capability.

Related Blogs

Should we still be calling it Big Data?

Is Big Data Destined To Become Small And Vertical?

Big Data Why?

What is big data and why should your business care?

Author Twitter: @MylesSuer

Posted in Big Data, CIO

Garbage In, Garbage Out? Don’t Take Data for Granted in Analytics Initiatives!

The verdict is in. Data is now broadly perceived as a source of competitive advantage. We all feel the heat to deliver good data. It is no wonder organizations view Analytics initiatives as highly strategic. But the big question is, can you really trust your data? Or are you just creating pretty visualizations on top of bad data?

We also know there is a shift towards self-service Analytics. But did you know that according to Gartner, “through 2016, less than 10% of self-service BI initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business”?1 This means that you may actually show up at your next big meeting with data that contradicts your colleague’s data.  Perhaps you are not working off of the same version of the truth. Maybe you have siloed data on different systems that are not working in concert. Or is your definition of ‘revenue’ or ‘leads’ different from your colleague’s?

So are we taking our data for granted? Are we just assuming that it’s all available, clean, complete, integrated and consistent?  As we work with organizations to support their Analytics journey, we often find that the harsh realities of data are quite different from perceptions. Let’s further investigate this perception gap.

For one, people may assume they can easily access all data. In reality, if data connectivity is not managed effectively, we often need to beg, borrow, and steal to get the right data from the right person – if we are lucky. In less fortunate scenarios, we may need to settle for partial data or a cheap substitute for the data we really wanted. And you know what they say: the only thing worse than no data is bad data. Right?

Another common misperception is: “Our data is clean. We have no data quality issues”.  Wrong again.  When we work with organizations to profile their data, they are often quite surprised to learn that their data is full of errors and gaps.  One company recently discovered within one minute of starting their data profiling exercise, that millions of their customer records contained the company’s own address instead of the customers’ addresses… Oops.
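As a toy illustration of the profiling exercise above, a first-pass check can be as simple as comparing each record’s address against the company’s own. This is a hypothetical sketch – the field names, IDs, and addresses are all invented:

```python
# A minimal, hypothetical profiling check: flag "customer" records that
# actually carry the company's own address (all values are invented).
COMPANY_ADDRESS = "100 Main St, Springfield"

customers = [
    {"id": 1, "name": "Ana", "address": "42 Oak Ave, Mesa"},
    {"id": 2, "name": "Ben", "address": "100 Main St, Springfield"},
    {"id": 3, "name": "Cal", "address": "100 Main St, Springfield"},
]

def profile_addresses(records, company_address):
    """Return the IDs of records whose address is really the company's own."""
    return [r["id"] for r in records if r["address"] == company_address]

flagged = profile_addresses(customers, COMPANY_ADDRESS)
print(f"{len(flagged)} of {len(customers)} records share the company address")
# prints "2 of 3 records share the company address"
```

Real profiling tools go far beyond a single rule like this, of course, but even a one-rule pass like the above is often enough to surface the kind of surprise described here within minutes.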

Another myth is that all data is integrated.  In reality, your data may reside in multiple locations: in the cloud, on premise, in Hadoop and on mainframe and anything in between. Integrating data from all these disparate and heterogeneous data sources is not a trivial task, unless you have the right tools.

And here is one more consideration to mull over. Do you find yourself manually hunting down and combining data to reproduce the same ad hoc report over and over again? Perhaps you often find yourself doing this in the wee hours of the night? Why reinvent the wheel? It would be more productive to automate the process of data ingestion and integration for reusable and shareable reports and Analytics.

Simply put, you need great data for great Analytics. We are excited to host Philip Russom of TDWI in a webinar to discuss how data management best practices can enable successful Analytics initiatives. 

And how about you?  Can you trust your data?  Please join us for this webinar to learn more about building a trust-relationship with your data!

  1. Gartner Report, ‘Predicts 2015: Power Shift in Business Intelligence and Analytics Will Fuel Disruption’; Authors: Josh Parenteau, Neil Chandler, Rita L. Sallam, Douglas Laney, Alan D. Duncan; Nov 21 2014
Posted in Architects, Business/IT Collaboration, Data Governance, Data Integration, Data Warehousing

Marketers, Are You Ready? The Impending Data Explosion from the New Gizmos and Gadgets Unveiled at CES

What Marketers Can Learn from CES

This is the first year in a very long time that I wasn’t in Las Vegas during CES.  Although it’s not quite as exciting as actually being there, I love that the Twitter-verse and industry news sites kept us all up to date on the latest and greatest announcements.  Now that CES2015 is all wrapped up, I find myself thinking about the potential of some very interesting announcements – from the wild to the wonderful to the leave-you-wondering!  What strikes me isn’t how useful these new gizmos and gadgets will likely be to me and my fellow consumers, but what incredible new data sources they will offer to my fellow marketers.

One thing is for sure… the connected “Internet of Things” is indeed here.  It’s no longer just a vision.  Sure, we’re just seeing the early stages, but it’s becoming more and more mainstream by the day.  And as marketers, we have so much opportunity ahead of us!

I ran across an interesting video interview on the CES show floor with Jack Smith from GroupM on Adweek.com.  Jack says that “data from sensors will have a bigger impact, longer term, than the Internet itself.”  That is a lofty statement, and I’m not sure I’d go quite that far yet, but I absolutely agree with his premise… this new world of connectivity is already shifting marketing, and it will almost certainly radically change the way we market in the near future.

Riding the Data Explosion (Literally)

The Connected Cycle is one of the announcements that I find intriguing as a marketer. In short, it’s a bike pedal equipped with GPS and GPRS sensors that “monitor your movements and act as a basic fitness tracker.”  It’s being positioned as a way to track stolen bicycles – a massive problem, particularly in Europe – with the side benefit of being a powerful fitness tracker.  It may not be as sexy as some other announcements, but I think there is buried treasure in devices like these.

Imagine how powerful that data would be to a sporting goods retailer.  What if the rider of that bicycle had opted into a program that allowed the retailer to track their activity in exchange for highly targeted offers?

Let’s say the rider is nearing one of your stores and it’s a colder-than-usual day.  Perhaps you could push an offer to their smartphone for some neoprene booties.  Or let’s say that, based on their activity patterns, the rider appears to be stepping up their activity and riding more frequently, suggesting they may be ready for a race you are sponsoring in the area in a few months.  Perhaps you could push them an inspirational message saying how well they’re progressing and ask whether they’ve thought about signing up for the big race – with a special incentive, of course.

The segmentation possibilities are endless, and the analytics that could be done on the data leaves the data-driven marketer salivating!

Home Automation Meets Business Automation

There were numerous announcements about the connected “house of the future”, and it’s clear that we are just at the beginning of the home automation wave.  Several of the big dogs like Samsung, Google, and Apple are building or buying automation hub platforms, so it’s going to be easier and easier to connect appliances and other home devices to one another, and also to mobile technology and wearables.  As marketers, there is incredible potential to tap into this.  Imagine the possibility of interconnecting your customers’ home automation systems with your own marketing automation systems.  Marketers will soon be able to literally serve up offers based upon things that are occurring in the home in real time.

Oh no, your teenage son finished off all but the last drop of milk (and put the almost-empty jug back in the fridge without a second thought)!  Not to worry, you’ve linked your refrigerator’s sensor data with your favorite grocery store.  An alert is sent asking if you want more milk, and oh by the way, your shopping patterns indicate you may be running out of your son’s favorite cereal too, so it offers you a special discount if you add a box to your order.  Oh yeah, of course he was complaining about being out just yesterday!  And voilà, a gallon of milk and some Cinnamon Toast Crunch magically arrives at your door by the end of the day.  Heck, it will probably arrive within an hour via a drone if Amazon has anything to say about it!  No manual business processes whatsoever.  It’s your appliance’s sensors talking to your customer data warehouse, which is talking to your marketing automation system, which is talking to a mobile app, which is talking to an ordering system, which is talking to a payment system, which is talking to a logistics/delivery system.  That is, of course, if your internal processes are ready!
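That chain of systems can be sketched as a simple event-driven flow. This is a hypothetical illustration – the household ID, item names, and the 10% threshold are all invented:

```python
# Hypothetical sketch of the sensor-to-offer chain: a fridge sensor event
# flows through purchase history to a personalized offer, with no manual step.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    household_id: str
    item: str
    remaining_pct: float  # fraction of the item left in the fridge

# Stand-in for the customer data warehouse: items this household buys often.
purchase_history = {"hh-17": ["milk", "cinnamon cereal"]}

def handle_event(event: SensorEvent) -> list[str]:
    """The 'marketing automation' step: turn a low-inventory reading into offers."""
    offers = []
    if event.remaining_pct < 0.10:  # invented reorder threshold
        offers.append(f"Reorder {event.item}?")
        # Cross-sell based on shopping patterns from the warehouse.
        for item in purchase_history.get(event.household_id, []):
            if item != event.item:
                offers.append(f"Discount on {item} if added to this order")
    return offers

print(handle_event(SensorEvent("hh-17", "milk", 0.05)))
# ['Reorder milk?', 'Discount on cinnamon cereal if added to this order']
```

In a real deployment each hop – warehouse, marketing automation, mobile app, ordering, payment, logistics – would be a separate integrated system; the point of the sketch is simply that nothing between the sensor reading and the offer is manual.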

Some of the More Weird and Wacky, But There May Just Be Something…

Panasonic’s Smart Mirror allows you to analyze your skin and visualize yourself with different makeup or even a different haircut.  Cosmetics and hair care companies should be all over this.  Imagine the possibilities of visualizing yourself looking absolutely stunning – if only virtually – with perfect makeup and hair.  Who wouldn’t want to rush right out and capture the look for real?  What if a storefront could virtually put the passer-by in its products, and once the customer is inside the store, point them to the products that were featured?  Take it a step further and send them a special offer the next week to come back and buy the hat that goes perfectly with the rest of the outfit.  It all sounds a little bit “Minority Report-esque”, but it’s closer to becoming true every day.  The power of the interconnected world is endless for the marketer.

And then there’s Belty… it’s definitely garnered a lot of news (and snarky comments too!).  Belty is a smart belt that slims or expands based upon your waist size at that very moment – whether you’re sitting, standing, or just had a too-large meal.  I don’t see Belty taking off, but you never know!  If it does, however, can’t you just see Belty sending a message to your Weight Watchers app about needing to get back on your diet?  Or better yet, pointing you to the Half Yearly Sale at Nordstrom because you’re getting too skinny for your pants?

The “Internet of Things” is Becoming Reality… Is Your Marketing Team Ready?

The Internet of Things is already changing the way consumers live, and it’s beginning to change the way marketers market.  It is critical that marketers think about how they can leverage the new devices and the data they provide.  Connecting the dots between devices can become a marketer’s best friend (if they’re ready), or worst enemy (if they’re not).

Are you ready?  Ask yourself these 6 questions:

  1. Are your existing business applications connected to one another?   Do your marketing systems “talk” to your finance systems and your sales systems and your customer support systems?
  2. Do you have first-class data quality and validation technology and practices in place?  Real-time, automated processes will only amplify data quality problems.
  3. Can you connect easily to any new data source as it becomes available, no matter where it lives and no matter what format it is in?  The only constant in this new world is the speed of change, so if you’re not building processes and leveraging technologies that can keep up, you’re already missing the boat!
  4. Are you building real-time capabilities into your processes and technologies?  Your systems are going to have to handle real-time sensor data and make real-time decisions based on the data they provide.
  5. Are your marketing analytics capabilities leading the pack or just getting out of the gate?  Are they harnessing all of the rich data available within your organization today?  Are you ready to analyze all of the new data sources to determine trends and segment for maximum effect?
  6. Are you talking to your counterparts in IT, logistics, finance, etc. about the business processes and technologies you are going to need to harness the data of the interconnected world of today, and of the near future?  If not, don’t wait!  Begin that conversation ASAP!

Informatica is ready to help you embark on this new and exciting data journey.  For some additional perspectives from Informatica on the technologies announced at CES2015, I encourage you to read some of my colleagues’ recent blog posts:

Jumping on the Internet of Things (IoT) Band Wagon?

CES, Digital Strategy and Architecture: Are You Ready?

Posted in Business Impact / Benefits, CMO, Real-Time

Analytics Stories: A Case Study from Quintiles

As I have shared in other posts in this series, businesses are using analytics to improve their internal and external facing business processes and to strengthen their “right to win” within the markets in which they operate. For pharmaceutical businesses, strengthening the right to win begins and ends with the drug product development lifecycle. I remember, for example, talking several years ago to the CFO of a major pharmaceutical company and having him tell me that the most important financial metrics for him had to do with reducing the time to market for a new drug and maximizing the period of patent protection. Clearly, the faster a pharmaceutical company gets a product to market, the faster it can begin earning a return on its investment.

Fragmented data challenged analytical efforts

At Quintiles, what the business needed was a system with the ability to optimize the design, execution, quality, and management of clinical trials. Management’s goal was to dramatically shorten the time to complete each trial, including quickly identifying when a trial should be terminated. At the same time, management wanted to continuously comply with regulatory scrutiny from the Food and Drug Administration and use it to proactively monitor and manage notable trial events.

The problem was that Quintiles’ data was fragmented across multiple systems, and this delayed the ability to make business decisions. Like many organizations, Quintiles had data located in multiple incompatible legacy systems. This meant extensive manual data manipulation before data could become useful. As well, incompatible legacy systems impeded data integration and normalization, and prohibited a holistic view across all sources. Making matters worse, management felt that it lacked the ability to take corrective actions in a timely manner.

Infosario launched to manage Quintiles analytical challenges

To address these challenges, Quintiles leadership launched the Infosario Clinical Data Management Platform to power its pharmaceutical product development process. Infosario breaks down the silos of information that had limited combining massive quantities of scientific and operational data collected during clinical development with tens of millions of real-world patient records and population data. This step empowered researchers and drug developers to unlock a holistic view of data. This improved decision-making and ultimately increased the probability of success at every step in a product’s lifecycle. Quintiles Chief Information Officer Richard Thomas says, “The drug development process is predicated upon the availability of high quality data with which to collaborate and make informed decisions during the evolution of a product or treatment”.

What Quintiles has succeeded in doing with Infosario is the integration of data and processes associated with a drug’s lifecycle. This includes creating a data engine to collect, clean, and prepare data for analysis. The data is then combined with clinical research data and information from other sources to provide a set of predictive analytics. This of course is aimed at impacting business outcomes.

The Infosario solution consists of several core elements

At its core, Infosario provides the data integration and data quality capabilities for extracting and organizing clinical and operational data. The approach combines and harmonizes data from multiple heterogeneous sources into what is called the Infosario Data Factory repository. The end goal is to accelerate reporting. Infosario leverages data federation/virtualization technologies to acquire information from disparate sources in a timely manner without affecting the underlying foundational enterprise data warehouse. As well, it implements rule-based, real-time monitoring and alerting to enable the business to tweak and enhance business processes as needed. A “monitoring and alerting layer” sits on top of the data, with the facility to rapidly provide intelligent alerts to appropriate stakeholders regarding trial-related issues and milestone events. Here are some more specifics on the components of the Infosario solution:

• Data Mastering provides the capability to link multi-domains of data. This enables enterprise information assets to be actively managed, with an integrated view of the hierarchies and relationships.

• Data Management provides the high performance, scalable data integration needed to support enterprise data warehouses and critical operational data stores.

• Data Services provides the ability to combine data from multiple heterogeneous data sources into a single virtualized view. This allows Infosario to utilize data services to accelerate delivery of needed information.

• Complex Event Processing manages the critical task of monitoring enterprise data quality events and delivering alerts to key stakeholders to take necessary action.

Parting Thoughts

According to Richard Thomas, “the drug development process rests on high quality data being used to make informed decisions during the evolution of a product or treatment. Quintiles’ Infosario clinical data management platform gives researchers and drug developers the knowledge needed to improve decision-making and ultimately increase the probability of success at every step in a product’s lifecycle.” It enables enhanced data accuracy, timeliness, and completeness. On the business side, it has enabled Quintiles to establish industry-leading information and insight. And this in turn has enabled faster, more informed decisions, and the ability to take action based on insights. This importantly has led to a faster time to market and a lengthening of the period of patent protection.

Related links

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer

Posted in CIO, Data Governance, Data Quality

Business Asking IT to Install MDM: Will IDMP make it happen?

Will IDMP Increase MDM Adoption?

MDM has for years been a technology struggling for acceptance – not for any technical reason, or in fact any sound business reason.  Quite simply, in many cases business people cannot attribute value delivery directly to MDM, so MDM projects get rated as ‘low priority’.  Although the tide is changing, many business people still need help in drawing a direct correlation between Master Data Management, as a concept and a tool, and measurable business value.  In my experience, having business people actively ask for an MDM project is a rare occurrence.  This should change as the value of MDM becomes clearer; it is certainly gaining acceptance in principle that MDM will deliver value.  Perhaps this change is not too far off – the introduction of the Identification of Medicinal Products (IDMP) regulation in Europe may be a turning point.

At the DIA conference in Berlin this month, Frits Stulp of Mesa Arch Consulting suggested that IDMP could get the business asking for MDM.  After looking at the requirements for IDMP compliance for approximately a year, his conclusion from a business point of view is that MDM has a key role to play in IDMP compliance.  A recent press release by Andrew Marr, an IDMP and XEVMPD expert and specialist consultant, also shows support for MDM being ‘an advantageous thing to do’ for IDMP compliance.  A previous blog outlined my thoughts on why MDM can turn regulatory compliance into an opportunity instead of a cost.  It seems that others are now seeing this opportunity too.

So why will IDMP lead the business (primarily regulatory affairs) to conclude that they need MDM?  At its heart, IDMP is a pharmacovigilance initiative whose goal is to uniquely identify all medicines globally and to provide rapid access to the details of each medicine’s attributes.  Implemented in its ideal state, IDMP will deliver a single, accurate and trusted version of a medicinal product that can be used for multiple analytical and procedural purposes.  This is exactly what MDM is designed to do.

Here is a summary of the key reasons why an MDM-based approach to IDMP is such a good fit.

1.  IDMP is a data consolidation effort; MDM enables data discovery & consolidation

  • IDMP will probably require populating between 150 and 300 attributes per medicine.
  • These attributes will be held in 10 to 13 systems per product.
  • MDM (especially with close coupling to Data Integration) can readily discover and collect this data.

2.  IDMP requires cross-referencing; MDM has cross-referencing and cleansing as key process steps.

  • Consolidating data from multiple systems normally means dealing with multiple identifiers per product.
  • Different entities must be linked to each other to build relationships within the IDMP model.
  • MDM allows for complex models catering for multiple identifiers and relationships between entities.
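To make point 2 concrete, here is a minimal, hypothetical sketch of the cross-referencing idea: each source system’s local product identifier is linked to a single master identifier.  The system names and IDs below are invented for illustration, not taken from any real implementation.

```python
# Invented systems and identifiers; the point is the shape of the xref table.
source_records = [
    {"system": "ERP",        "local_id": "MAT-00451"},
    {"system": "CTMS",       "local_id": "CMP-9921"},
    {"system": "RegAffairs", "local_id": "RA-778"},
]

def build_xref(records, master_id):
    """Map every (system, local_id) pair to one golden master identifier."""
    return {(r["system"], r["local_id"]): master_id for r in records}

xref = build_xref(source_records, master_id="MDM-000123")
print(xref[("CTMS", "CMP-9921")])  # -> MDM-000123
```

Once the cross-reference exists, any system’s local key resolves to the same master record, which is what makes linking entities across the IDMP model possible.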

3.  IDMP submissions must ensure the correct value of an attribute is submitted; MDM has strong capabilities to resolve different attribute values.

  • Many attributes will exist in more than one of the 10 to 13 source systems.
  • Without strong data governance, these values can be (and probably will be) different.
  • MDM can set rules for determining the ‘golden source’ for each attribute, and then track the history of the values used for submission.
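A toy sketch of the ‘golden source’ idea in point 3, assuming invented system names and a made-up precedence order: for each attribute, the value from the most-trusted source that supplies it survives into the golden record.

```python
# Conflicting attribute values from several (invented) source systems.
values = {
    "strength":  {"ERP": "50 mg", "RegAffairs": "50 mg", "CTMS": "50mg"},
    "dose_form": {"ERP": "tablet", "RegAffairs": "film-coated tablet"},
}

# Per-attribute source precedence: an illustrative governance decision.
precedence = {
    "strength":  ["ERP", "RegAffairs", "CTMS"],
    "dose_form": ["RegAffairs", "ERP"],
}

def golden_record(values, precedence):
    """For each attribute, keep the value from the first trusted source."""
    golden = {}
    for attr, by_source in values.items():
        for source in precedence[attr]:
            if source in by_source:  # first trusted source wins
                golden[attr] = by_source[source]
                break
    return golden

print(golden_record(values, precedence))
# -> {'strength': '50 mg', 'dose_form': 'film-coated tablet'}
```

A real MDM tool adds much more (match/merge, stewardship, history), but the survivorship rule itself is this simple at its core.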

4.  IDMP is a translation effort; MDM is designed to translate

  • Submissions will need to be made within a defined vocabulary or set of reference data.
  • Different regulators may opt for different vocabularies, in addition to the internal set of reference data.
  • MDM can hold multiple values/vocabularies for entities, depending on context.
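Point 4 can be sketched as a simple context-dependent lookup: the same master value rendered in whichever vocabulary the submission context demands.  The vocabulary codes below are purely illustrative, not real EMA or FDA terms.

```python
# Invented vocabulary codes; real submissions would use official term lists.
vocabularies = {
    "internal": {"dose_form:tablet": "TAB"},
    "ema":      {"dose_form:tablet": "10000001"},
    "fda":      {"dose_form:tablet": "C0000001"},
}

def translate(attribute, value, context):
    """Render a master value in the vocabulary the given context requires."""
    return vocabularies[context][f"{attribute}:{value}"]

print(translate("dose_form", "tablet", "ema"))  # -> 10000001
```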

5.  IDMP is a large co-ordination effort; MDM enables governance and is generally associated with higher data consistency and quality throughout an organisation.

  • The IDMP scope is broad, so attributes required by IDMP may also be required for compliance with other regulations.
  • Accurate compliance requires tracking and distributing attribute values; the values submitted for IDMP, for other regulations, and in support of internal business should be the same.
  • MDM is designed not only to collect and cleanse data; it is equally comfortable dispersing data and co-ordinating values across systems.

Once business users assess the data management requirements and consider the breadth of the IDMP scope, it is no surprise that some of them could be asking for an MDM solution.  Even if they do not use the acronym ‘MDM’, they may in effect be asking for MDM by capability rather than by name.

Given the good technical fit of an MDM approach to IDMP compliance, I would like to put forward three arguments for why the approach makes sense.  There may be others, but these are the ones I find most compelling:

1.  A better chance of meeting tight submission timelines

There are slightly over 18 months left before the EMA requires IDMP compliance.  Waiting for final guidance will not leave enough time.  With MDM you have a tool to begin the most time-consuming tasks now: data discovery, collection and consolidation.  The required XEVMPD data and the draft guidance can serve as a guide for where to focus your efforts.

2.  Reduced risk of non-compliance

With fines in Europe of up to 5% of revenue at stake, risking non-compliance could be expensive.  Not only will MDM increase your chance of compliance on July 1, 2016, it will also give you a tool to manage your data for ongoing compliance, meeting deadlines for delivering new data and data changes.

3.  Your company will have a ready source of clean, multi-purpose product data

Unlike some Regulatory Information Management tools, MDM is not a single-purpose tool.  It is specifically designed to provide consolidated, high-quality master data to multiple systems and business processes.  This data source could deliver high-quality data to multiple other initiatives, in particular compliance with other regulations and projects addressing topics such as Traceability, Health Economics & Outcomes, Continuous Process Verification, and Inventory Reduction.

So back to the original question: will the introduction of the IDMP regulation in Europe result in the business asking IT to implement MDM?  Perhaps they will, but not by name.  It is still possible that they won’t.  However, if you have been struggling to get buy-in for MDM within your organisation and you need to comply with IDMP, you may now be able to find some more allies (potentially with an approved budget) to support you in your MDM efforts.

Posted in Data Governance, Healthcare, Master Data Management | Tagged , | Leave a comment

Swim Goggles, Great Data, and Total Customer Value

Total Customer Value on CMO.com

The other day I ran across an article on CMO.com from a few months ago entitled “Total Customer Value Trumps Simple Loyalty in Digital World”.  It’s a great article, so I encourage you to go take a look, but the basic premise is that loyalty does not necessarily equal value in today’s complicated consumer environment.

Customers can be loyal for a variety of reasons, as the author Samuel Greengard points out.  One may be that they feel stuck with a certain product or service because they believe no better alternative is available.  I can relate to this after a recent series of less-than-pleasant experiences with my bank.  I’d like to change banks, but frankly they’re all about the same and it just isn’t worth the hassle.  So I’m loyal to my unnamed bank, but definitely not an advocate.

The proverbial big fish in today’s digital world, according to the author, are customers who truly identify with the brand and who will buy the company’s products eagerly, even when viable alternatives exist.  These are the customers who sing the brand’s praises to their friends and family online and in person.  These are the customers who write reviews on Amazon and give your product 5 stars.  These are the customers who will pay markedly more just because it sports your logo.  And these are the customers whose voices hold weight with their peers because they are knowledgeable and passionate about the product.  I’m sure we all have a brand or two that we’re truly passionate about.

Total Customer Value in the Pool


My 13-year-old son is a competitive swimmer and will only use Speedo goggles – ever, hands down, no matter what.  He wears Speedo t-shirts to show his support.  He talks about how great his goggles are and encourages his teammates to try on his personal pair to show them how much better they are.  He is a leader on his team, so when newbies come in and see him wearing these goggles, singing their praises, and finishing first, his advocacy holds weight.  I’m sure we have owned well over 30 pairs of Speedo goggles over the past 4 years at $20 a pop – add in the t-shirts and of course the swimsuits, and we probably have a historical value of over $1,000 and a potential lifetime value of tens of thousands (ridiculous, I know!).  But if you add in the influence he’s had on others, his value is tremendously more – at least 5X.

This is why data is king!

I couldn’t agree more that total customer value, or even total partner or total supplier value, is absolutely the right approach, and is a much better indicator of value.  But in this digital world of incredible data volumes and disparate data sources & systems, how can you really know what a customer’s value is?

The marketing applications you probably already use are great – there are so many good automation, web analytics, and CRM systems around.  But what fuels these applications?  Your data.

Most marketers think that data is the stuff that applications generate or consume. As if all data is pretty much the same.  In truth, data is a raw ingredient.  Data-driven marketers don’t just manage their marketing applications, they actively manage their data as a strategic asset.


How are you using data to analyze and identify your influential customers?  Can you tell that a customer bought their fourth product from your website, and then promptly tweeted about the great deal they got on it?  Even more interesting, can you tell that five of their friends followed the link, one bought the same item, one looked at it but ended up buying a similar item, and one put it in their cart but didn’t buy it because it was cheaper on another website?  And more importantly, how can you keep this person engaged so they continue their brand preference – so somebody else with a similar brand and product doesn’t swoop in and do it first?  And the ultimate question… how can you scale this so that you’re doing it automatically within your marketing processes, with confidence, every time?

All marketers need to understand their data – what exists in your information ecosystem, whether internal or external.  Can you even get to the systems that hold the richest data?  Do you leverage your internal customer support/call center records?  Is your billing/financial system utilized as a key source of customer data?  And the elephant in the room… can you incorporate the invaluable social media data that is ripe for marketers to leverage as an automated component of their marketing campaigns?
This is why marketers need to care about data integration.

Even if you do have access to all of the rich customer data that exists within and outside of your firewalls, how can you make sense of it?  How can you pull it together to truly understand your customers – what they really buy, who they associate with, and who they influence?  If you don’t, then you’re leaving dollars, and more importantly, potential advocacy and true customer value, on the table.
This is why marketers need to care about achieving a total view of their customers and prospects… 

And none of this matters if the data you are leveraging is simply incorrect or incomplete.  How often have you seen an analysis on an important topic, had that gut feeling that something must be wrong, and questioned the data used to pull the report?  Obvious data quality errors are really only the tip of the iceberg.  Most of the data quality issues marketers face are either not glaring enough to catch and correct on the spot, or are baked into an automated process that gives nobody the opportunity to catch them.  Making decisions based on flawed data inevitably leads to poor decisions.
This is why marketers need to care about data quality.

So, as the article points out, don’t just look at loyalty; look at total customer value.  But realize that this is easier said than done without focusing on your data and ensuring you have all of the right data, in the right place, in the right format, right away.

Now…  Brand advocates, step up!  Share with us your favorite story.  What brands do you love?  Why?  What makes you so loyal?

Posted in Business Impact / Benefits, CMO, Data Integration, Data Quality, Enterprise Data Management, Master Data Management | Tagged , , , , , | Leave a comment

Just In Time For the Holidays: How The FTC Defines Reasonable Security


Recently the International Association of Privacy Professionals (IAPP, www.privacyassociation.org) published a white paper analyzing the Federal Trade Commission’s (FTC) data security/breach enforcement actions.  These actions cover organizations from the finance, retail, technology and healthcare industries in the United States.

From this analysis, in “What’s Reasonable Security? A Moving Target,” the IAPP extrapolated best practices from the FTC’s enforcement actions.

While the white paper and article indicate that “reasonable security” is a moving target, they do provide recommendations that will help organizations assess and baseline their current data security efforts.  Of particular interest is the focus on data-centric security, from overall enterprise assessment to careful control of access by employees and third parties.  Here are some of the recommendations derived from the FTC’s enforcement actions that call for data-centric security:

  • Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
  • Limited access policies curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
  • Limit employee access to (and copying of) personal information, based on the employee’s role.
  • Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not be practicably readable or reconstructable.
  • Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.
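One of these controls, restricting third-party access by source IP address, can be sketched in a few lines.  The partner name and network range below are invented (the range is a documentation-only block); real deployments would enforce this at the network or gateway layer rather than in application code.

```python
import ipaddress

# Invented partner and registered network (documentation-only IP range).
partner_networks = {
    "claims_processor": ipaddress.ip_network("203.0.113.0/24"),
}

def allowed(partner, source_ip):
    """Permit access only from the partner's registered network."""
    net = partner_networks.get(partner)
    return net is not None and ipaddress.ip_address(source_ip) in net

print(allowed("claims_processor", "203.0.113.17"))   # -> True
print(allowed("claims_processor", "198.51.100.9"))   # -> False
```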

How does data-centric security help organizations achieve this inferred baseline?

  1. Data Security Intelligence (Secure@Source, coming Q2 2015) provides the ability to “…identify reasonably foreseeable risks.”
  2. Data Masking (Dynamic and Persistent Data Masking) provides the controls to limit access to information by employees and third parties.
  3. Data Archiving provides the means for the secure disposal of information.
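As a generic illustration of the masking control in item 2 (a sketch of the general technique, not any vendor’s implementation), persistent masking irreversibly replaces sensitive field values while keeping records usable.  The field names and rules below are invented.

```python
import hashlib

def mask_record(record):
    """Persistently mask sensitive fields; the originals are not recoverable."""
    masked = dict(record)
    # Keep only the last four digits of the (invented) national ID field.
    masked["national_id"] = "XXX-XX-" + record["national_id"][-4:]
    # Deterministic pseudonym: the same email always masks to the same token,
    # so joins across masked tables still line up.
    digest = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    masked["email"] = digest + "@masked.example"
    return masked

print(mask_record({"name": "A. Patient",
                   "national_id": "123-45-6789",
                   "email": "a.patient@example.com"}))
```

Dynamic masking applies the same kind of rule on the fly at query time instead of rewriting stored data.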

Other data-centric security controls include encryption for data at rest and in motion, and tokenization for securing payment card data.  All of these controls help organizations secure their data, whether a threat originates internally or externally.  And based on this year’s never-ending news of data breaches and attacks, it is a matter of when, not if, your organization will be significantly breached.

For 2015, “reasonable security” will require ongoing analysis of sensitive data and the deployment of reciprocal data-centric security controls to ensure that organizations keep pace with this “moving target.”

Posted in Data Integration, Data masking, Data Privacy, Data Security | Tagged , , , | Leave a comment

Remembering Big Data Gravity – PART 2

I ended my previous blog wondering whether awareness of Data Gravity should change our behavior. While Data Gravity adds Value to Big Data, I find that how to apply that Value is under-explained.

The exponential growth of data has naturally led us to want to categorize it into facts, relationships, entities, and so on. This sounds elementary, and while it happens almost instantly in our subconscious minds as humans, it takes significant effort to teach it to a machine.

A friend tweeted this to me last week: “I paddled out today, now I look like a lobster.” Since that tweet, Twitter has inundated my friend and me with promotions from Red Lobster, because the machine deconstructed the tweet as: paddled <PROPULSION>, today <TIME>, like <PREFERENCE> and lobster <CRUSTACEANS>. Putting these together, the machine decided that the keyword was lobster. You and I both know that my friend was not talking about lobsters.

You may think this is just a funny edge case; you can confuse any computer system if you try hard enough, right? Unfortunately, it isn’t an edge case. The 140-character limit has not just changed people’s tweets, it has changed how people talk on the web. More and more information is communicated in smaller and smaller amounts of language, and this trend is only going to continue.

When will the machine understand that “I look like a lobster” means I am sunburned?

I believe the reason there are not hundreds of companies exploiting machine-learning techniques to generate a truly semantic web is the lack of weighted edges in publicly available ontologies. Keep reading; it will all make sense in about five sentences. Lobster and Sunscreen are 7 hops away from each other in dbPedia – far too many to draw any correlation between the two. For that matter, any article in Wikipedia is connected to any other article within about 14 hops, and that’s the extreme. Completely unrelated concepts are often just a few hops from each other.

But by analyzing massive amounts of written and spoken English text from articles, books, social media, and television, it is possible for a machine to automatically draw a correlation and create a weighted edge between the Lobster and Sunscreen nodes that effectively short-circuits the 7 hops between them. Many organizations are dumping massive amounts of facts, without weights, into our repositories of total human knowledge, naïvely attempting to categorize everything without realizing that repositories of human knowledge need to mimic how humans use knowledge.
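A toy sketch of that correlation-drawing idea: count how often two concepts co-occur in the same sentence and use the count as the weight of a direct edge. The tiny corpus below is invented, standing in for the “massive amounts of text” the paragraph describes.

```python
from collections import Counter
from itertools import combinations

# Invented mini-corpus standing in for massive amounts of real text.
corpus = [
    "forgot sunscreen at the beach now red as a lobster",
    "lobster roll for lunch",
    "sunscreen every two hours or you burn like a lobster",
]

cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for pair in combinations(sorted(words), 2):
        cooccur[pair] += 1

# The co-occurrence count becomes the weight of a direct lobster-sunscreen
# edge, short-circuiting the many hops between them in a hand-built ontology.
print(cooccur[("lobster", "sunscreen")])  # -> 2
```

Real systems use smarter association measures (PMI, embeddings) over far larger corpora, but the principle is the same: usage statistics supply the weights that curated ontologies lack.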

For example – if you hear the name Babe Ruth, what is the first thing that pops to mind? Roman Catholics from Maryland born in the 1800s or Famous Baseball Player?

If you look in Wikipedia today, he is listed under 28 categories, each of them with the same level of attachment. 1895 births | 1948 deaths | American League All-Stars | American League batting champions | American League ERA champions | American League home run champions | American League RBI champions | American people of German descent | American Roman Catholics | Babe Ruth | Baltimore Orioles (IL) players | Baseball players from Maryland | Boston Braves players | Boston Red Sox players | Brooklyn Dodgers coaches | Burials at Gate of Heaven Cemetery | Cancer deaths in New York | Deaths from esophageal cancer | Major League Baseball first base coaches | Major League Baseball left fielders | Major League Baseball pitchers | Major League Baseball players with retired numbers | Major League Baseball right fielders | National Baseball Hall of Fame inductees | New York Yankees players | Providence Grays (minor league) players | Sportspeople from Baltimore | Maryland | Vaudeville performers.

Now imagine how confused a machine would get when the distance of unweighted edges between nodes is used as a scoring mechanism for relevancy.

If I were to design an algorithm that uses weighted edges (on a scale of 1-5, with 5 being the highest), the same search would yield a much more obvious result.

1895 births [2]| 1948 deaths [2]| American League All-Stars [4]| American League batting champions [4]| American League ERA champions [4]| American League home run champions [4]| American League RBI champions [4]| American people of German descent [2]| American Roman Catholics [2]| Babe Ruth [5]| Baltimore Orioles (IL) players [4]| Baseball players from Maryland [3]| Boston Braves players [4]| Boston Red Sox players [5]| Brooklyn Dodgers coaches [4]| Burials at Gate of Heaven Cemetery [2]| Cancer deaths in New York [2]| Deaths from esophageal cancer [1]| Major League Baseball first base coaches [4]| Major League Baseball left fielders [3]| Major League Baseball pitchers [5]| Major League Baseball players with retired numbers [4]| Major League Baseball right fielders [3]| National Baseball Hall of Fame inductees [5]| New York Yankees players [5]| Providence Grays (minor league) players [3]| Sportspeople from Baltimore [1]| Maryland [1]| Vaudeville performers [1].
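A minimal sketch of such an algorithm: with weighted edges, ranking the connected categories by weight immediately surfaces the associations a human would pick first. The weights below are the illustrative 1-5 values from the list above, applied to a small subset of the categories.

```python
# Subset of the Babe Ruth categories with the assumed 1-5 edge weights.
weighted_edges = {
    "New York Yankees players": 5,
    "National Baseball Hall of Fame inductees": 5,
    "Baseball players from Maryland": 3,
    "American Roman Catholics": 2,
    "Vaudeville performers": 1,
}

def top_categories(edges, n=3):
    """Return the n most strongly attached categories, highest weight first."""
    return sorted(edges, key=edges.get, reverse=True)[:n]

print(top_categories(weighted_edges))
# -> ['New York Yankees players', 'National Baseball Hall of Fame inductees',
#     'Baseball players from Maryland']
```

With unweighted edges every category ties, and “Roman Catholics from Maryland” is as likely an answer as “famous baseball player”; the weights are what break the tie the human way.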

Now the machine starts to think more like a human. The above example forces us to ask ourselves about the relevancy, a.k.a. the Value, of the response. This is where I think Data Gravity becomes relevant.

You can contact me on twitter @bigdatabeat with your comments.

Posted in Architects, Big Data, Cloud, Cloud Data Management, Data Aggregation, Data Archiving, Data Governance, General, Hadoop | Tagged , , , , , , | Leave a comment

How Citrix is Using Great Data to Build Fortune Teller-Like Marketing

Citrix: You may not realize you know them, but chances are pretty good that you do.  And chances are also good that we marketers can learn something about achieving fortune teller-like marketing from them!

Citrix is the company that brought you GoToMeeting and a whole host of other mobile workspace solutions that provide virtualization, networking and cloud services.  Their goal is to give their 100 million users in 260,000 organizations across the globe “new ways to work better with seamless and secure access to the apps, files and services they need on any device, wherever they go.”

Citrix is a company that has been imagining and innovating for over 25 years, and over that time has seen a complete transformation in their market – virtual solutions and cloud services didn’t even exist when they were founded; now they’re the backbone of their business.  Their corporate video proudly states that the only constant in this world is change, and that they strive to embrace the “yet to be discovered.”

Having worked with them quite a bit over the past few years, we have seen first-hand how Citrix has demonstrated their ability to embrace change.

The Problem:

Back in 2011, it became clear to Citrix that they had a data problem and that they would have to make some changes to stay ahead in a hyper-competitive market.  Sales & Marketing had identified data as their #1 concern: their data was incomplete, inaccurate, and duplicated in their CRM system.  And with so many different applications in the organization, it was difficult to know which application or data source had the most accurate and up-to-date information.  They realized they needed a single source of truth – one system of reference where all of their global data management practices could be centralized and consistent.

The Solution:

The marketing team realized that they needed to take control of the solution to their data concerns, as their success truly depended upon it.  They brought together their IT department and their systems integration partner, Cognizant, to determine a course of action.  Together they forged an overall data governance strategy which would empower the marketing team to manage data centrally – to be responsible for their own success.

As a key element of that data governance / management strategy, they determined that they needed a Master Data Management (MDM) solution to serve as their Single Trusted Source of Customer & Prospect Data.  They did a great deal of research into industry best practices and technology solutions, and decided to select Informatica as their MDM partner. As you can see, Citrix’s environment is not unlike most marketing organizations.  The difference is that they are now able to capture and distribute better customer and prospect data to and from these systems to achieve even better results.  They are leveraging internal data sources and systems like CRM (Salesforce) and marketing automation (Marketo).  Their systems live all over the enterprise, both on premises and in the cloud.  And they leverage analytical tools to analyze and dashboard their results.

The Results:

Citrix strategized and implemented their Single Trusted Source of Customer & Prospect solution in a phased approach throughout 2013 and 2014, and we believe that what they’ve been able to accomplish in that short period of time has been nothing short of phenomenal.  Here are the highlights:

Citrix Achieved Tremendous Results

  • Used Informatica MDM to provide clean, consistent and connected channel partner, customer and prospect data and the relationships between them for use in operational applications (SFDC, BI Reporting and Predictive Analytics)
  • Recognized 20% increase in lead-to-opportunity conversion rates
  • Realized 20% increase in marketing team’s operational efficiency
  • Achieved 50% increase in quality of data at the point of entry, and a 50% reduction in the rate of junk and duplicate data for prospects, existing accounts and contacts
  • Delivered a better channel partner and customer experience by renewing all of a customer’s user licenses across product lines at one time and making it easy to identify whitespace opportunities to up-sell more user licenses

That is huge!  Can you imagine the impact on your own marketing organization of a 20% increase in lead-to-opportunity conversion?  Can you imagine the impact of spending 20% less time questioning and manually massaging data to get the information you need?  That’s game changing!

Because Citrix now has great data and great resulting insight, they have been able to take the next step and embark on new fortune teller-like marketing strategies.   As Citrix’s Dagmar Garcia discussed during a recent webinar, “We monitor implicit and explicit behavior of transactional leads and accounts, and then we leverage these insights and previous behaviors to offer net new offers and campaigns to our customers and prospects…  And it’s all based on the quality of data we have within our database.”

I encourage you to take a few minutes to listen to Dagmar discuss Citrix’s project on a recent webinar.  In the webinar, she dives deeper into the project, its scope and timeline, and what she means by “fortune telling abilities”.  Also, take a look at the customer story section of the Informatica.com website for the PDF case study.  And, if you’re in the mood to learn more, you can download a complimentary copy of the 2014 Gartner Magic Quadrant for MDM of Customer Data Solutions.

Hats off to you, Citrix, and we look forward to working with you to continue to change the game even more in the coming months and years!

Posted in CMO, Customers, Master Data Management | Tagged , , , , , , , , | Leave a comment