Tag Archives: data
80% of companies surveyed said that they offer superior customer service, but only 8% of their customers agreed with them. (Source: Bain & Company)
With numbers like that, there is plenty of room to improve. But improve what?
Traditionally, retailers have measured themselves against year-over-year sales growth for like stores, increased margins, and lower operating costs. But retailing has changed: customers can interact and transact with you across multiple touch points along their path to purchase and beyond. Poor performance at any one of these interaction points could lose you a customer and damage your brand.
A better measure is to calculate the customer experience across the omni-channel landscape. This will provide better insight into how you are attracting and retaining customers, and how well you are serving them. However, many retailers lack the technology and processes to deliver on a plan to improve the omni-channel customer experience.
Once you have decided to do something, what are you going to measure? Time spent on the website versus sales? Speed of problem resolution in the contact center versus the number of repeat transactions from a customer? Number of touch points before purchase? And what about softer measures, like how well your staff interact with customers in-store or on social channels? How many “Pins” do you have, and how do you assign value to them?
Organizations need to account for churn, attrition, loyalty, and lifetime value to be able to evaluate their performance from a holistic view of their customer, not just within the confines of their own operational silo.
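These four measures reduce to simple arithmetic once customer counts and revenue are in hand. The sketch below is purely illustrative – the function names, input figures, and the simple margin-based lifetime value formula are assumptions, not a prescribed calculation:

```python
# Illustrative customer-metric arithmetic; all input numbers are made up.

def churn_rate(customers_start, customers_lost):
    """Fraction of customers lost over a period."""
    return customers_lost / customers_start

def lifetime_value(avg_revenue_per_period, gross_margin, churn):
    """A simple LTV model: margin-adjusted revenue divided by churn rate."""
    return (avg_revenue_per_period * gross_margin) / churn

churn = churn_rate(customers_start=10_000, customers_lost=500)  # 5% churn
retention = 1 - churn                                           # 95% retained
ltv = lifetime_value(avg_revenue_per_period=200, gross_margin=0.3, churn=churn)
```

The point is less the formulas themselves than that each one needs inputs from a different operational silo – sales, service, and finance – which is exactly the integration problem the omni-channel measure exposes.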
In an upcoming webinar, Arkady Kleyner of Intricity will break apart the key components of the omni-channel customer experience calculation. Additionally, Arkady will identify the upstream components that keep this measure accurate and current.
Attend this webinar to learn:
- The foundational calculations of Omni-Channel Customer Experience
- Common customizations to fit different scenarios
- Upstream components to keep the calculation current and accurate
- Register here to receive a calendar invitation with the webinar details.
- Join us for a 1-hour webinar and Q&A session. The event will occur March 19th at 2:00 PM EST.
Last week was Informatica’s first ever Data Mania event, held at the Contemporary Jewish Museum in San Francisco. We had an A-list lineup of speakers from leading cloud and data companies, such as Salesforce, Amazon Web Services (AWS), Tableau, Dun & Bradstreet, Marketo, AppDynamics, Birst, Adobe, and Qlik. The event and speakers covered a range of topics all related to data, including Big Data processing in the cloud, data-driven customer success, and cloud analytics.
While these companies are giants today in the world of cloud and have created their own unique ecosystems, we also wanted to take a peek at and hear from the leaders of tomorrow. Before startups can become market leaders in their own realm, they face the challenge of ramping up a stellar roster of customers so that they can get to subsequent rounds of venture funding. But what gets in their way are the numerous data integration challenges of onboarding customer data onto their software platform. When these challenges remain unaddressed, R&D resources are spent on professional services instead of building value-differentiating IP. Bugs also continue to mount, and technical debt increases.
Enter the Informatica Cloud Connector SDK. Built entirely in Java and able to browse through any cloud application’s API, the Cloud Connector SDK parses the metadata behind each data object and presents it in the context of what a business user should see. We had four startups build a native connector to their application in less than two weeks: BigML, Databricks, FollowAnalytics, and ThoughtSpot. Let’s take a look at each one of them.
With predictive analytics becoming a growing imperative, machine-learning algorithms that deliver more accurate predictions are increasingly important. BigML provides an intuitive yet powerful machine-learning platform for actionable and consumable predictive analytics. Watch their demo on how they used Informatica Cloud’s Connector SDK to help them better predict customer churn.
Can’t play the video? Click here, http://youtu.be/lop7m9IH2aw
Databricks was founded out of the UC Berkeley AMPLab by the creators of Apache Spark. Databricks Cloud is a hosted end-to-end data platform powered by Spark. It enables organizations to unlock the value of their data, seamlessly transitioning from data ingest through exploration and production. Watch their demo showcasing how the Informatica Cloud connector for Databricks Cloud was used to analyze lead contact rates in Salesforce, and to perform machine learning on a dataset built using either Scala or Python.
Can’t play the video? Click here, http://youtu.be/607ugvhzVnY
With mobile usage growing by leaps and bounds, the area of customer engagement on a mobile app has become a fertile area for marketers. Marketers are charged with acquiring new customers, increasing customer loyalty and driving new revenue streams. But without the technological infrastructure to back them up, their efforts are in vain. FollowAnalytics is a mobile analytics and marketing automation platform for the enterprise that helps companies better understand audience engagement on their mobile apps. Watch this demo where FollowAnalytics first builds a completely native connector to its mobile analytics platform using the Informatica Cloud Connector SDK and then connects it to Microsoft Dynamics CRM Online using Informatica Cloud’s prebuilt connector for it. Then, see FollowAnalytics go one step further by performing even deeper analytics on their engagement data using Informatica Cloud’s prebuilt connector for Salesforce Wave Analytics Cloud.
Can’t play the video? Click here, http://youtu.be/E568vxZ2LAg
Analytics has taken center stage this year due to the rise in cloud applications, but most of the existing BI tools out there still stick to the old way of doing BI. ThoughtSpot brings a consumer-like simplicity to the world of BI by allowing users to search for the information they’re looking for just as if they were using a search engine like Google. Watch this demo where ThoughtSpot uses Informatica Cloud’s vast library of over 100 native connectors to move data into the ThoughtSpot appliance.
Can’t play the video? Click here, http://youtu.be/6gJD6hRD9h4
Who remembers their first game of Pong? Celebrating more than 40 years of innovation, gaming is no longer limited to monochromatic screens and dedicated, proprietary platforms. The PC gaming industry is expected to exceed $35bn by 2018, and the phone and handheld games market is estimated to reach $34bn in five years, quickly closing the gap. According to EEDAR, 2014 recorded more than 141 million mobile gamers in North America alone, generating $4.6B in revenue for mobile game vendors.
This growth has spawned a growing list of conferences specifically targeting gamers, game developers, the gaming industry and, more recently, gaming analytics! This past weekend in Boston, for example, was PAX East, where people of all ages and walks of life played games on consoles, PCs, handhelds, and good old-fashioned board games. With my own children in attendance, the debate of commercial games versus indie favorites, such as Minecraft, dominates the dinner table.
Online games are where people congregate, collaborate, and generate petabytes of data daily. With the added bonus of geospatial data from smartphones comes the opportunity for even more advanced analytics. Some of the basic metrics that determine whether a game is successful, according to Ninja Metrics, include:
- New Users, Daily Active Users, Retention
- Revenue per user
- Session length and number of sessions per user
Additionally, they provide predictive analytics, customer lifetime value, and cohort analysis. If this is your gig, there’s a conference for that as well – the Gaming Analytics Summit!
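Metrics like these fall out of a raw play-session log with a few aggregations. Here is a minimal sketch – the event schema (user, day, revenue) and the sample values are invented for illustration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical play-session events: (user_id, day, revenue_from_session)
events = [
    ("u1", date(2015, 3, 1), 0.0),
    ("u1", date(2015, 3, 2), 4.99),
    ("u2", date(2015, 3, 1), 0.0),
    ("u3", date(2015, 3, 2), 1.99),
]

# Daily active users: the set of distinct users seen each day
dau = defaultdict(set)
for user, day, _ in events:
    dau[day].add(user)

# Day-1 retention: share of day-d users who come back on day d+1
d1, d2 = date(2015, 3, 1), date(2015, 3, 2)
retention = len(dau[d1] & dau[d2]) / len(dau[d1])

# Revenue per user (ARPU) across the whole log
users = {u for u, _, _ in events}
arpu = sum(r for _, _, r in events) / len(users)
```

Session length and sessions per user follow the same pattern, grouping timestamped events by user instead of by day.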
The focus of the Game Developers Conference, recently held in San Francisco, has shifted over the years from computer games to new gaming platforms that incorporate mobile, smartphone, and online components. Producing a successful game requires the following:
- Needs to be able to connect to a variety of devices and platforms
- Needs to use data to drive decisions and improve user experience
- Needs to ensure privacy laws are adhered to.
Developers are able to quickly access online gaming data and tweak or change their sprites’ attributes dynamically to maximize player experience.
When you look at what is happening in the gaming industry, you can start to see why colleges and universities like my own alma mater, WPI, now offer a computer science degree in Interactive Media and Game Design. The IMGD curriculum includes heavy coursework in data science, game theory, artificial intelligence, and storyboarding. When I asked a WPI IMGD student what they were working on, they described mapping out decision trees that dictate which adversary to pop up based on the player’s history (sounds a lot like what we do in digital marketing…).
As we start to look at the Millennial Generation entering into the workforce, maybe we should look at our own recruiting efforts and consider game designers. They are masters in analytics and creativity with an appreciation for the importance of great data. Combining the magic and the math makes a great gaming experience. Who wouldn’t want that for their customers?
With a total B2C e-commerce turnover of $567.3bn in 2013, Asia-Pacific was the strongest e-commerce region in the world, surpassing Europe ($482.3bn) and North America ($452.4bn). Online sales in Asia-Pacific are expected to have reached $799.2 billion in 2014, according to the latest report from the Ecommerce Foundation.
Revenue: China, followed by Japan and Australia
As a matter of fact, China was the second-largest e-commerce market in the world, only behind the US ($419.0 billion), and for 2014 it is estimated that China even surpassed the US ($537.0 billion vs. $456.0 billion). In terms of B2C e-commerce turnover, Japan ($136.7 billion) ranked second, followed by Australia ($35.7 billion), South Korea ($20.2 billion) and India ($10.7 billion).
On average, Asian-Pacific e-shoppers spent $1,268 online in 2013
Ecommerce Europe’s research reveals that 235.7 million consumers in Asia-Pacific purchased goods and services online in 2013. On average, APAC online consumers each spent $1,268 online in 2013, slightly lower than the global average of $1,304. At $2,167, Australian e-shoppers were the biggest spenders online, followed by the Japanese ($1,808) and the Chinese ($1,087).
Mobile: Japan and Australia lead the pack
In frequency of mobile purchasing, Japan shows the highest adoption, followed by Australia. An interesting fact is that 50% of transactions are done at home, 20% at work, and 10% on the go.
Valentine’s Day is such a strange holiday. It always seems to bring up more questions than answers. And the internet always seems to have a quiz to find out the answer! There’s the “Does he have a crush on you too – 10 simple ways to find out” quiz. There’s the “What special gift should I get her this Valentine’s Day?” quiz. And the ever popular “Why am I still single on Valentine’s Day?” quiz.
Well Marketers, it’s your lucky Valentine’s Day! We have a quiz for you too! It’s about your relationship with data. Where do you stand? Are you ready to take the next step?
Question 1: Do you connect – I mean, really connect – with your data?
□ (A) Not really. We just can’t seem to get it together and really connect.
□ (B) Sometimes. We connect on some levels, but there are big gaps.
□ (C) Most of the time. We usually connect, but we miss out on some things.
□ (D) We are a perfect match! We connect about everything, no matter where, no matter when.
Translation: Data ready marketers have access to the best possible data, no matter what form it is in, no matter what system it is in. They are able to make decisions based on everything the entire organization “knows” about their customer/partner/product – with a complete 360 degree view. And they are also able to connect to and integrate with data outside the bounds of their organization to achieve the sought-after 720 degree view. They can integrate and react to social media comments, trends, and feedback – in real time – and match them with an existing record whenever possible. And they can quickly and easily bring together any third-party data sources they may need.
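Matching a social-media display name to an existing customer record is one concrete piece of that connection. A toy sketch using only the standard library – the CRM names, the inbound handle, and the 0.6 similarity threshold are all illustrative assumptions, and real matching tools use far richer logic:

```python
from difflib import SequenceMatcher

# Hypothetical CRM records and an inbound social-media display name
crm_names = ["Jonathan Smith", "Maria Garcia", "Wei Chen"]

def best_match(name, candidates, threshold=0.6):
    """Return the closest candidate above a similarity threshold, else None."""
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

print(best_match("Jon Smith", crm_names))  # → Jonathan Smith
```

Dedicated matching engines add phonetic encodings, address and email corroboration, and survivorship rules on top of this basic similarity idea.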
Question 2: How good looking & clean is your data?
□ (A) Yikes, not very. But it’s what’s on the inside that counts right?
□ (B) It’s ok. We’ve both let ourselves go a bit.
□ (C) It’s pretty cute. Not supermodel hot, but definitely girl or boy next door cute.
□ (D) My data is HOT! It’s perfect in every way!
Translation: Marketers need data that is reliable and clean. According to a recent Experian study, American companies believe that 25% of their data is inaccurate, and the rest of the world isn’t much more confident. 90% of respondents said they suffer from common data errors, and 78% have problems with the quality of the data they gather from disparate channels. Making marketing decisions based upon inaccurate data leads to poor decisions. And what’s worse, many marketers have no idea how good or bad their data is, so they have no idea what impact it is having on their marketing programs and analysis. The data ready marketer understands this and has a top-tier data quality solution in place to make sure their data is in the best shape possible.
Question 3: Do you feel safe when you’re with your data?
□ (A) No, my data is pretty scary. 911 is on speed dial.
□ (B) I’m not sure actually. I think so?
□ (C) My data is mostly safe, but it’s got a little “bad boy” or “bad girl” streak.
□ (D) I protect my data, and it protects me back. We keep each other safe and secure.
Translation: Marketers need to be able to trust the quality of their data, but they also need to trust the security of their data. Is it protected, or is it susceptible to theft and nefarious attacks like the ones that have been all over the news lately? Nothing keeps a CMO and their PR team up at night like worrying they are going to be the next brand on the cover of a magazine for losing millions of personal customer records. But beyond a high-profile data breach, marketers need to be concerned about data privacy. Are you treating customer data in the way that is expected and demanded? Are you using protected data in your marketing practices that you really shouldn’t be? Are you marketing to people on excluded lists?
Question 4: Is your data adventurous and well-traveled, or is it more of a “home-body”?
□ (A) My data is all over the place and it’s impossible to find.
□ (B) My data is all in one place. I know we’re missing out on fun and exciting options, but it’s just easier this way.
□ (C) My data is in a few places and I keep fairly good tabs on it. We can find each other when we need to, but it takes some effort.
□ (D) My data is everywhere, but I have complete faith that I can get ahold of any source I might need, when and where I need it.
Translation: Marketing data is everywhere. Your marketing data warehouse, your CRM system, your marketing automation system. It’s throughout your organization in finance, customer support, and sales systems. It’s in third-party systems like social media and data aggregators. That means it’s in the cloud, it’s on premise, and everywhere in between. Marketers need to be able to get to and integrate data no matter where it “lives”.
Question 5: Does your data take forever to get ready when it’s time to go do something together?
□ (A) It takes forever to prepare my data for each new outing. It’s definitely not “ready to go”.
□ (B) My data takes its time to get ready, but it’s worth the wait… usually!
□ (C) My data is fairly quick to get ready, but it does take a little time and effort.
□ (D) My data is always ready to go, whenever we need to go somewhere or do something.
Translation: One of the reasons many marketers end up in marketing is because it is fast paced and every day is different. Nothing is the same from day-to-day, so you need to be ready to act at a moment’s notice, and change course on a dime. Data ready marketers have a foundation of great data that they can point at any given problem, at any given time, without a lot of work to prepare it. If it is taking you weeks or even days to pull data together to analyze something new or test out a new hunch, it’s too late – your competitors have already done it!
Question 6: Can you believe the stories your data is telling you?
□ (A) My data is wrong a lot. It stretches the truth a lot, and I cannot rely on it.
□ (B) I really don’t know. I question these stories – dare I say excuses – but haven’t been able to prove it one way or the other.
□ (C) I believe what my data says most of the time. It rarely lets me down.
□ (D) My data is very trustworthy. I believe it implicitly because we’ve earned each other’s trust.
Translation: If your data is dirty, inaccurate, and/or incomplete, it is essentially “lying” to you. And if you cannot get to all of the data sources you need, your data is telling you “white lies”! All of the work you’re putting into analysis and optimization is based on questionable data, and is giving you questionable results. Data ready marketers understand this and ensure their data is clean, safe, and connected at all times.
Question 7: Does your data help you around the house with your daily chores?
□ (A) My data just sits around on the couch watching TV.
□ (B) When I nag my data will help out occasionally.
□ (C) My data is pretty good about helping out. It doesn’t take initiative, but it helps out whenever I ask.
□ (D) My data is amazing. It helps out whenever it can, however it can, even without being asked.
Translation: Your marketing data can do so much. It should enable you to be “customer ready” – helping you to understand everything there is to know about your customers so you can design amazing personalized campaigns that speak directly to them. It should enable you to be “decision ready” – powering your analytics capabilities with great data so you can make great decisions and optimize your processes. But it should also enable you to be “showcase ready” – giving you the proof points to demonstrate marketing’s actual impact on the bottom line.
Now for the fun part… It’s time to rate your data relationship status
If you answered mostly (A): You have a rocky relationship with your data. You may need some data counseling!
If you answered mostly (B): It’s time to decide if you want this data relationship to work. There’s hope, but you’ve got some work to do.
If you answered mostly (C): You and your data are at the beginning of a beautiful love affair. Keep working at it because you’re getting close!
If you answered mostly (D): Congratulations, you have a strong data marriage that is based on clean, safe, and connected data. You are making great business decisions because you are a data ready marketer!
Do You Love Your Data?
No matter what your data relationship status, we’d love to hear from you. Please take our survey about your use of data and technology. The results are coming out soon so don’t miss your chance to be a part. https://www.surveymonkey.com/s/DataMktg
Also, follow me on twitter – The Data Ready Marketer – for some of the latest & greatest news and insights on the world of data ready marketing. And stay tuned because we have several new Data Ready Marketing pieces coming out soon – InfoGraphics, eBooks, SlideShares, and more!
First off, let me get one thing off my chest. If you don’t pay close attention to your data throughout the application consolidation or migration process, you are almost guaranteed delays and budget overruns. Data consolidation and migration is at least 30%-40% of the application go-live effort. We have learned this by helping customers deliver over 1,500 projects of this type. What’s worse, if you are not super meticulous about your data, you can be assured of encountering unhappy business stakeholders at the end of this treacherous journey. The users of your new application expect all their business-critical data to be there at the end of the road. All the bells and whistles in your new application will count for naught if the data falls apart. Imagine, if you will, students’ transcripts gone missing, or your frequent-flyer balance 100,000 miles short! Need I say more? Now, you may already be guessing where I am going with this. That’s right, we are talking about the myths and realities related to your data! Let’s explore a few of these.
Myth #1: All my data is there.
Reality #1: It may be there… but can you get it? If you want to find, access, and move all the data out of your legacy systems, you must have a good set of connectivity tools to easily and automatically find, access, and extract the data from your source systems. You don’t want to hand-code this for each source. Ouch!
Myth #2: I can just move my data from point A to point B.
Reality #2: You can try that approach if you want. However, you might not be happy with the results. The reality is that there can be significant gaps and format mismatches between the data in your legacy system and the data required by your new application. Additionally, you will likely need to assemble data from disparate systems. You need sophisticated tools to profile, assemble, and transform your legacy data so that it is purpose-fit for your new application.
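To make "profiling" concrete, here is the kind of pass a profiling tool performs over a legacy extract – counting nulls, distinct values, and format violations per column. The rows, column names, and date formats below are invented for illustration:

```python
import re

# Toy legacy extract; column names and values are hypothetical.
rows = [
    {"customer_id": "001", "signup": "2014-01-05"},
    {"customer_id": "002", "signup": "05/01/2014"},  # mismatched date format
    {"customer_id": None,  "signup": "2014-02-11"},
]

ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def profile(rows, column, pattern=None):
    """Summarize null count, distinct values, and optional format violations."""
    values = [r[column] for r in rows]
    report = {
        "nulls": sum(v is None for v in values),
        "distinct": len({v for v in values if v is not None}),
    }
    if pattern:
        report["format_violations"] = sum(
            1 for v in values if v is not None and not pattern.match(v))
    return report

id_report = profile(rows, "customer_id")
signup_report = profile(rows, "signup", ISO_DATE)
```

A report like this is what flags the gaps and mismatches – the null key and the non-ISO date above – before they become load failures in the new application.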
Myth #3: All my data is clean.
Reality #3: It’s not. And here is a tip: better to profile, scrub, and cleanse your data before you migrate it. You don’t want to put a shiny new application on top of questionable data. In other words, let’s get a fresh start on the data in your new application!
Myth #4: All my data will move over as expected.
Reality #4: It will not. Any time you move and transform large sets of data, there is room for logical or operational errors and surprises. The best way to avoid this is to automatically validate that your data has moved over as intended.
Myth #5: It’s a one-time effort.
Reality #5: ‘Load and explode’ is a formula for disaster. Our proven methodology recommends that you first prototype your migration path and identify a small subset of the data to move over. Then test it, tweak your model, try it again, and gradually expand. More importantly, your application architecture should not be a one-time effort. It is a work in progress and really an ongoing journey. Regardless of where you are on this journey, we recommend paying close attention to managing your application’s data foundation.
As you can see, there is a multitude of data issues that can plague an application consolidation or migration project and lead to its doom. These potential challenges are not always recognized and understood early on. This perception gap is a root cause of project failure. This is why we are excited to host Philip Russom, of TDWI, in our upcoming webinar to discuss data management best practices and methodologies for application consolidation and migration. If you are undertaking any IT modernization or rationalization project, such as consolidating applications or migrating legacy applications to the cloud or to an on-premise application such as SAP, this webinar is a must-see.
So what’s your reality going to be like? Will your project run like a dream or will it escalate into a scary nightmare? Here’s hoping for the former. And also hoping you can join us for this upcoming webinar to learn more:
Webinar with TDWI:
Successful Application Consolidation & Migration: Data Management Best Practices.
Date: Tuesday March 10, 10 am PT / 1 pm ET
Don’t miss out, Register Today!
Healthcare and data have the makings of an epic love affair, but like most relationships, it’s not all roses. Data is playing a powerful role in finding a cure for cancer, informing cost reduction, targeting preventative treatments and engaging healthcare consumers in their own care. The downside? Data is needy. It requires investments in connectedness, cleanliness and safety to maximize its potential.
- Data is ubiquitous…connect it.
4400 times the amount of information held at the Library of Congress – that’s how much data Kaiser Permanente alone has generated from its electronic medical record. Kaiser successfully makes every piece of information about each patient available to clinicians, including patient health history, diagnosis by other providers, lab results and prescriptions. As a result, Kaiser has seen marked improvements in outcomes: 26% reduction in office visits per member and a 57% reduction in medication errors.
Ongoing value, however, requires continuous investment in data. Investments in data integration and data quality ensure that information from the EMR is integrated with other sources (think claims, social, billing, supply chain) so that clinicians and decision makers have access in the format they need. Without this, self-service intelligence can be inhibited by duplicate data, poor quality data or application silos.
- Data is popular…ensure it is clean.
Healthcare leaders can finally rely on electronic data to make strategic decisions. Here’s a CHRISTUS Health anecdote you might relate to: in a weekly meeting, each executive reviews a strategic dashboard; these dashboards drive strategic decision making about CPOE (computerized physician order entry) adoption, emergency room wait times, and price per procedure. Powered by enterprise information management, these dashboards paint a reliable and consistent view across the system’s 60 hospitals. Prior to the implementation of an enterprise data platform, each executive was reliant on their own set of data.
In the pre-data investment era, seemingly common data elements from different sources did not mean the same thing. For example, “Admit Date” in one report reflected the emergency department admission date whereas “Admit Date” in another report referred to the inpatient admission date.
- Sharing data is necessary…make it safe.
To cure cancer, reduce costs and engage patients, care providers need access to data, and not just the data they generate; it has to be shared for coordination of care through transitions of care and across settings, i.e. home care, long-term care and behavioral health. Fortunately, consumers and clinicians agree on this: PwC reports that 56% of consumers and 30% of physicians are comfortable with data sharing for care coordination. Further progress is demonstrated by healthcare organizations willingly adopting cloud-based applications – as of 2013, 40% of healthcare organizations were already storing protected health information (PHI) in the cloud.
Increased data access, however, carries risk, leaving health data exposed. The threat of data breach or hacking is multiplied by the presence (in many cases necessary) of PHI on employee laptops and the fact that providers are granted increased access to PHI. Ponemon Institute, a security research firm, estimates that data breaches cost the industry $5.6 billion each year. Investments in data-centric security are necessary to assuage fear, protect personal health data, and make secure data sharing a reality.
Early improvements in patient outcomes indicate that the relationship between data and healthcare is a valuable investment. The International Institute of Analytics supports this, reporting that although analytics and data maturity across healthcare lags other industries, the opportunity to positively impact clinical and operational outcomes is significant.
I absolutely love football, so when the Super Bowl came to our hometown Phoenix, it was my paradise! Football on every.single.channel. Current and former NFL players were everywhere – I ate breakfast next to Howie Long and pumped gas next to Tony Romo. ESPN & NFL Network analysts were commentating from blocks away. Even our downtown was transformed into a giant celebration of football.
People often talk about the “Super Bowl of Marketing”, referring to the advertising extravaganza and the millions of dollars spent on hilarious (and sometimes not) commercials. But spending so much time immersed in the Super Bowl festivities got me thinking about one of my other fascinations… data! It was the Super Bowl of data too!
On Sunday morning, before the big game (or the Superb Owl, as Stephen Colbert would say), I got to witness first-hand the data-driven marketing potential at the NFL Experience in Downtown Phoenix. The NFL did an amazing job putting on this event – it was truly exceptional, with something for everyone.
Once we purchased our tickets, we decided to take the kids to do some Play 60 activities. Before they could participate, we were shuttled to a bank of computers to “get a wristband” and to sign a waiver. I’m sure the lawyers made sure that everyone participating in anything physical wouldn’t sue the NFL or the sponsors if they got a hangnail or twisted ankle. But the data ready marketer in me realized that these wristbands were much more than a liability waiver. They were also a data treasure map!
To get the wristband, you had to provide the NFL (and their sponsors) with your demographic & contact information, your favorite teams, your children’s names and ages, and give them permission to contact you. You also received an emailed QR code that you could use to unlock certain activities throughout the Experience.
As we moved around the Experience, they scanned our wristband or QR code at each activity. So now the NFL knows that we have 3 children, and their names and ages. They now know our two youngest love to play football (because they participated in a flag football Play 60 clinic). They now know that we are huge Denver Broncos fans and purchased a few new jerseys of our favorite players at their shop (where they again scanned our QR code for a small discount). They now know we use AT&T wireless and our phone numbers. They know that our boys really want to improve the speed of their throws because they went through the Peyton Manning Nationwide arm speed and throw accuracy activity five different times… and that nobody ever got over 35 MPH. And they also now know that none of us will ever become great kickers because we all seriously shanked our field goal tries! And we happily gave them all our data because they provided us with a meaningful service – a really fun, family experience.
Even better for them, for the first time, I actually logged into the NFL Mobile app and turned on location permissions so that I could get real-time alerts about what was going on in the area. Since I use the app all the time, that’s a lot of future data that I’ve now given them.
GMC sponsored the Experience and had a huge space in the main area to show off their new car lineup, and they definitely took full advantage of the data provided. They held a car giveaway that required you to scan your NFL QR code to start the process, and then answer several questions about your vehicle likes and future purchase plans. You then had to go around to your favorite three vehicles and answer questions about their amazing features (D, all of the above, was the answer of course!). After you visited your favorite vehicles, you took your QR code back to see if you won. My 13-year-old was hopeful that we were going to win him a new Denali, but sadly, we did not! And sadly for him, had we been fortunate enough to win, he wouldn’t be driving it anyway!
I waited a few days to write this blog because I was hopeful that I would receive some sort of personalized experience from the NFL that would blow my socks off. I’m not sure what technology the NFL & GMC marketing teams use, and if they are data ready. If they were though, I would have hoped they already would have engaged me with a personalized experience based on the data I have given them.
GMC has sent me a few emails, one with a photo that was taken green-screen style of my kids. And yes, I’ve downloaded it and have a photo of them with the GMC logo loud and proud on my desktop.
But other than that, nothing very exciting yet, and definitely nothing innovative or engaging. But I truly hope that the NFL & GMC use this data to provide me with a better, personalized experience. Isn’t that why our consumers freely offer their information? To receive something of value in return.
Here are a few ideas for you NFL:
- Special discounts on Denver Broncos apparel
- Alert from the NFL ticket exchange the next time the Broncos play the Cardinals in Arizona, and 5 tickets become available
- Information about how to sign up for NFL kids clinics
- Sorry GMC, I’m not quite sure what to suggest because we just bought a new Toyota a few months ago (but you know that I’m not in the market for a new car right now because I gave you that information too).
Thank you for a really wonderful experience, NFL & GMC! In this age of data-driven personalization, I am anxiously awaiting your next move! Now, are you ready for some football (sorry, couldn’t resist!)? But in all seriousness, are you ready to reach your data-driven marketing potential?
Will this be the beginning of the Super Bowl of Data Ready Marketing? As an NFL fan and consumer, I know I’m ready!
Are you ready? Please tell us in our survey about data ready marketing. The results are coming out soon so don’t miss your chance to be a part. You can find the link here.
Also, follow me on twitter – The Data Ready Marketer (@StephanieABest) for some of the latest & greatest news and insights on the world of data ready marketing.
And stay tuned because we have several new Data Ready Marketing pieces coming out soon – InfoGraphics, eBooks, SlideShares, and more!
A month ago, I shared that Frank Friedman believes CFOs are “the logical choice to own analytics and put them to work to serve the organization’s needs”. Even though many CFOs are increasingly taking on what could be considered an internal CEO or COO role, many readers pushed back on my post reviewing Frank Friedman’s argument. At the same time, CIOs have been very clear with me that they do not want to personally become their company’s data steward. So the question becomes: should companies create a CDO or CAO role to lead this important function? And if so, how common are these two roles anyway?
Regardless of eventual ownership, extracting value out of data is becoming a critical business capability. It is clear that data scientists should not be shoehorned into the traditional business analyst role. Data scientists have the unique ability to derive mathematical models “for the extraction of knowledge from data” (Data Science for Business, Foster Provost, 2013, p. 2). For this reason, Thomas Davenport argues that data scientists need to be able to network across the entire business and work at the intersection of business goals, constraints, processes, available data, and analytical possibilities. Given this, many organizations today are starting to experiment with the notion of a chief data officer (CDO) or a chief analytics officer (CAO). The open question is: should an enterprise have a CDO, a CAO, or both? And, just as important, where should each of these roles report in the organization?
Data policy versus business questions
In my opinion, it is critical to first look into the substance of each role before answering that question. The CDO should be about ensuring that information is properly secured, stored, transmitted, or destroyed. This includes, according to COBIT 5, ensuring that there are effective security and controls over information systems. To do this, procedures need to be defined and implemented to ensure the integrity and consistency of information stored in databases, data warehouses, and data archives. According to COBIT 5, data governance requires the following four elements:
- Clear information ownership
- Timely, correct information
- Clear enterprise architecture and efficiency
- Compliance and security
To me, these four elements should be the essence of the CDO role. The CAO, by contrast, is related but very different in terms of the nature of the role and the business skills required. The CRISP-DM model shows just how different the two roles are. According to CRISP-DM, the CAO role should be focused on business understanding, data understanding, data preparation, data modeling, and data evaluation. The CAO, in other words, is focused on using data to solve business problems, while the CDO is about protecting data as a business-critical asset. I was living in Silicon Valley during the “Internet Bust”. I remember seeing very few job postings, and the few that existed asked for a developer who could also act as a product manager and do some marketing as a part-time activity. This, of course, made no sense. I feel the same way about the idea of combining the CDO and CAO. One is about compliance and protecting data; the other is about solving business problems with data. Peanut butter and chocolate may work in a Reese’s cup, but it will not work here: the orientations are too different.
So which business leader should own the CDO and CAO?
Clearly, having two more C’s in the C-suite creates a more crowded list of corporate officers. Some have even said that this will extend what is called senior executive bloat. And, of course, how do these new roles work with and affect the CIO? The answer depends on the organization’s culture. However, where there isn’t an executive staff office, I suggest that these roles go to different places. Many companies already have their CIO function reporting to finance. Where this is the case, it is important to determine whether a COO function is in place. The COO could clearly own the CDO and CAO functions, because they have a significant role in improving business processes and capabilities. Where there isn’t a COO function and the CIO reports to the CEO, I think the CDO could report to the CIO, even though CIOs say they do not want to be data stewards. This could be a third function in parallel with the VP of Ops and VP of Apps. In that case, I would have the CAO report to one of the following: the CFO, Strategy, or IT. Again, this all depends on current organizational structure and corporate culture. Regardless of where it reports, the important thing is to focus the CAO on an enterprise analytics capability.
Author Twitter: @MylesSuer
The verdict is in. Data is now broadly perceived as a source of competitive advantage. We all feel the heat to deliver good data. It is no wonder organizations view Analytics initiatives as highly strategic. But the big question is, can you really trust your data? Or are you just creating pretty visualizations on top of bad data?
We also know there is a shift towards self-service Analytics. But did you know that according to Gartner, “through 2016, less than 10% of self-service BI initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business”?1 This means that you may actually show up at your next big meeting with data that contradicts your colleague’s data. Perhaps you are not working off of the same version of the truth. Maybe you have siloed data on different systems that are not working in concert. Or is your definition of ‘revenue’ or ‘leads’ different from that of your colleague’s?
So are we taking our data for granted? Are we just assuming that it’s all available, clean, complete, integrated and consistent? As we work with organizations to support their Analytics journey, we often find that the harsh realities of data are quite different from perceptions. Let’s further investigate this perception gap.
For one, people may assume they can easily access all data. In reality, if data connectivity is not managed effectively, we often need to beg, borrow, and steal to get the right data from the right person, if we are lucky. In less fortunate scenarios, we may need to settle for partial data or a cheap substitute for the data we really wanted. And you know what they say: the only thing worse than no data is bad data. Right?
Another common misperception is: “Our data is clean. We have no data quality issues”. Wrong again. When we work with organizations to profile their data, they are often quite surprised to learn that their data is full of errors and gaps. One company recently discovered within one minute of starting their data profiling exercise, that millions of their customer records contained the company’s own address instead of the customers’ addresses… Oops.
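A check like the one that caught those records can be sketched in a few lines of Python. This is a minimal data-profiling sketch, not the tooling used in the story; the field names and the company address below are hypothetical:

```python
# Minimal data-profiling sketch: flag customer records whose address
# matches the company's own address, plus records with blank addresses.
# Field names and the company address are hypothetical.
COMPANY_ADDRESS = "100 main st, anytown"

def normalize(addr):
    """Lower-case and collapse whitespace so near-duplicates compare equal."""
    return " ".join(addr.lower().split())

def profile_addresses(records):
    """Count records carrying the company address, blanks, and the total."""
    suspect = sum(1 for r in records
                  if normalize(r.get("address", "")) == COMPANY_ADDRESS)
    blank = sum(1 for r in records if not r.get("address", "").strip())
    return {"suspect": suspect, "blank": blank, "total": len(records)}

records = [
    {"customer_id": 1, "address": "100 Main St, Anytown"},
    {"customer_id": 2, "address": "42 Oak Ave, Springfield"},
    {"customer_id": 3, "address": ""},
]
print(profile_addresses(records))  # {'suspect': 1, 'blank': 1, 'total': 3}
```

Even a first pass like this, run against millions of rows, surfaces the kind of surprise described above within minutes.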
Another myth is that all data is integrated. In reality, your data may reside in multiple locations: in the cloud, on premise, in Hadoop and on mainframe and anything in between. Integrating data from all these disparate and heterogeneous data sources is not a trivial task, unless you have the right tools.
And here is one more consideration to mull over. Do you find yourself manually hunting down and combining data to reproduce the same ad hoc report over and over again? Perhaps you often find yourself doing this in the wee hours of the night? Why reinvent the wheel? It would be more productive to automate the process of data ingestion and integration for reusable and shareable reports and Analytics.
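The automation described above can start as small as a reusable ingest-and-aggregate step, so the recurring report is produced by code rather than by late-night copy-and-paste. The source names, fields, and the revenue aggregation below are purely illustrative:

```python
# Sketch of a reusable pipeline step for a recurring report: combine rows
# from several sources, then aggregate with one shared metric definition.
# Source names and fields are illustrative.
from collections import defaultdict

def ingest(*sources):
    """Combine rows from several sources into one list (simple union)."""
    rows = []
    for source in sources:
        rows.extend(source)
    return rows

def revenue_by_region(rows):
    """Aggregate revenue per region using one shared definition of 'revenue'."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["revenue"]
    return dict(totals)

crm_rows = [{"region": "West", "revenue": 1200.0}]
web_rows = [{"region": "West", "revenue": 300.0},
            {"region": "East", "revenue": 500.0}]
report = revenue_by_region(ingest(crm_rows, web_rows))
print(report)  # {'West': 1500.0, 'East': 500.0}
```

Because the combining and the metric definition live in one place, everyone who reruns the report gets the same version of the truth.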
Simply put, you need great data for great Analytics. We are excited to host Philip Russom of TDWI in a webinar to discuss how data management best practices can enable successful Analytics initiatives.
And how about you? Can you trust your data? Please join us for this webinar to learn more about building a trust-relationship with your data!
- 1. Gartner, “Predicts 2015: Power Shift in Business Intelligence and Analytics Will Fuel Disruption”; Josh Parenteau, Neil Chandler, Rita L. Sallam, Douglas Laney, Alan D. Duncan; November 21, 2014.