Category Archives: Data Quality

What’s Driving Core Banking Modernization?


When’s the last time you visited your local bank branch and spoke to a human being? How about talking to your banker over the phone? Can’t remember? Well, you’re not alone, and don’t worry, it’s not a bad thing. The days of operating physical branches with expensive staff to greet and service customers are giving way to modern, customer-friendly mobile banking applications that let consumers deposit checks from their phones, apply for a mortgage and sign closing documents electronically, and even skip the trip to the ATM for physical cash by using mobile payment solutions like Apple Pay. In fact, a new report titled ‘Bricks + Clicks: Building the Digital Branch,’ from Jeanne Capachin and Jim Marous, takes an in-depth look at how banks and credit unions are changing their branch and customer channel strategies to meet the demands of today’s digital banking customer.

Why am I talking about this? These market trends are dominating the CEO and CIO agenda in today’s banking industry. I just returned from the 2015 IDC Asian Financial Congress event in Singapore, where the digital journey for the next-generation bank was a major agenda item. According to IDC Financial Insights, global banks will invest $31.5B USD in core banking modernization to enable these services, improve operational efficiency, and position themselves to better compete on technology and convenience across markets. Core banking modernization initiatives are complex, costly, and fraught with risks. Let’s take a closer look.


Startup Winners of the Informatica Data Mania Connect-a-Thon

Last week was Informatica’s first ever Data Mania event, held at the Contemporary Jewish Museum in San Francisco. We had an A-list lineup of speakers from leading cloud and data companies, such as Salesforce, Amazon Web Services (AWS), Tableau, Dun & Bradstreet, Marketo, AppDynamics, Birst, Adobe, and Qlik. The event and speakers covered a range of topics all related to data, including Big Data processing in the cloud, data-driven customer success, and cloud analytics.

While these companies are giants today in the world of cloud and have created their own unique ecosystems, we also wanted to take a peek at and hear from the leaders of tomorrow. Before startups can become market leaders in their own realm, they face the challenge of ramping up a stellar roster of customers so that they can get to subsequent rounds of venture funding. But what gets in their way are the numerous data integration challenges of onboarding customer data onto their software platform. When these challenges remain unaddressed, R&D resources are spent on professional services instead of building value-differentiating IP.  Bugs also continue to mount, and technical debt increases.

Enter the Informatica Cloud Connector SDK. Built entirely in Java and able to browse through any cloud application’s API, the Cloud Connector SDK parses the metadata behind each data object and presents it in the context of what a business user should see. We had four startups build a native connector to their application in less than two weeks: BigML, Databricks, FollowAnalytics, and ThoughtSpot. Let’s take a look at each one of them.
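
Before we get to the startups, it helps to picture what “parsing the metadata behind each data object” looks like in practice. The actual Connector SDK is built in Java and its real API is not shown here; the sketch below is only a schematic Python illustration of the general metadata-driven pattern, with entirely made-up object and field names:

```python
# Schematic illustration only: the real Informatica Cloud Connector SDK is
# Java-based and has its own API. This sketch shows the general idea of
# reading raw field metadata from a cloud application's API and presenting
# the business-friendly view a user should see. All names are made up.
raw_metadata = {
    "object": "acct__c",
    "fields": [
        {"api_name": "acct_nm__c", "type": "string", "label": "Account Name"},
        {"api_name": "arr_amt__c", "type": "currency", "label": "Annual Recurring Revenue"},
        {"api_name": "rnw_dt__c", "type": "date", "label": "Renewal Date"},
    ],
}

def business_view(metadata):
    """Hide cryptic API names; surface the labels a business user expects."""
    return [(field["label"], field["type"]) for field in metadata["fields"]]

for label, field_type in business_view(raw_metadata):
    print(f"{label} ({field_type})")
```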

BigML

With predictive analytics becoming a growing imperative, machine-learning algorithms that can deliver a higher probability of accurate prediction are becoming increasingly important. BigML provides an intuitive yet powerful machine-learning platform for actionable and consumable predictive analytics. Watch their demo on how they used Informatica Cloud’s Connector SDK to help them better predict customer churn.

Can’t play the video? Click here: http://youtu.be/lop7m9IH2aw

Databricks

Databricks was founded out of the UC Berkeley AMPLab by the creators of Apache Spark. Databricks Cloud is a hosted end-to-end data platform powered by Spark. It enables organizations to unlock the value of their data, seamlessly transitioning from data ingest through exploration and production. Watch their demo, which showcases how the Informatica Cloud connector for Databricks Cloud was used to analyze lead contact rates in Salesforce, and to perform machine learning on a dataset built using either Scala or Python.

Can’t play the video? Click here: http://youtu.be/607ugvhzVnY

FollowAnalytics

With mobile usage growing by leaps and bounds, the area of customer engagement on a mobile app has become a fertile area for marketers. Marketers are charged with acquiring new customers, increasing customer loyalty and driving new revenue streams. But without the technological infrastructure to back them up, their efforts are in vain. FollowAnalytics is a mobile analytics and marketing automation platform for the enterprise that helps companies better understand audience engagement on their mobile apps. Watch this demo where FollowAnalytics first builds a completely native connector to its mobile analytics platform using the Informatica Cloud Connector SDK and then connects it to Microsoft Dynamics CRM Online using Informatica Cloud’s prebuilt connector for it. Then, see FollowAnalytics go one step further by performing even deeper analytics on their engagement data using Informatica Cloud’s prebuilt connector for Salesforce Wave Analytics Cloud.

Can’t play the video? Click here: http://youtu.be/E568vxZ2LAg

ThoughtSpot

Analytics has taken center stage this year due to the rise in cloud applications, but most of the existing BI tools out there still stick to the old way of doing BI. ThoughtSpot brings a consumer-like simplicity to the world of BI by allowing users to search for the information they’re looking for just as if they were using a search engine like Google. Watch this demo where ThoughtSpot uses Informatica Cloud’s vast library of over 100 native connectors to move data into the ThoughtSpot appliance.

Can’t play the video? Click here: http://youtu.be/6gJD6hRD9h4


Informatica and Hortonworks Talk Analytics in Insurance


On March 25th, Josh Lee, Global Director for Insurance Marketing at Informatica and Cindy Maike, General Manager, Insurance at Hortonworks, will be joining the Insurance Journal in a webinar on “How to Become an Analytics Ready Insurer”.

Register for the Webinar on March 25th at 10am Pacific/ 1pm Eastern

Josh and Cindy exchange perspectives on what “analytics ready” really means for insurers, and today we are sharing some of their views on the five questions posed below (join the webinar to learn more). Please join Insurance Journal, Informatica and Hortonworks on March 25th for more on this exciting topic.

See the Hortonworks site for a second posting of this blog and more details on exciting innovations in Big Data.

  1. What makes a big data environment attractive to an insurer?

CM: Many insurance companies are using new types of data to create innovative products that better meet their customers’ risk needs. For example, we are seeing insurance for “shared vehicles” and new products for prevention services. Much of this innovation is made possible by the rapid growth in sensor and machine data, which the industry incorporates into predictive analytics for risk assessment and claims management.

Customers who buy personal lines of insurance also expect the same type of personalized service and offers they receive from retailers and telecommunication companies. They expect carriers to have a single view of their business that permeates customer experience, claims handling, pricing and product development. Big data in Hadoop makes that single view possible.

JL: Let’s face it, insurance is all about analytics. Better analytics leads to better pricing, reduced risk and better customer service. But here’s the issue: existing data stores are costly for holding vast amounts of data and too inflexible to adapt to the changing needs of innovative analytics. Imagine kicking off a simulation or modeling routine one evening only to return in the morning and find it incomplete or lacking data that requires a special request of IT.

This is where big data environments are helping insurers. Larger, more flexible data sets allow longer series of analytics to be run, generating better results. And imagine doing all that at a fraction of the cost and time of traditional data structures. Oh, and heaven forbid you ask a mainframe to do any of this.

  2. So we hear a lot about Big Data being great for unstructured data. What about traditional data types that have been used in insurance forever?

CM: Traditional data types are very important to the industry – they drive our regulatory reporting and much of our performance management reporting. This data will continue to play a very important role in the insurance industry and for individual companies.

However, big data can now enrich that traditional data with new data sources for new insights. In areas such as customer service and product personalization, it can make the difference between cross-selling the right products to meet customer needs and losing the business. For commercial and group carriers, the new data provides the ability to better analyze risk needs, price accordingly and enable superior service in a highly competitive market.

JL: Traditional data will always be around. I doubt that I will outlive a mainframe installation at an insurer, which makes me a little sad. And for many rote tasks like financial reporting, a sales report, or a commission statement, those sources are sufficient. However, the business of insurance is changing in leaps and bounds. Innovators in data science are interested in correlating those traditional sources with other, more creative data to find new products, or areas to reduce risk. There is just a lot of data that is either ignored or locked in obscure systems that needs to be brought into the light. This data could be structured or unstructured, it doesn’t matter, and Big Data can assist there.

  3. How does this fit into an overall data management function?

JL: At the end of the day, a Hadoop cluster is another source of data for an insurer. More flexible, more cost effective and higher speed; but yet another data source for an insurer. So that’s one more on top of relational, cubes, content repositories, mainframes and whatever else insurers have latched onto over the years. So if it wasn’t completely obvious before, it should be now. Data needs to be managed. As data moves around the organization for consumption, it is shaped, cleaned, copied and we hope there is governance in place. And the Big Data installation is not exempt from any of these routines. In fact, one could argue that it is more critical to leverage good data management practices with Big Data not only to optimize the environment but also to eventually replace traditional data structures that just aren’t working.

CM: Insurance companies are blending new and old data and looking for the best ways to leverage “all data”. We are witnessing the development of a new generation of advanced analytical applications to take advantage of the volume, velocity, and variety in big data. We can also enhance current predictive models, enriching them with the unstructured information in claim and underwriting notes or diaries along with other external data.

There will be challenges. Insurance companies will still need to make important decisions on how to incorporate the new data into existing data governance and data management processes. The Chief Data or Chief Analytics officer will need to drive this business change in close partnership with IT.

  4. Tell me a little bit about how Informatica and Hortonworks are working together on this.

JL: For years Informatica has been helping our clients to realize the value in their data and analytics. And while enjoying great success in partnership with our clients, unlocking the full value of data requires new structures, new storage and something that doesn’t break the bank for our clients. So Informatica and Hortonworks are on a continuing journey to show that value in analytics comes with strong relationships between the Hadoop distribution and innovative market leading data management technology. As the relationship between Informatica and Hortonworks deepens, expect to see even more vertically relevant solutions and documented ROI for the Informatica/Hortonworks solution stack.

CM: Informatica and Hortonworks optimize the entire big data supply chain on Hadoop, turning data into actionable information to drive business value. By incorporating data management services into the data lake, companies can store and process massive amounts of data across a wide variety of channels including social media, clickstream data, server logs, customer transactions and interactions, videos, and sensor data from equipment in the field.

Matching data from internal sources (e.g. very granular data about customers) with external data (e.g. weather data or driving patterns in specific geographic areas) can unlock new revenue streams.
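
As a rough illustration of that kind of matching, here is a minimal sketch, with hypothetical columns and values, of enriching granular policyholder records with external weather data on a shared geographic key (using pandas):

```python
# A minimal sketch (not Informatica/Hortonworks code) of enriching internal
# policyholder data with an external source, here hypothetical hail-event
# counts keyed by ZIP code. Columns and values are illustrative.
import pandas as pd

policyholders = pd.DataFrame({
    "policy_id": [1001, 1002, 1003],
    "zip_code": ["75201", "10001", "60601"],
    "annual_premium": [1200.0, 950.0, 1430.0],
})

weather = pd.DataFrame({
    "zip_code": ["75201", "10001", "60601"],
    "hail_events_5yr": [14, 2, 6],
})

# Join internal and external data on the shared geographic key.
enriched = policyholders.merge(weather, on="zip_code", how="left")

# Flag policies in hail-prone areas as candidates for re-pricing or outreach.
enriched["hail_risk"] = enriched["hail_events_5yr"] > 10
print(enriched)
```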

See this video for a discussion on unlocking those new revenue streams. Sanjay Krishnamurthi, Informatica CTO, and Shaun Connolly, Hortonworks VP of Corporate Strategy, share their perspectives.

  5. Do you have any additional comments on the future of data in this brave new world?

CM: My perspective is that, over time, we will drop the reference to “big” or “small” data and get back to referring simply to “data”. The term big data has been useful for describing the growing awareness of how new data types can help insurance companies grow.

We can no longer use “traditional” methods to gain insights from data. Insurers need a modern data architecture to store, process and analyze data—transforming it into insight.

We will see an increase in new market entrants in the insurance industry, and existing insurance companies will improve their products and services based upon the insights they have gained from their data, regardless of whether that was “big” or “small” data.

JL: I’m sure that even now there is someone locked in their mother’s basement playing video games and trying to come up with the next data storage wave. So we have that to look forward to, and I’m sure it will be cool. But, if we are honest with ourselves, we’ll admit that we really don’t know what to do with half the data that we have. So while data storage structures are critical, the future holds even greater promise for new models, better analytical tools and applications that can make sense of all of this and point insurers in new directions. The trend that won’t change anytime soon is the ongoing need for good quality data, data ready at a moment’s notice, safe and secure and governed in a way that insurers can trust what those cool analytics show them.

Please join us for an interactive discussion on March 25th at 10am Pacific Time/ 1pm Eastern Time.

Register for the Webinar on March 25th at 10am Pacific/ 1pm Eastern


Building an Impactful Data Governance Program – One Step at a Time

Let’s face it, building a Data Governance program is no overnight task. As one CDO puts it: “data governance is a marathon, not a sprint.” Why? Because data governance is a complex business function that encompasses technology, people and process, all of which have to work together effectively to ensure the success of the initiative. Because of the scope of the program, Data Governance often calls for participants from different business units within an organization, and it can be disruptive at first.

Why bother, then, given that data governance is complex, disruptive, and can introduce additional cost to a company? Well, the drivers for data governance vary across organizations. Let’s take a close look at some of the motivations behind a data governance program.

For companies in heavily regulated industries, establishing a formal data governance program is a mandate. When a company is not compliant, consequences can be severe. Penalties could include hefty fines, brand damage, loss in revenue, and even potential jail time for the person held accountable for the noncompliance. To meet ongoing regulatory requirements and adhere to data security policies and standards, companies need clean, connected and trusted data that enables transparency and auditability in their reporting and answers critical questions from auditors. Without a dedicated data governance program in place, the compliance initiative can become an ongoing nightmare for companies in regulated industries.

A data governance program can also be established to support a customer centricity initiative. To make effective cross-sells and up-sells to your customers and grow your business, you need clear visibility into customer purchasing behaviors across multiple shopping channels and touch points. Customers’ shopping behaviors and attributes are captured in your data; therefore, to gain a thorough understanding of your customers and boost your sales, a holistic Data Governance program is essential.

Other reasons for companies to start a data governance program include improving efficiency and reducing operational cost, supporting better analytics and driving more innovation. As long as the area is business-critical, data is at the core of the process, and the business case is loud and sound, there is a compelling reason for launching a data governance program.

Now that we have identified the drivers for data governance, how do we start? This rather loaded question really gets into the details of the implementation. A few critical elements come into consideration, including: identifying and establishing various task forces such as a steering committee, a data governance team and business sponsors; identifying roles and responsibilities for the stakeholders involved in the program; and defining metrics for tracking results. And soon you will find that, on top of everything, communication, communication and more communication is probably the most important tactic of all for driving the initial success of the program.

A rule of thumb? Start small, take one step at a time and focus on producing something tangible.

Sounds easy, right? Well, let’s hear what real-world practitioners have to say. Join us at this Informatica webinar to hear Michael Wodzinski, Director of Information Architecture; Lisa Bemis, Director of Master Data; and Fabian Torres, Director of Project Management at Houghton Mifflin Harcourt, a global leader in publishing, as well as David Lyle, VP of Product Strategy at Informatica, discuss how to implement a successful data governance practice that brings business impact to an enterprise organization.

If you are currently kicking the tires on setting up a data governance practice in your organization, I’d like to invite you to visit a member-only website dedicated to Data Governance: http://governyourdata.com/. This site currently has over 1,000 members and is designed to foster open communication on everything data governance. There you will find conversations on best practices, methodologies, frameworks, tools and metrics. I would also encourage you to take a data governance maturity assessment to see where you currently stand on the data governance maturity curve, and to compare the result against industry benchmarks. More than 200 members have taken the assessment to gain a better understanding of their current data governance programs, so why not give it a shot?


Data Governance is a journey, and likely a never-ending one. We wish you the best of luck on this effort and a joyful ride! We’d love to hear your stories.


Great Data Increases Value and De-Risks the Drone


At long last, the anxiously awaited rules from the FAA have brought some clarity to the world of commercial drone use. Up until now, commercial drone use has been prohibited. The new rules, of course, won’t sit well with Amazon, which would like to drop merchandise on your porch at all hours. But the rules do work really well for insurers who would like to use drones to service their policyholders. So now drones, and soon fleets of unmanned cars, will be driving the roadways in any number of capacities. It seems to me to be an ambulance chaser’s dream come true. I mean, who wouldn’t want some seven- or eight-figure payday from Google for getting rear-ended?

What about “Great Data”? What does that mean in the context of unmanned vehicles, both aerial and terrestrial? Let’s talk about two aspects. The first is the business benefits of great data in unmanned drone use; the second, covered further below, is how great data helps manage liability.

An insurance adjuster or catastrophe responder can leverage an aerial drone to survey large areas from a central location, pinpointing the locations that need attention for further investigation. This is a common scenario that many insurers mention when the topic of aerial drone use comes up. Second to that is the ability to survey damage in hard-to-reach locations like roofs or difficult terrain (like farmland). But this is where great data comes into play. Surveying, service and use of unmanned vehicles demand that your data can answer some of the following questions for your staff operating in this new world:

Where am I?

Quality data, including geocoded locations, is critical. To home in on key risk locations, your data must line up with the lat/long recorded by your unmanned vehicles and with the location of your operator. Ensure clean data through robust data quality practices.
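
As a concrete illustration, here is a minimal sketch, with made-up coordinates and an illustrative tolerance, of reconciling a drone’s reported position against the geocoded risk location on file:

```python
# A minimal sketch of checking whether a drone's reported position is
# within range of a geocoded risk location on file. Coordinates and the
# 0.5 km tolerance are made up for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

property_lat, property_lon = 32.7767, -96.7970  # geocoded insured location
drone_lat, drone_lon = 32.7801, -96.8005        # drone telemetry

distance = haversine_km(property_lat, property_lon, drone_lat, drone_lon)
print(f"Drone is {distance:.2f} km from the insured location")
if distance > 0.5:
    print("Warning: drone position does not match the risk location on file")
```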

Where are my policyholders?

Knowing the location of your policyholders not only relies on good data quality, but on knowing who they are and what risks you are there to help service. This requires a total customer relationship solution where you have a full view of not only locations, but risks, coverages and entities making up each policyholder.

What am I looking at?

Archived, current and work-in-process imagery is a key area where a Big Data environment can assist over time. By comparing saved images against those from new and in-process claims, the drone operator can quickly detect claims fraud as well as additional opportunities for service.
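
One simple way to make such comparisons is a perceptual “average hash” of each image. The sketch below assumes the Pillow imaging library, placeholder file names, and an illustrative match threshold:

```python
# A minimal sketch of comparing an archived property image against a new
# claim photo using an average hash. File names are placeholders; requires
# Pillow (pip install Pillow).
from PIL import Image

def average_hash(path, size=8):
    """Reduce an image to a 64-bit brightness fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Count the bits on which two fingerprints differ."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

archived = average_hash("roof_2014_survey.jpg")  # image on file
current = average_hash("roof_claim_photo.jpg")   # new claim submission

# A very small distance means the new "damage" photo closely matches the
# archived image, a possible signal of a recycled photo or inflated claim.
if hamming(archived, current) < 5:
    print("Images are near-identical; flag claim for review")
```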

Now that we’ve answered the business value questions and leveraged this new technology to better service policyholders and speed claims, let’s turn to how great data can be used to protect the insurer and drone operator from liability claims. This is important. The FAA has stopped short of requiring commercial drone operators to carry special liability insurance, leaving that instead to the drone operators to orchestrate with their insurers. And now we’re back to great data. As everyone knows, accidents happen. Technology, especially robotic mobile technology, is not infallible. Something will crash somewhere, hopefully not causing injury or death, but sadly that too will likely happen. And there is nothing that will keep the ambulance chasers at bay more than robust, great data. Any insurer offering liability cover for a drone operator should require that some of the following questions can be answered by the commercial enterprise. The interesting fact is that this information should be readily available if the business questions above have been answered.

  • Where was my drone?
  • What was it doing?
  • Was it functioning properly?

Properly using the same data management technology as in the previous questions will provide valuable data to be used as evidence in a liability case against a drone operator. Insurers would be wise to ask these questions of their liability policyholders who are using unmanned technology, as a way to gauge liability exposure in this brave new world. The key to this risk assessment is robust data management and great data feeding the insurer’s unmanned policyholder service workers.

Time will tell all the great and imaginative things that will take place with this new technology. One thing is for certain. Great data management is required in all aspects from amazing customer service to risk mitigation in operations.  Happy flying to everyone!!


Payers – What They Are Good At, And What They Need Help With


In our house when we paint a room, my husband does the big rolling of the walls or ceiling, and I do the cut-in work. I am good at prepping the room, taping all the trim and deliberately painting the corners. However, I am thrifty and constantly concerned that we won’t have enough paint to finish a room. My husband isn’t afraid to use enough paint and is extremely efficient at painting a wall in a single even coat. As a result, I don’t do the big rolling and he doesn’t do the cutting in. It took us a while to figure this out, and a few rooms had to be repainted in the process. Now we know what we are good at, and what we need help with.

Payers’ roles are changing. Payers were previously focused on risk assessment, setting and collecting premiums, analyzing claims and making payments – all while optimizing revenues. Payers are pretty good at selling to employers, figuring out the cost/benefit ratio from an employer’s perspective and ensuring a good, profitable product. With the advent of the Affordable Care Act, along with a much more transient insured population, payers now must focus more on the individual insured and be able to communicate with individuals in a more nimble manner than in the past.

Individual members will shop for insurance based on consumer feedback and price. They are interested in ease of enrollment and the ability to submit and substantiate claims quickly and intuitively. Payers are discovering that they need to help manage population health at an individual member level. And population health management requires less of a business-data analytics approach and more social-media and gaming-style logic to understand patients. In this way, payers can help develop interventions that sustain behavioral changes for better health.

When designing such analytics, payers should keep a few key design considerations in mind.

Due to payers’ mature predictive analytics competencies, they will have a much easier time with this next generation of population behavior analytics than their provider counterparts. Because clinical content is often unstructured, compared with claims data, payers need to pay extra attention to context and semantics when deciphering clinical content submitted by providers. Payers can get help from vendors that specialize in understanding unstructured data about individual members, and can then use that data to create fantastic predictive analytic solutions.


Asia-Pacific Ecommerce surpassed Europe and North America

With a total B2C e-commerce turnover of $567.3bn, Asia-Pacific was the strongest e-commerce region in the world in 2013, surpassing Europe ($482.3bn) and North America ($452.4bn). Online sales in Asia-Pacific are expected to have reached $799.2 billion in 2014, according to the latest report from the Ecommerce Foundation.

Revenue: China, followed by Japan and Australia
As a matter of fact, China was the second-largest e-commerce market in the world, only behind the US ($419.0 billion), and for 2014 it is estimated that China even surpassed the US ($537.0 billion vs. $456.0 billion). In terms of B2C e-commerce turnover, Japan ($136.7 billion) ranked second, followed by Australia ($35.7 billion), South Korea ($20.2 billion) and India ($10.7 billion).

On average, Asian-Pacific e-shoppers spent $1,268 online in 2013
Ecommerce Europe’s research reveals that 235.7 million consumers in Asia-Pacific purchased goods and services online in 2013. On average, APAC online consumers each spent $1,268 online in 2013, slightly lower than the global average of $1,304. At $2,167, Australian e-shoppers were the biggest spenders online, followed by the Japanese ($1,808) and the Chinese ($1,087).

Mobile: Japan and Australia lead the pack
In frequency of mobile purchasing, Japan shows the highest adoption, followed by Australia. An interesting fact is that 50% of transactions are done at home, 20% at work and 10% on the go.


You can download the full report here. What does this mean for your business opportunity? Read more on the 2015 omnichannel trends that are driving customer experience. Happy to discuss @benrund.


A True Love Quiz: Is Your Marketing Data Right For You?

Valentine’s Day is such a strange holiday. It always seems to bring up more questions than answers. And the internet always seems to have a quiz to find out the answer! There’s the “Does he have a crush on you too – 10 simple ways to find out” quiz. There’s the “What special gift should I get her this Valentine’s Day?” quiz. And the ever popular “Why am I still single on Valentine’s Day?” quiz.

Well, Marketers, it’s your lucky Valentine’s Day! We have a quiz for you too! It’s about your relationship with data. Where do you stand? Are you ready to take the next step?


Question 1:  Do you connect – I mean, really connect – with your data?
□ (A) Not really. We just can’t seem to get it together and really connect.
□ (B) Sometimes.  We connect on some levels, but there are big gaps.
□ (C) Most of the time.  We usually connect, but we miss out on some things.
□ (D) We are a perfect match!  We connect about everything, no matter where, no matter when.

Translation: Data ready marketers have access to the best possible data, no matter what form it is in, no matter what system it is in. They are able to make decisions based on everything the entire organization “knows” about their customer/partner/product – with a complete 360-degree view. They are also able to connect to and integrate with data outside the bounds of their organization to achieve the sought-after 720-degree view. They can integrate and react to social media comments, trends, and feedback – in real time – and match them to an existing record whenever possible. And they can quickly and easily bring together any third-party data sources they may need.
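
To illustrate the matching piece, here is a minimal sketch, with made-up records and an illustrative threshold, of resolving an inbound social-media name against existing customer records using simple fuzzy matching:

```python
# A minimal sketch of matching an inbound social-media mention to an
# existing customer record with fuzzy name matching. The records and the
# 0.8 threshold are illustrative.
from difflib import SequenceMatcher

customers = [
    {"id": 1, "name": "Katherine Ramirez", "email": "k.ramirez@example.com"},
    {"id": 2, "name": "John Smythe", "email": "jsmythe@example.com"},
]

def best_match(social_name, records, threshold=0.8):
    """Return the closest customer record, or None below the threshold."""
    scored = [
        (SequenceMatcher(None, social_name.lower(), r["name"].lower()).ratio(), r)
        for r in records
    ]
    score, record = max(scored, key=lambda pair: pair[0])
    return record if score >= threshold else None

# A tweet signed "Kathryn Ramirez" still resolves to customer 1.
match = best_match("Kathryn Ramirez", customers)
print(match["id"] if match else "No confident match; route to manual review")
```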


Question 2:  How good looking & clean is your data?
□ (A) Yikes, not very. But it’s what’s on the inside that counts, right?
□ (B) It’s ok.  We’ve both let ourselves go a bit.
□ (C) It’s pretty cute.  Not supermodel hot, but definitely girl or boy next door cute.
□ (D) My data is HOT!  It’s perfect in every way!

Translation: Marketers need data that is reliable and clean. According to a recent Experian study, American companies believe that 25% of their data is inaccurate, and the rest of the world isn’t much more confident: 90% of respondents said they suffer from common data errors, and 78% have problems with the quality of the data they gather from disparate channels. Marketing decisions based upon inaccurate data lead to poor outcomes. And what’s worse, many marketers have no idea how good or bad their data is, so they have no idea what impact it is having on their marketing programs and analysis. The data ready marketer understands this and has a top-tier data quality solution in place to make sure their data is in the best shape possible.
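
As a small illustration of what a data quality check actually does, here is a minimal sketch that audits a record set against a few common validation rules; the rules and sample data are hypothetical:

```python
# A minimal sketch of auditing records for common data errors (missing
# values, malformed emails, bad postal codes). Rules and data are
# illustrative, not a complete data quality solution.
import re

records = [
    {"name": "Ana Lopez", "email": "ana.lopez@example.com", "zip": "94105"},
    {"name": "", "email": "bob@invalid", "zip": "9410"},
    {"name": "Cy Ng", "email": "cy.ng@example.com", "zip": ""},
]

EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def errors(rec):
    """Return the list of rules this record violates."""
    found = []
    if not rec["name"].strip():
        found.append("missing name")
    if not EMAIL.match(rec["email"]):
        found.append("bad email")
    if not re.fullmatch(r"\d{5}", rec["zip"]):
        found.append("bad zip")
    return found

flawed = [r for r in records if errors(r)]
print(f"{len(flawed) / len(records):.0%} of records have at least one error")
```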


Question 3:  Do you feel safe when you’re with your data?
□ (A) No, my data is pretty scary. 911 is on speed dial.
□ (B) I’m not sure actually. I think so?
□ (C) My data is mostly safe, but it’s got a little “bad boy” or “bad girl” streak.
□ (D) I protect my data, and it protects me back.  We keep each other safe and secure.

Translation: Marketers need to be able to trust the quality of their data, but they also need to trust the security of their data. Is it protected, or is it susceptible to theft and nefarious attacks like the ones that have been all over the news lately? Nothing keeps a CMO and their PR team up at night like worrying they are going to be the next brand on the cover of a magazine for losing millions of personal customer records. But beyond a high-profile data breach, marketers need to be concerned about data privacy. Are you treating customer data in the way that is expected and demanded? Are you using protected data in your marketing practices that you really shouldn’t be? Are you marketing to people on excluded lists?


Question 4:  Is your data adventurous and well-traveled, or is it more of a “home-body”?
□ (A) My data is all over the place and it’s impossible to find.
□ (B) My data is all in one place.  I know we’re missing out on fun and exciting options, but it’s just easier this way.
□ (C) My data is in a few places and I keep fairly good tabs on it. We can find each other when we need to, but it takes some effort.
□ (D) My data is everywhere, but I have complete faith that I can get ahold of any source I might need, when and where I need it.

Translation: Marketing data is everywhere: your marketing data warehouse, your CRM system, your marketing automation system. It’s throughout your organization in finance, customer support, and sales systems. It’s in third-party systems like social media and data aggregators. That means it’s in the cloud, it’s on premise, and everywhere in between. Marketers need to be able to get to and integrate data no matter where it “lives”.


Question 5:  Does your data take forever to get ready when it’s time to go do something together?
□ (A) It takes forever to prepare my data for each new outing. It’s definitely not “ready to go”.
□ (B) My data takes its time to get ready, but it’s worth the wait… usually!
□ (C) My data is fairly quick to get ready, but it does take a little time and effort.
□ (D) My data is always ready to go, whenever we need to go somewhere or do something.

Translation: One of the reasons many marketers end up in marketing is that it is fast paced and every day is different. Nothing is the same from day to day, so you need to be ready to act at a moment’s notice, and change course on a dime. Data ready marketers have a foundation of great data that they can point at any given problem, at any given time, without a lot of work to prepare it. If it is taking you days or even weeks to pull data together to analyze something new or test out a new hunch, it’s too late – your competitors have already done it!


Question 6:  Can you believe the stories your data is telling you?
□ (A) My data is wrong a lot. It stretches the truth, and I cannot rely on it.
□ (B) I really don’t know. I question these stories – dare I say excuses – but haven’t been able to prove it one way or the other.
□ (C) I believe what my data says most of the time. It rarely lets me down.
□ (D) My data is very trustworthy.  I believe it implicitly because we’ve earned each other’s trust.

Translation:  If your data is dirty, inaccurate, and/or incomplete, it is essentially “lying” to you. And if you cannot get to all of the data sources you need, your data is telling you “white lies”!  All of the work you’re putting into analysis and optimization is based on questionable data, and is giving you questionable results.  Data ready marketers understand this and ensure their data is clean, safe, and connected at all times.


Question 7:  Does your data help you around the house with your daily chores?
□ (A) My data just sits around on the couch watching TV.
□ (B) When I nag, my data will help out occasionally.
□ (C) My data is pretty good about helping out. It doesn’t take initiative, but it helps out whenever I ask.
□ (D) My data is amazing.  It helps out whenever it can, however it can, even without being asked.

Translation: Your marketing data can do so much. It should enable you to be “customer ready” – helping you to understand everything there is to know about your customers so you can design amazing personalized campaigns that speak directly to them. It should enable you to be “decision ready” – powering your analytics capabilities with great data so you can make great decisions and optimize your processes. But it should also enable you to be “showcase ready” – giving you the proof points to demonstrate marketing’s actual impact on the bottom line.


Now for the fun part… It’s time to rate your data relationship status
If you answered mostly (A):  You have a rocky relationship with your data.  You may need some data counseling!

If you answered mostly (B):  It’s time to decide if you want this data relationship to work.  There’s hope, but you’ve got some work to do.

If you answered mostly (C):  You and your data are at the beginning of a beautiful love affair.  Keep working at it because you’re getting close!

If you answered mostly (D): Congratulations, you have a strong data marriage that is based on clean, safe, and connected data.  You are making great business decisions because you are a data ready marketer!


Do You Love Your Data?
No matter what your data relationship status, we’d love to hear from you. Please take our survey about your use of data and technology. The results are coming out soon, so don’t miss your chance to be a part of it. https://www.surveymonkey.com/s/DataMktg

Also, follow me on twitter – The Data Ready Marketer – for some of the latest & greatest news and insights on the world of data ready marketing.  And stay tuned because we have several new Data Ready Marketing pieces coming out soon – InfoGraphics, eBooks, SlideShares, and more!


How to Ace Application Migration & Consolidation (Hint: Data Management)

Myth Vs Reality: Application Migration & Consolidation (No, it’s not about dating)

Will your application consolidation or migration go live on time and on budget? According to Gartner, “through 2019, more than 50% of data migration projects will exceed budget and/or result in some form of business disruption due to flawed execution.”1 That is a scary number by any measure. A colleague of mine put it well: ‘I wouldn’t get on a plane that had a 50% chance of failure.’ So should you be losing sleep over your migration or consolidation project? Well, that depends. Are you the former CIO of Levi Strauss who, according to Harvard Business Review, was forced to resign due to a botched SAP migration project and a $192.5 million earnings write-off?2 If so, perhaps you would feel a bit apprehensive. Otherwise, I say you can be cautiously optimistic, if you go into it with a healthy dose of reality: a good understanding of the potential pitfalls and how to address them, and an appreciation for the myths and realities of application consolidation and migration.

First off, let me get one thing off my chest. If you don’t pay close attention to your data throughout the application consolidation or migration process, you are almost guaranteed delays and budget overruns. Data consolidation and migration is at least 30%-40% of the application go-live effort. We have learned this by helping customers deliver over 1,500 projects of this type. What’s worse, if you are not super meticulous about your data, you can be assured of encountering unhappy business stakeholders at the end of this treacherous journey. The users of your new application expect all their business-critical data to be there at the end of the road. All the bells and whistles in your new application will matter naught if the data falls apart. Imagine, if you will, students’ transcripts gone missing, or your frequent-flyer balance 100,000 miles short! Need I say more? Now, you may already be guessing where I am going with this. That’s right, we are talking about the myths and realities related to your data. Let’s explore a few of these.

Myth #1: All my data is there.

Reality #1: It may be there… but can you get to it? To find, access and move all the data out of your legacy systems, you need a good set of connectivity tools that can do so easily and automatically. You don’t want to hand-code this for each source. Ouch!

Myth #2: I can just move my data from point A to point B.

Reality #2: You can try that approach if you want. However, you might not be happy with the results. The reality is that there can be significant gaps and format mismatches between the data in your legacy system and the data required by your new application. Additionally, you will likely need to assemble data from disparate systems. You need sophisticated tools to profile, assemble and transform your legacy data so that it is purpose-fit for your new application.
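
To make the gap problem concrete, here is a minimal sketch, with hypothetical legacy field names and formats, of transforming a legacy record into a new application’s schema:

```python
# A minimal sketch of reshaping a legacy record to fit a target schema.
# The legacy field names, formats, and target schema are all hypothetical.
from datetime import datetime

legacy_record = {"CUST_NM": "SMITH, JANE", "DOB": "03/15/1972", "BAL": "1,204.50"}

def transform(rec):
    """Close the gaps between legacy formats and the target schema."""
    last, first = [part.strip() for part in rec["CUST_NM"].split(",")]
    return {
        "first_name": first.title(),
        "last_name": last.title(),
        # Legacy MM/DD/YYYY becomes ISO 8601.
        "date_of_birth": datetime.strptime(rec["DOB"], "%m/%d/%Y").date().isoformat(),
        # Strip thousands separators and store a numeric balance.
        "balance": float(rec["BAL"].replace(",", "")),
    }

print(transform(legacy_record))
# {'first_name': 'Jane', 'last_name': 'Smith',
#  'date_of_birth': '1972-03-15', 'balance': 1204.5}
```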

Myth #3: All my data is clean.

Reality #3: It’s not. And here is a tip: better to profile, scrub and cleanse your data before you migrate it. You don’t want to put a shiny new application on top of questionable data. In other words, let’s get a fresh start on the data in your new application!

Myth #4: All my data will move over as expected.

Reality #4: It will not.  Any time you move and transform large sets of data, there is room for logical or operational errors and surprises.  The best way to avoid this is to automatically validate that your data has moved over as intended.
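
A simple version of that validation is reconciling row counts and order-independent checksums between source and target. The sketch below runs over small in-memory lists purely for illustration; in practice you would compute the same figures inside each system:

```python
# A minimal sketch of post-migration validation: compare row counts and an
# order-independent fingerprint of source vs. target. Sample rows are
# illustrative stand-ins for real table extracts.
import hashlib

source_rows = [("1001", "JANE SMITH"), ("1002", "RAJ PATEL")]
target_rows = [("1002", "RAJ PATEL"), ("1001", "JANE SMITH")]

def table_fingerprint(rows):
    """Order-independent checksum over all rows."""
    digests = sorted(hashlib.sha256("|".join(r).encode()).hexdigest() for r in rows)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

assert len(source_rows) == len(target_rows), "Row counts differ after migration"
assert table_fingerprint(source_rows) == table_fingerprint(target_rows), \
    "Checksums differ: data was altered in transit"
print("Counts and checksums match")
```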

Myth #5: It’s a one-time effort.

Reality #5: ‘Load and explode’ is a formula for disaster. Our proven methodology recommends you first prototype your migration path and identify a small subset of the data to move over. Then test it, tweak your model, try it again and gradually expand. More importantly, your application architecture should not be a one-time effort. It is a work in progress and really an ongoing journey. Regardless of where you are on this journey, we recommend paying close attention to managing your application’s data foundation.

As you can see, there is a multitude of data issues that can plague an application consolidation or migration project and lead to its doom. These potential challenges are not always recognized and understood early on, and this perception gap is a root cause of project failure. This is why we are excited to host Philip Russom of TDWI in our upcoming webinar to discuss data management best practices and methodologies for application consolidation and migration. If you are undertaking any IT modernization or rationalization project, such as consolidating applications or migrating legacy applications to the cloud or to an on-premise application such as SAP, this webinar is a must-see.

So what’s your reality going to be like?  Will your project run like a dream or will it escalate into a scary nightmare? Here’s hoping for the former.  And also hoping you can join us for this upcoming webinar to learn more:

Webinar with TDWI:
Successful Application Consolidation & Migration: Data Management Best Practices.

Date: Tuesday March 10, 10 am PT / 1 pm ET

Don’t miss out, Register Today!

1) Gartner report titled “Best Practices Mitigate Data Migration Risks and Challenges” published on December 9, 2014

2) Harvard Business Review: ‘Why your IT project may be riskier than you think’.


Healthcare’s Love Hate Relationship with Data


Healthcare and data have the makings of an epic love affair, but like most relationships, it’s not all roses. Data is playing a powerful role in finding a cure for cancer, informing cost reduction, targeting preventative treatments and engaging healthcare consumers in their own care. The downside? Data is needy. It requires investments in connectedness, cleanliness and safety to maximize its potential.

  • Data is ubiquitous…connect it.

4,400 times the amount of information held at the Library of Congress – that’s how much data Kaiser Permanente alone has generated from its electronic medical record. Kaiser successfully makes every piece of information about each patient available to clinicians, including patient health history, diagnoses by other providers, lab results and prescriptions. As a result, Kaiser has seen marked improvements in outcomes: a 26% reduction in office visits per member and a 57% reduction in medication errors.

Ongoing value, however, requires continuous investment in data. Investments in data integration and data quality ensure that information from the EMR is integrated with other sources (think claims, social, billing, supply chain) so that clinicians and decision makers have access in the format they need. Without this, self-service intelligence can be inhibited by duplicate data, poor quality data or application silos.

  • Data is popular…ensure it is clean.

Healthcare leaders can finally rely on electronic data to make strategic decisions. Here is a CHRISTUS Health anecdote you might relate to: in a weekly meeting, each executive reviews their strategic dashboard; these dashboards drive strategic decision making about CPOE (computerized physician order entry) adoption, emergency room wait times and price per procedure. Powered by enterprise information management, these dashboards paint a reliable and consistent view across the system’s 60 hospitals. Before the implementation of an enterprise data platform, each executive relied on their own set of data.

In the pre-data investment era, seemingly common data elements from different sources did not mean the same thing. For example, “Admit Date” in one report reflected the emergency department admission date whereas “Admit Date” in another report referred to the inpatient admission date.
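
A small sketch of how that kind of ambiguity gets resolved: map each source system’s fields to unambiguous canonical names, so “Admit Date” always means exactly one thing downstream (the system and field names below are hypothetical):

```python
# A minimal sketch of mapping ambiguous source fields to canonical names.
# Source systems, field names, and records are hypothetical.
FIELD_MAP = {
    ("ed_system", "Admit Date"): "ed_admission_date",
    ("inpatient_system", "Admit Date"): "inpatient_admission_date",
}

def canonicalize(source, record):
    """Rename ambiguous fields using the (source, field) mapping."""
    return {FIELD_MAP.get((source, key), key): value for key, value in record.items()}

ed_visit = canonicalize("ed_system", {"Admit Date": "2015-03-01", "MRN": "A123"})
stay = canonicalize("inpatient_system", {"Admit Date": "2015-03-02", "MRN": "A123"})
print(ed_visit)  # {'ed_admission_date': '2015-03-01', 'MRN': 'A123'}
print(stay)      # {'inpatient_admission_date': '2015-03-02', 'MRN': 'A123'}
```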

  • Sharing data is necessary…make it safe.

To cure cancer, reduce costs and engage patients, care providers need access to data, and not just the data they generate; it has to be shared for coordination of care through transitions and across settings, i.e., home care, long-term care and behavioral health. Fortunately, consumers and clinicians agree on this: PwC reports that 56% of consumers and 30% of physicians are comfortable with data sharing for care coordination. Further progress is demonstrated by healthcare organizations willingly adopting cloud-based applications – as of 2013, 40% of healthcare organizations were already storing protected health information (PHI) in the cloud.

Increased data access carries risk, however, leaving health data exposed. The threat of data breach or hacking is multiplied by the presence (in many cases necessary) of PHI on employee laptops and the increased access to PHI granted to providers. The Ponemon Institute, a security research firm, estimates that data breaches cost the industry $5.6 billion each year. Investments in data-centric security are necessary to assuage fear, protect personal health data and make secure data sharing a reality.

Early improvements in patient outcomes indicate that the relationship between data and healthcare is a valuable investment. The International Institute of Analytics supports this, reporting that although analytics and data maturity in healthcare lag other industries, the opportunity to positively impact clinical and operational outcomes is significant.
