Category Archives: Business Impact / Benefits

Marketing in a Data-Driven World… From Mad Men to Mad Scientist

Are Marketers More Mad Men or Mad Scientists?

I have been in marketing for over two decades. When I meet people in social situations, on airplanes, and on the sidelines at children’s soccer games and they ask what it is I do, the responses constantly amuse me and lead me to the conclusion that the general public has absolutely no idea what a marketer does. I am often asked things like “Have you created any commercials that I might have seen?” and peppered with questions that evoke visions of Mad Men-esque 1960s-style agency work and late-night, martini-filled creative pitch sessions.

I admit I do love to catch the occasional Mad Men episode, and a few weeks ago I stumbled upon one that had me chuckling. You may remember the one in which Don Draper is pitching a lipstick advertisement and, after persuading the executive to see things his way, says something along the lines of, “We’ll never know, will we? It’s not a science.”

How the times have changed. I would argue that in today’s data-driven world, marketing is no longer an art and is now squarely a science.

Sure, great marketers still understand their buyers at a gut level, but their hunches are no longer the impetus of a marketing campaign. Their hunches are now the impetus for a data-driven, fact-finding mission, and only after the analysis has been completed and confirms or contradicts the hunch is the campaign designed and launched.

This is only possible because today, marketers have access to enormous amounts of data – not just the basic demographics of years past. Most marketers realize that there is great promise in all of that data, but it’s just too complicated, time-consuming, and costly to truly harness it. How can you ever really make sense of the hundreds of data sources and tens of thousands of variables within those sources? Social media, web analytics, geo-targeting, internal customer and financial systems, in-house marketing automation systems, third-party data augmentation in the cloud… the list goes on and on!

How can marketers harness the right data, in the right way, right away? The answer starts with making the commitment that your marketing team – and hopefully your organization as a whole – will think “data first”. In the coming weeks, I will focus on what exactly thinking data first means, and how it will pay dividends to marketers.

In the meantime, I will make the personal commitment to be more patient about answering the silly questions and comments about marketers.

Now, it’s your turn to comment… 

What are some of the most amusing misconceptions about marketers that you’ve encountered?

- and -

Do you agree? Is marketing an art? A science? Or somewhere in between?



To Engage Business, Focus on Information Management rather than Data Management

Focus on Information Management

IT professionals have been pushing an Enterprise Data Management agenda for decades, rather than an Information Management agenda, and are frustrated with the lack of business engagement. So what exactly is the difference between Data Management and Information Management, and why does it matter? (more…)


Conversations on Data Quality in Underwriting – Part 2

Did I really compare data quality to flushing toilet paper?  Yeah, I think I did.  It makes me laugh when I read that, but it’s still true.  And yes, I am still playing with more data.  This time it’s a location schedule for earthquake risk.  I see a 26-story structure with a building value of only $136,000, built in who knows what year.  I’d pull my hair out if it weren’t already shaved off.

So let’s talk about the six steps for data quality competency in underwriting.  These six steps are standard in the enterprise.  What we will discuss is how to tackle them in insurance underwriting and, more importantly, the business impact of effectively adopting the competency.  It’s a repeating, self-reinforcing cycle, and when done correctly it can be intelligent and adaptive to changing business needs.

Profile – Effectively profile and discover data from multiple sources

We’ll start at the beginning, a very good place to start.  First you need to understand your data.  Where is it from and in what shape does it come?  Whether the sources are internal or external, the profile step will help identify the problem areas.  In underwriting, this will involve a lot of external submission data from brokers and MGAs.  This is then combined with internal and service bureau data to get a full picture of the risk.  Identify your key data points for underwriting and a desired state for that data.  Once the data is profiled, you’ll get a very good sense of where your troubles are.  Continue to profile as you bring other sources online, using the same standards of measurement.  As a side benefit, this will also help in remediating brokers that are not meeting the standard.
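
To make the profile step concrete, here is a minimal sketch in Python/pandas of the kind of checks it involves – measuring completeness per source and flagging implausible combinations. The column names, sources and thresholds are hypothetical; a data quality platform would automate and scale this, but the underlying idea is the same.

```python
import pandas as pd

# Hypothetical location schedule merged from broker/MGA submissions;
# column names and values are illustrative only.
locations = pd.DataFrame({
    "source":            ["BrokerA", "BrokerA", "MGA-7", "MGA-7"],
    "construction_code": ["MASONRY", None, "WOOD", "WOOD"],
    "year_built":        [1998, None, 2011, 1890],
    "stories":           [3, 26, 30, 1],
    "building_value":    [2_500_000, 136_000, 800_000, 150_000],
})

# Completeness by source: share of non-null values per critical field.
critical_fields = ["construction_code", "year_built", "building_value"]
completeness = locations.groupby("source")[critical_fields].apply(lambda g: g.notna().mean())
print(completeness)

# Basic plausibility profiling: tall structures with very low insured values.
suspect = locations[(locations["stories"] > 20) & (locations["building_value"] < 500_000)]
print(suspect)
```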

Measure – Establish data quality metrics and targets

As an underwriter, you will need to determine the quality bar for the data you use.  Usually this means flagging your most critical data fields for meeting underwriting guidelines.  See where you are and where you want to be.  Determine how you will measure the quality of the data as well as the desired state.  And by the way, actuarial and risk will likely do the same thing on the same or similar data.  Over time it all comes together as a team.
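
As a rough illustration of what metrics and targets can look like, the sketch below scores completeness per critical field against target levels. The fields and thresholds are made up for illustration; your underwriting guidelines would define the real ones.

```python
import pandas as pd

# Same kind of schedule as in the profiling step (values are illustrative).
locations = pd.DataFrame({
    "construction_code": ["MASONRY", None, "WOOD", "WOOD"],
    "year_built":        [1998, None, 2011, 1890],
    "building_value":    [2_500_000, 136_000, 800_000, None],
})

# Illustrative completeness targets per critical underwriting field.
targets = {"construction_code": 0.98, "year_built": 0.95, "building_value": 0.99}

scorecard = []
for field, target in targets.items():
    completeness = locations[field].notna().mean()
    scorecard.append({
        "field": field,
        "completeness": round(float(completeness), 2),
        "target": target,
        "meets_target": bool(completeness >= target),
    })

print(pd.DataFrame(scorecard))
```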

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot.  First look to your desired future state and your critical underwriting fields.  For each one, determine the rules by which you normally fix errant data.  For example, what do you do when you see a 30-story wood-frame structure?  How do you validate, cleanse and remediate that discrepancy?  This may involve fuzzy logic or supporting data lookups, and can easily be captured.  Do this, write it down, and catalog it to be codified in your data quality tool.  As you go along you will see a growing library of data quality rules being compiled for broad use.
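
Here is a simplified sketch of what one such rule could look like once written down and codified – in this case, the 30-story wood-frame check mentioned above. The story-count threshold and suggested remediation are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RuleResult:
    passed: bool
    message: str
    suggested_fix: Optional[str] = None

def check_construction_vs_stories(construction_code: str, stories: int) -> RuleResult:
    """Flag implausible construction/height combinations.

    Assumption for illustration only: wood-frame structures above six
    stories are treated as a data error and routed for remediation.
    """
    if construction_code.upper().startswith("WOOD") and stories > 6:
        return RuleResult(
            passed=False,
            message=f"{stories}-story wood-frame structure is implausible",
            suggested_fix="Verify construction code with the broker or an external property data source",
        )
    return RuleResult(passed=True, message="OK")

# The 30-story wood-frame case from the text:
print(check_construction_vs_stories("WOOD", 30))
```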

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed for reuse across the organization.  This is the beautiful, magical thing that happens.  Your institutional knowledge of your underwriting criteria can be captured and reused.  This doesn’t mean just once; the rules are reused to cleanse existing data, new data and everything going forward.  Your analysts will love you, your actuaries and risk modelers will love you; you will be a hero.
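
One way to picture deploy-for-reuse is a shared rule catalog that any intake process can run – new submissions, renewals or an acquired portfolio. The sketch below is a deliberately bare-bones, hypothetical version in plain Python; in practice the rules would be published as services in your data quality platform rather than living in a script.

```python
from typing import Callable, Dict, List

# A rule takes a location record (dict) and returns a list of issue descriptions.
Rule = Callable[[dict], List[str]]

def rule_wood_frame_height(record: dict) -> List[str]:
    code = str(record.get("construction_code", "")).upper()
    if code.startswith("WOOD") and record.get("stories", 0) > 6:
        return ["Implausible wood-frame height"]
    return []

def rule_value_present(record: dict) -> List[str]:
    return [] if record.get("building_value") else ["Missing building value"]

# Shared catalog: the same rules cleanse new submissions, renewals and acquired portfolios.
RULE_CATALOG: Dict[str, Rule] = {
    "wood_frame_height": rule_wood_frame_height,
    "value_present": rule_value_present,
}

def run_rules(record: dict) -> Dict[str, List[str]]:
    """Apply every catalog rule and keep only the ones that raised issues."""
    results = {name: rule(record) for name, rule in RULE_CATALOG.items()}
    return {name: issues for name, issues in results.items() if issues}

print(run_rules({"construction_code": "WOOD", "stories": 30, "building_value": None}))
```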

Review – Assess performance against goals

Remember those goals you set for your quality when you started?  Check and see how you’re doing.  After a few weeks and months, you should be able to profile the data, run the reports and see that the needle has moved.  Remember that as part of the self-reinforcing cycle, you can now identify new issues to tackle and adjust the things that aren’t working.  Metrics you’ll want to track over time include increased quote flow, better productivity and more competitive premium pricing.

Monitor – Proactively address critical issues

Now monitor constantly.  As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business you will repeat this cycle.  You will also utilize the same rule set as portfolios are acquired.  It becomes a good way to sanity check the acquisition of business against your quality standards.

In case it wasn’t apparent, your data quality plan is now largely automated.  With few manual exceptions, you should not have to remediate data the way you did in the past.  In each of these steps there is obvious business value.  In the end, it all adds up to better risk/cat modeling, more accurate risk pricing, cleaner data (for everyone in the organization) and more time doing the core business of underwriting.  Imagine if you could increase your quote volume simply by not needing to muck around in data.  Imagine if you could improve your quote-to-bind ratio through better quality data and pricing.  The last time I checked, that’s just good insurance business.

And now for something completely different…cats on pianos.  No, just kidding.  But check here to learn more about Informatica’s insurance initiatives.


Conversations on Data Quality in Underwriting – Part 1

I was just looking at some data I found.  Yes, real data, not fake demo stuff.  Real hurricane location analysis with modeled loss numbers.  At first glance, I thought it looked good.  There are addresses, latitudes/longitudes, values, loss numbers and other goodies like year built and construction codes.  Yes, just the sort of data that an underwriter would look at when writing a risk.  But after skimming through the schedule of locations, a few things start jumping out at me.  So I dig deeper.  I see a multi-million dollar structure in Palm Beach, Florida with $0 in modeled loss.  That’s strange.  And wait, some of these geocode resolutions look a little coarse.  Are they tier one or tier two counties?  Who would know?  At least all of the construction and occupancy codes have values, albeit they look like defaults.  Perhaps it’s time to talk about data quality.
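
For readers who want to see what that digging looks like in practice, here is a small, hypothetical sketch of the sanity checks described above – flagging high-value locations with $0 modeled loss and coarse geocode resolutions. The column names, values and thresholds are invented for illustration.

```python
import pandas as pd

# Hypothetical hurricane location schedule (illustrative values only).
schedule = pd.DataFrame({
    "address":            ["1 Ocean Dr, Palm Beach FL", "22 Main St, Tampa FL"],
    "building_value":     [4_800_000, 350_000],
    "modeled_loss":       [0, 12_400],
    "geocode_resolution": ["postal_centroid", "rooftop"],
})

# High-value structures with $0 modeled loss deserve a second look.
zero_loss = schedule[(schedule["building_value"] > 1_000_000) & (schedule["modeled_loss"] == 0)]

# Coarse geocodes make tier-one vs tier-two county determination unreliable.
coarse_geo = schedule[schedule["geocode_resolution"] != "rooftop"]

print(zero_loss[["address", "building_value", "modeled_loss"]])
print(coarse_geo[["address", "geocode_resolution"]])
```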

This whole concept of data quality is a tricky one.  As the cost of acquiring good data is weighed against the speed of underwriting/quoting and model correctness, I’m sure some tradeoffs are made.  But the impact can be huge.  First, incomplete data will either force defaults in risk models and pricing or add mathematical uncertainty.  Second, massively incomplete data chews up personnel resources to cleanse and enhance.  And third, if not corrected, the risk profile will be wrong, with potential impact to pricing and portfolio shape.  And that’s just to name a few.

I’ll admit it’s daunting to think about.  Imagine tens of thousands of submissions a month.  Schedules of thousands of locations received every day.  Can there even be a way out of this cave?  The answer is yes, and that answer is a robust enterprise data quality infrastructure.  But wait, you say, enterprise data quality is an IT problem.  Yeah, I guess, just like trying to flush an entire roll of toilet paper in one go is the plumber’s problem.  Data quality in underwriting is a business problem, a business opportunity and has real business impacts.

Join me in Part 2 as I outline the six steps for data quality competency in underwriting with tangible business benefits and enterprise impact.  And now that I have you on the edge of your seats, get smart about the basics of enterprise data quality.


Health Plans, Create Competitive Differentiation with Risk Adjustment

Exploring Risk Adjustment as a Source of Competitive Differentiation

Risk adjustment is a hot topic in healthcare. Today, I interviewed my colleague Noreen Hurley to learn more. Noreen, tell us about your experience with risk adjustment.

Before I joined Informatica I worked for a health plan in Boston. I managed several programs, including the CMS Five Star Quality Rating System and Risk Adjustment Redesign. We recognized the need for a robust diagnostic profile of our members in support of risk adjustment. However, because the information resides in multiple sources, gathering and connecting the data presented many challenges. I see an opportunity for health plans to transform risk adjustment.

As risk adjustment becomes an integral component of healthcare, I encourage health plans to create a core competency around the development of diagnostic profiles. This should be the case for health plans and ACOs. This profile is the source of reimbursement for an individual. It is also the basis for clinical care management. Augmented with social and demographic data, the profile can create a roadmap for successfully engaging each member.

Why is risk adjustment important?

Risk adjustment is increasingly entrenched in the healthcare ecosystem. Originating in Medicare Advantage, it is now applicable to other areas. Risk adjustment is mission-critical to protect financial viability and identify a clinical baseline for members.

What are a few examples of the increasing importance of risk adjustment?

1)      The Centers for Medicare & Medicaid Services (CMS) continue to increase the focus on risk adjustment. They are evaluating the value provided to the Federal government and to beneficiaries. CMS has questioned the efficacy of home assessments and challenged health plans to provide a value statement beyond the harvesting of diagnosis codes that results solely in revenue enhancement. Illustrating additional value has been a challenge. Integrating data across the health plan will help address this challenge and derive value.

2)      Marketplace members will also require risk adjustment calculations. After the first three years, the three “R’s” will dwindle down to one “R”: when Reinsurance and Risk Corridors end, we will be left with Risk Adjustment. To succeed with this new population, health plans need a clear strategy to obtain, analyze and process data. CMS processing delays make risk adjustment even more difficult. A health plan’s ability to manage this information will be critical to success.

3)      Dual eligibles, Medicaid members and ACOs also rely on risk adjustment for profitability and improved quality.

With an enhanced diagnostic profile — one that is accurate, complete and shared — I believe it is possible to enhance care, deliver appropriate reimbursements and provide coordinated care.

How can payers better enable risk adjustment?

  • Facilitate timely analysis of accurate data from a variety of sources, in any format.
  • Integrate and reconcile data from initial receipt through adjudication and submission.
  • Deliver clean and normalized data to business users.
  • Provide an aggregated view of master data about members, providers and the relationships between them to reveal insights and enable a differentiated level of service.
  • Apply natural language processing to capture insights otherwise trapped in text-based notes.

With clean, safe and connected data,  health plans can profile members and identify undocumented diagnoses. With this data, health plans will also be able to create reports identifying providers who would benefit from additional training and support (about coding accuracy and completeness).
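
As a simplified illustration of identifying undocumented diagnoses once data is connected, the sketch below compares diagnosis codes submitted on claims with codes documented in clinical records for the same member. The member IDs and data are entirely fictitious; in a real plan this comparison would run over integrated claims, encounter and NLP-extracted chart data.

```python
# Hypothetical, simplified comparison of claim-submitted vs chart-documented
# diagnosis codes per member (IDs and data are made up for illustration).
claims_dx = {
    "M001": {"E11.9"},            # type 2 diabetes on claims
    "M002": {"I10"},              # hypertension on claims
}
chart_dx = {
    "M001": {"E11.9", "N18.3"},   # chart also documents CKD stage 3
    "M002": {"I10"},
}

undocumented = {}
for member, documented in chart_dx.items():
    gaps = documented - claims_dx.get(member, set())
    if gaps:
        undocumented[member] = gaps

# Members whose charts support diagnoses never submitted on claims:
print(undocumented)   # {'M001': {'N18.3'}}
```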

What will clean, safe and connected data allow?

  • Allow risk adjustment to become a core competency and source of differentiation.  Revenue impacts are expanding to lines of business representing larger and increasingly complex populations.
  • Educate, motivate and engage providers with accurate reporting.  Obtaining and acting on diagnostic data is best done when the member/patient is meeting with the caregiver.  Clear and trusted feedback to physicians will contribute to a strong partnership.
  • Improve patient care, reduce medical cost, increase quality ratings and engage members.

Guest interview with Jorij Abraham: author of the first book about PIM and founder of the E-commerce Foundation

Jorij Abraham is the founder of the E-commerce Foundation, a non-profit organization dedicated to helping organizations and industries improve their e-commerce activities. He advises companies on e-commerce strategy, omnichannel development and product information management. He also works as Director of Research & Advice for Ecommerce Europe.

He has written a fine book about PIM, but don’t expect a technical book at all! This is what marketing teams, merchandisers, product category teams and digital strategists should be reading.

Like him or not, when he talks you’d better listen!

Michele: Let’s start with a view on the PIM market. Where are we globally?

Jorij: We are just getting started. Most retailers still do not realize how important product information is to selling digitally. In some countries the expectation is that by 2020, 30–50% of all consumer goods will be bought online. PIM is no longer an option. It is essential for success now and in the upcoming years.

M.: What was the main inspiration behind the book? It is definitely the first book about PIM but I am sure the motivation runs a bit deeper than that.

J.: The fact that there is very little in-depth information available about the subject triggered me to write the book. However, I got a lot of help from experts from Unic and the different software vendors, and I was very happy with all the great research Heiler had already done in the area.

M.: Who should be reading your book?

J.: I wrote the book for a broad audience: managers, employees responsible for product information management, marketers, merchandisers, and even students! There are chapters covering the basics and how PIM can help a company on a strategic, tactical and operational level. A few later chapters are devoted to helping product information officers implement a PIM system and choose the right PIM solution.

M.: I see that in your book you cover a good number of big PIM vendors. What is the future for those who target mid-market businesses?

J.: If you look at the overall market, I think we will see a large shake-out in the industry. We will have very big players like Amazon, eBay and Walmart, and lots of niche players. Medium-sized businesses will have a difficult time surviving. All of them will need a PIM system, however.

M.: What’s your take on the different PIM vendors out there? I personally see different flavours of PIM, such as those that are more commerce-friendly, those that are more ERP-friendly, and minimalist PIM solutions.

J.: In the book I discuss several solutions. Some are for companies just starting with PIM; others are top of the line. Especially for larger firms with lots of product information to manage, I recommend making a larger investment. Low-end PIM solutions are a good choice if you expect your needs to remain simple. However, if you know that within two or three years you will have 100,000 products in multiple languages with lots of attributes, do not start with a simple solution. Within a year and a half you will have to migrate again, and the costs of migration are not worth the licence costs saved.

M.: In your view, what are the major inhibitors to PIM adoption?

J.: There are many strategic, tactical and operational benefits, but managers have difficulty understanding the ROI because it is indirect. PIM can improve traffic to your site, increase conversion rates, and reduce returns.

M.: Would it be easier to promote PIM in combination with a WCMS platform? More generally, is there a case for promoting PIM as part of a greater strategic thrust?

J.: I personally prefer systems which are great at doing what they are meant to do. However, it very much depends on the needs of the company. Combining a PIM with a WCMS means mixing two solutions with very different goals. Hybris is an example of a complete solution. If you want to buy everything at once, it is a good choice. However, what I like very much about the Heiler/Informatica solution is that it is great at doing what it says it does. Especially the user-friendliness of the system is a big plus. Why? Because if a PIM fails, it is usually because of low user adoption.

M.: What would you suggest to Australian retailers who are clearly reluctant to adopt PIM primarily because of limited local references (at least on large scale)?

J.: Retail in Australia is going the same way as everywhere else. Digital commerce will be a fact of life, and a PIM is essential to be successful online. Look at the proof in Asia, Europe and the USA. PIM is here to stay.

M.: Is PIM now what ERP was in the 90s and CRM at the beginning of the millennium? In other words, will it ever become a commodity?

J.: I think so. But we are really at the start of PIM. Even in 2014, CRM is not yet really a part of most IT architectures. So we have a long way to go…

M.: Let’s talk about the influence exerted by analyst firms such as Gartner, Forrester, and Ventana. What’s your view on this? Are they moving the market? They put a lot of effort into trying to differentiate themselves. For example, see how the Gartner MDM Quadrant for Products combines MDM and PIM players.

J.: I think the research agencies in general do not yet get PIM to the full extent. It is still a niche market, and they are combining solutions in a way which, in my view, is not helping the business and IT user. I have seen companies buy an MDM solution expecting it to support their PIM processes. MDM is very different from PIM, although their goals overlap. I often see that PIM has many more end-users and requires faster publication processes. There are only a few solutions in the market which really combine MDM and PIM in a sensible way.

M.: Looking at your book, I noticed that you spend a great deal of effort in unearthing what I’d call “PIM core concepts”. However, while the core concepts are stable, PIM is a technology-enabled discipline and will undergo ongoing enhancements. What is your view on this?

J.: This is a tough question. In fact, a few chapters in my book may go out of date soon. For example, new PIM providers keep popping up and it’s hard to keep track. On a more important note, I also see the following trends:

a) The cloud is going to have a fundamental impact on PIM solutions. It will be hard to sell an on-premise solution to companies that are very much focused on their core business and outsource everything else (e.g. retailers).

b) I see companies working much more intensively to collect and disseminate accurate product information. This is costly and operationally inefficient if it is undertaken in isolation. In fact, there’s room to improve the overall supply chain by integrating product information across different parties, e.g. suppliers, manufacturers, and retailers.

c) Finally, I see the emergence of social as another key development in the PIM space. Just think about the contribution that consumers provide when they shop online and share their experiences on social platforms, or provide a product recommendation and/or rating. This is product information, and PIMs need to incorporate it in the overall product enrichment.

M.: Thank you Jorij. This has been a fantastic opportunity for me and my readers to learn more about you and the great work you are doing.

J.: It is great that you are putting so much effort into sharing information about product information management. Only in this way can companies start to understand the value of PIM, both increasing sales and reducing costs.

The book is available on the Springer website, Amazon, Book Depository, and many other book stores.


Who Has the Heart to Adopt this Orphan Oil Well?

As I browsed my BBC app a few weeks ago, I ran into this article about environmental contamination from oil wells in the UK that were left to their own devices. The article explains that a lack of data and proper data management is causing major issues for gas and oil companies. In fact, researchers found no data for more than 2,000 inactive wells, many of which have been abandoned or “orphaned” (sealed and covered up). I started to scratch my head imagining what this problem looks like in places like Brazil, Nigeria, Malaysia, Angola and the Middle East. In these countries and regions, regulatory oversight is, on average, a bit less stringent.

Like Oliver, this well needs a home!

On top of that, please excuse my cynicism here, but an “Orphan” well is just as ridiculous a concept as a “Dry” well.  A hole without liquid inside is not a well but – you guessed it – a hole.  Also, every well has a “Parent”, meaning

  • The person or company who drilled it
  • A  land owner who will get paid from its production and allowed the operation (otherwise it would be illegal)
  • A financier who fronted the equipment and research cost
  • A regulator, who is charged with overseeing the reservoir’s exploration

Let the “hydrocarbon family court judge” decide whose problem this orphan is, with well-founded information – no pun intended.  After all, this “domestic disturbance” is typically just as well documented as any police “house call” when you hear screams from next door.  Similarly, one would expect that when (exploratory) wells are abandoned and improperly capped or completed, there is a long track record of financial or operational troubles at the parties involved.  Apparently I was wrong.  Nobody seems to have a record of where the well actually was on the surface, let alone subsurface, to determine perforation risks in the well itself or from an actively managed bore nearby.

This reminds me of a meeting with an Asian NOC’s PMU IT staff, who vigorously disagreed with every other department on the reality on the ground versus at group level. The PMU folks insisted on having fixed all wells’ key attributes:

  1. Knowing how many wells and bores they had across the globe and all types of commercial models including joint ventures
  2. Where they were and are today
  3. What their technical characteristics were and currently are

The other departments, from finance to strategy, clearly indicated that the 10,000 wells across the globe currently being “mastered” with (at least initially) cheap internal band-aid fixes carry a margin of error of up to 10%.  So much for long-term TCO.  After reading this BBC article, this internal disagreement made even more sense.

If this chasm does not make a case for proper mastering of key operational entities, like wells, I don’t know what does. It also begs the question of how any operation with potentially very negative long-term effects can have no legally culpable party captured in some sort of, dare I say, master register.  Isn’t this the sign of the “rule of law” governing an advanced nation, e.g. having a land register, building permits, wills, etc.?

I rest my case, your honor.  May the garden fairies forgive us for spoiling their perfectly manicured lawn.  With more fracking and public scrutiny on the horizon, maybe regulators need to establish their own “trusted” well master file, rather than rely on oil firms’ data dumps.  After all, the next downhole location may be just a foot away from perforating one of these “orphans”, setting your kitchen sink faucet on fire.

Do you think another push for local governments to establish “well registries”, like they did ten years ago for national IDs, is in order?

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations and benchmarks.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


A Data-Driven Healthcare Culture is Foundational to Delivering Personalized Medicine in Healthcare

According to a recent article in the LA Times, healthcare costs in the United States far exceed costs in other countries. For example, heart bypass surgery costs an average of $75,345 in the U.S. compared to $15,742 in the Netherlands and $16,492 in Argentina. In the U.S., healthcare accounts for 18% of GDP and is increasing.


Michelle Blackmer is a healthcare industry expert at Informatica. In this interview, she explains why business as usual isn’t good enough anymore. Healthcare organizations are rethinking how they do business in an effort to improve outcomes, reduce costs, and comply with regulatory pressures such as the Affordable Care Act (ACA). Michelle believes a data-driven healthcare culture is foundational to personalized medicine and discusses the importance of clean, safe and connected data in executing a successful transformation.

Q. How is the healthcare industry responding to the rising costs of healthcare?
In response to the rising costs of healthcare, regulatory pressures (i.e., the Affordable Care Act (ACA)), and the need to deliver better patient outcomes at lower costs, the U.S. healthcare industry is transforming from a volume-based to a value-based model. In this new model, healthcare organizations need to invest in delivering personalized medicine.

To appreciate the potential of personalized medicine, think about your own healthcare experience. It’s typically reactive. You get sick, you go to the doctor, the doctor issues a prescription and you wait a couple of days to see if that drug works. If it doesn’t, you call the doctor and she tries another drug. This process is tedious, painful and costly.

Now imagine if you had a chronic disease like depression or cancer. On average, any given prescription drug only works for half of those who take it. Among cancer patients, the rate of ineffectiveness jumps to 75 percent. Anti-depressants are effective in only 62 percent of those who take them.

Organizations like MD Anderson and UPMC aim to put an end to cancer. They are combining scientific research with access to clean, safe and connected data (data of all types, including genomic data). The insights revealed will empower personalized chemotherapies. Personalized medicine offers customized treatments based on patient history and best practices, and it will transform healthcare delivery. Click on the links to watch videos about their transformational work.

Q. What role does data play in enabling personalized medicine?
Data is foundational to value-based care and personalized medicine. Not just any data will do. It needs to be clean, safe and connected data. It needs to be delivered rapidly across hallways and across networks.

As an industry, healthcare is at a stage where meaningful electronic data is being generated. Now you need to ensure that the data is accessible and trustworthy so that it can be rapidly analyzed. As data is aggregated across the ecosystem and married with financial and genomic data, data quality issues become more obvious. It’s vital that you can define the data issues so that people can spend their time analyzing the data to gain insights instead of wading through and manually resolving data quality issues.

The ability to trust data will differentiate leaders from the followers. Leaders will advance personalized medicine because they rely on clean, safe and connected data to:

1)      Practice analytics as a core competency
2)      Define evidence, deliver best practice care and personalize medicine
3)      Engage patients and collaborate to foster strong, actionable relationships

Take a look at this healthcare eBook for more on this topic: Potential Unlocked: Transforming Healthcare by Putting Information to Work.

Q. What is holding healthcare organizations back from managing their healthcare data like other mission-critical assets?
When you say other mission-critical assets, I think of facilities, equipment, etc. Each of these assets has people and money assigned to manage and maintain them. The healthcare organizations I talk to who are highly invested in personalized medicine recognize that data is mission-critical. They are investing in the people, processes and technology needed to ensure data is clean, safe and connected. The technology includes data integration, data quality and master data management (MDM).

What’s holding other healthcare organizations back is that while they realize they need data governance, they wrongly believe they need to hire big teams of “data stewards” to be successful. In reality, you don’t need to hire a big team. Use the people you already have doing data governance. You may not have made this a formal part of their job description and they might not have data governance technologies yet, but they do have the skillset and they are already doing the work of a data steward.

So while a technology investment is required and you need people who can use the technology, start by formalizing the data stewardship work people are doing already as part of their current job. This way you have people who understand the data, taking an active role in the management of the data and they even get excited about it because their work is being recognized. IT takes on the role of enabling these people instead of having responsibility for all things data.

Q. Can you share examples of how immature information governance is a serious impediment to healthcare payers and providers?
Sure, without information governance, data is not harmonized across sources and so it is hard to make sense of it. This isn’t a problem when you are one business unit or one department, but when you want to get a comprehensive view, or a view that incorporates external sources of information, this approach falls apart.

For example, let’s say the cardiology department in a healthcare organization implements a dashboard. The dashboard looks impressive. Then a group of physicians sees the dashboard, points out errors and asks where the information (i.e. diagnosis or attending physician) came from. If you can’t answer these questions or trace the data back to its sources, or if you have data inconsistencies, the dashboard loses credibility. This is an example of how analytics fail to gain adoption and fail to foster innovation.

Q. Can you share examples of what data-driven healthcare organizations are doing differently?
Certainly, while many are just getting started on their journey to becoming data-driven, I’m seeing some inspiring  examples, including:

  • Implementing data governance for healthcare analytics. The program and data is owned by the business and enabled by IT and supported by technology such as data integration, data quality and MDM.
  • Connecting information from across the entire healthcare ecosystem including 3rd party sources like payers, state agencies, and reference data like credit information from Equifax, firmographics from Dun & Bradstreet or NPI numbers from the national provider registry.
  • Establishing consistent data definitions and parameters
  • Thinking about the internet of things (IoT) and how to incorporate device data into analysis
  • Engaging patients through non-traditional channels including loyalty programs and social media; tracking this information in a customer relationship management (CRM) system
  • Fostering collaboration by understanding the relationships between patients, providers and the rest of the ecosystem
  • Analyzing data to understand what is working and what is not working so  that they can drive out unwanted variations in care

Q. What advice can you give healthcare provider and payer employees who want access to high quality healthcare data?
As with other organizational assets that deliver value—like buildings and equipment—data requires a foundational investment in people and systems to maximize return. In other words, institutions and individuals must start managing their mission-critical data with the same rigor they manage other mission-critical enterprise assets.

Q. Anything else you want to add?
Yes, I wanted to thank our 14 visionary customer executives at data-driven healthcare organizations such as MD Anderson, UPMC, Quest Diagnostics, Sutter Health, St. Joseph Health, Dallas Children’s Medical Center and Navinet for taking time out of their busy schedules to share their journeys toward becoming data-driven at Informatica World 2014.  In our next post, I’ll share some highlights about how they are using data, how they are ensuring it is clean, safe and connected and a few data management best practices. InformaticaWorld attendees will be able to download presentations starting today! If you missed InformaticaWorld 2014, stay tuned for our upcoming webinars featuring many of these examples.


Agile Data Integration in Action: PowerCenter 9.6 Demo

A Data Integration Developer, a Data Analyst and a Business Analyst go into a bar… Heard that one? You probably didn’t. They never made it to the bar. They are still back at the office, going back and forth for the umpteenth time on the data requirements for the latest report…

Sound familiar? If so, you are not alone. Many IT departments are struggling to meet the data needs of their business counterparts. Spreadsheets, emails and cocktail napkins have not proven to be effective tools for relaying data requirements from the business. The process takes too long and leaves both sides frustrated and dissatisfied with the outcome. IT does not have the bandwidth to meet the ever-increasing and rapidly changing data needs of the business.

The old-fashioned “waterfall” approach to data integration simply won’t cut it anymore in the fast-paced data-driven world. There has to be a better way. Here at Informatica, we believe that an end-to-end Agile Data Integration process can greatly increase business agility.

We start with a highly collaborative process, whereby IT and the Analyst work closely together through an iterative process to define data integration requirements. IT empowers the analyst with self-service tools that enable rapid prototyping and data profiling. Once the analyst is happy with the data they access and combine, they can use their tool to seamlessly share the output with IT for final deployment. This approach greatly reduces the time-to-data, and not just any data, the right data!

The ability to rapidly generate reports and deliver new critical data for decision-making is foundational to business agility. Another important aspect of business agility is the ability to scale your system as your needs grow to support more data, data types, users and projects. We accomplish that through advanced scaling capabilities, such as grid support and high availability, leading to zero downtime, as well as improved data insights through metadata management, lineage, impact analysis and business glossary.

Finally, we need to continue to ensure agility when our system is in production. Data validation should be performed to eliminate data defects. Trying to manually validate data is like looking for a needle in a haystack, very slowly… Automating your data validation process is fast and reliable, ensuring that the business gets accurate data all the time.
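
As a tiny illustration of the idea (not of any specific Informatica feature), the sketch below automates two common post-load validation checks – row counts and per-column totals – between a source extract and a loaded target. The tables are hard-coded stand-ins for real connections.

```python
import pandas as pd

# Stand-ins for a source extract and the loaded target (illustrative only).
source = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
target = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 25.0]})

def validate(source: pd.DataFrame, target: pd.DataFrame, numeric_cols: list) -> list:
    """Return a list of human-readable validation failures."""
    failures = []
    if len(source) != len(target):
        failures.append(f"Row count mismatch: {len(source)} vs {len(target)}")
    for col in numeric_cols:
        if abs(source[col].sum() - target[col].sum()) > 1e-9:
            failures.append(f"Sum mismatch on '{col}': {source[col].sum()} vs {target[col].sum()}")
    return failures

issues = validate(source, target, ["amount"])
print(issues or "All validation checks passed")
```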

It is just as important to become more proactive and less reactive when it comes to your data in production. Early detection of data process and workflow problems through proactive monitoring is key to prevention.

Would you like to see a 5X increase in the speed of delivering data integration projects?

Would you like to provide the system reliability you need as your business grows, and ensure that your business continues to get the critical data it requires without defects and without interruption?

To learn more about how Agile Data Integration can enable business agility, please check out the demonstration of the newly-released PowerCenter 9.6, featuring David Lyle, VP Product Strategy at Informatica and the Informatica Product Desk experts. This demo webinar is available on demand.

Deep Dive Demo: Informatica PowerCenter 9.6.


Business Beware! Corporate IT Is “Fixing” YOUR Data

It is troublesome to me to repeatedly get into conversations with IT managers who want to fix data “for the sake of fixing it”.  While this is presumably increasingly rare, due to my department’s role we probably see a higher occurrence of it than the average software vendor employee does.  Given that, please excuse the inflammatory title of this post.

Nevertheless, once the deal is done, we find fewer and fewer of these instances, yet still enough, as the average implementation consultant or developer cares about this aspect even less.  A few months ago a petrochemical firm’s G&G IT team lead told me that he does not believe data quality improvements can or should be measured.  He also said, “If we need another application, we buy it.  End of story.”  Good for software vendors, I thought, but in most organizations $1M here or there does not lie around leisurely, and decision makers want to see the – dare I say it – ROI.

This is not what a business – IT relationship should feel like

However, IT and business leaders should take note that misalignment – due to a lack or disregard of communication – is a critical risk factor.  If the business does not get what it needs and wants, AND that differs from what Corporate IT is envisioning and working on – and this is what I am talking about here – it makes any IT investment a risky proposition.

Let me illustrate this with 4 recent examples I ran into:

1. Potential for flawed prioritization

A retail customer’s IT department apparently knew that fixing and enriching a customer loyalty record across the enterprise is a good and financially rewarding idea.  They only wanted to understand what the less risky functional implementation choices were. They indicated that if they wanted to learn the factual financial impact of “fixing” certain records or attributes, they would just have to look into their enterprise data warehouse.  This is where the logic falls apart, as the warehouse would be just as unreliable as the “compromised” applications (POS, marketing, ERP) feeding it.

Even if they massaged the data before it hit the next EDW load, there is nothing inherently real-time about this, as all the OLTP systems keep running processes on incorrect (no bidirectional linkage) and stale (since the last load) data.

I would question whether the business is now completely aligned with what IT is continuously correcting. After all, IT may go for the “easy or obvious” fixes via a weekly or monthly recurring data scrub exercise without truly knowing which fix delivers the “biggest bang for the buck”, or what other affected business use cases exist that they may not even be aware of yet.  Imagine the productivity impact of all the round-tripping and delay in reporting this creates.  This example also reminds me of a telco client I encountered during my tenure at another tech firm, which fed their customer master from their EDW and has now found out that this pattern is doomed to fail due to data staleness and performance.

2. Fix IT issues and business benefits will trickle down

Client number two is a large North American construction company.  An architect built a business case for fixing a variety of data buckets in the organization (CRM, Brand Management, Partner Onboarding, Mobility Services, Quotation & Requisitions, BI & EPM).

Grand vision documents existed and were linked to the case, stating how the data would get better (like a sick patient), but there was no mention of hard facts about how each of the use cases would deliver on this.  After I gave him some serious counseling on what to look out for and how to flesh it out – radio silence. Someone got scared of the math, I guess.

3. Now that we bought it, where do we start

The third culprit was a large petrochemical firm, which apparently sat on some excess funds and thought (rightfully so) that it was a good idea to fix their well attributes. More power to them.  However, the IT team is now in a dreadful position, having to justify to their boss – and ultimately the E&P division head – why they prioritized this effort so highly and spent the money.  Well, they had their heart in the right place, but they are a tad late.  Still, I consider this better late than never.

4. A senior moment

The last example comes from a South American communications provider. They seemingly did everything right, given the results they have achieved to date.  This goes to show that misalignment of IT and business does not necessarily wreak havoc – at least not initially.

However, they are now in phase 3 of their rollout and reality has caught up with them.  A senior moment or lapse in judgment, maybe? Whatever it was, once they fixed their CRM, network and billing application data, they had to start talking to the business and financial analysts as complaints and questions started to trickle in. Once again, better late than never.

So what is the takeaway from these stories? Why wait until phase 3? Why be forced to cram in some justification after the purchase?  You pick which one works best for you to fix this age-old issue.  But please heed Sohaib’s words of wisdom, recently broadcast on CNN Money: “IT is a mature sector post bubble… now it needs to deliver the goods.”  And here is an action item for you – check out the new way for business users to prepare their own data (30 minutes into the video!).  Agreed?
