Category Archives: Business/IT Collaboration
A mid-sized insurer recently approached our team for help. They wanted to understand how they fell short in making their case to their executives. Specifically, they had proposed that fixing their customer data was key to supporting the executive team’s highly aggressive three-year growth plan (3x today’s revenue). Given this core organizational mission – aside from being a warm and fuzzy place to work that supports its local community – the slam-dunk justification is simple: merely reducing the data migration effort around the next acquisition, or avoiding the ritual annual one-off data clean-up project, already pays for any tool set that enhances data acquisition, integration and hygiene. Will that alone get you to 3x today’s revenue? Probably not. What will help are the following:
Hard cost avoidance via software maintenance or consulting elimination is the easy part of the exercise. That is why CFOs love it and focus so much on it. It is easy to grasp and immediate (aka next quarter).
Soft cost reduction, like staff redundancies, is a bit harder. Although such cases are viable, in my experience very few decision makers want to build a business case around laying off staff; my team has encountered only one so far. Instead, they treat these savings as freed-up capacity that can be redeployed more productively. Productivity gains are also harder to quantify, because you typically have to understand how data travels between departments and how it gets worked on along the way.
Revenue effects are harder still, and esoteric to many people, because they rely on projections. They are often dismissed as “soft” benefits, even though they outweigh the other areas by two to three times in impact. Ultimately, every organization runs its strategy on projections (see the insurer in my first paragraph).
The hardest benefit to quantify is risk. Not only is it based on projections – often from a third party (Moody’s, TransUnion, etc.) – but few people understand it. More often than not, clients won’t even accept you investigating this area unless you have an advanced degree in insurance math. Nevertheless, risk can generate extra “soft” cost avoidance (e.g., the opportunity cost of a beefed-up reserve account balance) but also revenue (realizing a risk premium previously ignored). Risk profiles often change due to relationships, which can surface as new “horizontal” information (transactional attributes) or “vertical” (hierarchical) information from an entity’s parent-child relationships and the parent’s or children’s transactions.
Given the above, my initial advice to the insurer would be to look at the heartache of their last acquisition, apply a benchmark for IT productivity gains from improved data management capabilities (typically 20-26% – Yankee Group), and there you go. That covers only the IT side, so consider increasing the upper range by 1.4x (Harvard Business School), since every attribute change (e.g., last mobile view date) requires additional meetings at the manager, director and VP level – and these people’s time gets progressively more expensive. Alternatively, you could use Aberdeen’s benchmark of 13 hours per average master data attribute fix.
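As a rough illustration, the benchmark math above can be sketched in a few lines of Python. The annual IT data-management cost is an illustrative assumption of mine, not a client figure; the 20-26% range and the 1.4x multiplier come from the benchmarks cited:

```python
# Back-of-envelope sketch of the benchmark calculation above.
# annual_it_data_cost is an assumed input; plug in your own figure.
annual_it_data_cost = 2_000_000                  # assumption: yearly spend on migration/cleanup
gain_low, gain_high = 0.20, 0.26                 # Yankee Group IT productivity range
business_multiplier = 1.4                        # HBS uplift for manager/director/VP time

low = annual_it_data_cost * gain_low             # IT-side floor
high = annual_it_data_cost * gain_high * business_multiplier  # with business-side uplift

print(f"Estimated annual savings: ${low:,.0f} - ${high:,.0f}")
```

Swap in your own cost baseline; the point is only that the range comes straight from multiplying the benchmarks against a known spend figure.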
You can also look at productivity areas, which are typically measured in great detail already. Let’s assume a call center rep spends 20% of the average 12-minute call (depending on the call type – account or bill inquiry, dispute, etc.) figuring out
- Who the customer is
- What he bought online and in-store
- If he tried to resolve his issue on the website or store
- How he uses equipment
- What he cares about
- If he prefers call backs, SMS or email confirmations
- His response rate to offers
- His/her value to the company
If the rep spends that 20% of every call stringing together insights from five applications and twelve screens – the same information in every application he touches – instead of seeing it in one frame within seconds, then consolidating it frees up 20% of his hourly compensation.
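To make the call-center math concrete, here is a minimal sketch. The call volume and hourly cost are illustrative assumptions; the 12-minute average call and 20% lookup share come from the text:

```python
# Call-center productivity sketch; staffing inputs are assumptions.
calls_per_rep_per_day = 40        # assumption
avg_call_minutes = 12             # from the text
lookup_share = 0.20               # 20% of call time spent stitching together context
hourly_cost = 30.0                # assumption: fully loaded hourly compensation

minutes_freed_per_day = calls_per_rep_per_day * avg_call_minutes * lookup_share
daily_saving = minutes_freed_per_day / 60 * hourly_cost

print(f"{minutes_freed_per_day:.0f} minutes/day freed -> ${daily_saving:.2f} per rep per day")
```

Multiply the per-rep figure by headcount and working days to get the annual productivity benefit.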
Then look at the software, hardware, maintenance and ongoing management of the likely customer record sources (pick the worst and best quality one based on your current understanding), which will end up in a centrally governed instance. Per DAMA, every duplicate record will cost you between $0.45 (party) and $0.85 (product) per transaction (edit touch). At the very least each record will be touched once a year (likely 3-5 times), so multiply your duplicated record count by that and you have your savings from just de-duplication. You can also use Aberdeen’s benchmark of 71 serious errors per 1,000 records, meaning the chance of transactional failure and required effort (% of one or more FTE’s daily workday) to fix is high. If this does not work for you, run a data profile with one of the many tools out there.
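The de-duplication savings described above reduce to a simple multiplication. The duplicate count below is an assumed profiling result; the per-touch costs and touch frequency are the DAMA figures cited:

```python
# De-duplication savings per the DAMA cost-per-touch figures above.
duplicate_records = 50_000                    # assumption: from a data profile
cost_low, cost_high = 0.45, 0.85              # DAMA: party vs. product, per transaction touch
touches_per_year = 3                          # text: at least 1, likely 3-5

savings_low = duplicate_records * touches_per_year * cost_low
savings_high = duplicate_records * touches_per_year * cost_high

print(f"Annual de-duplication savings: ${savings_low:,.0f} - ${savings_high:,.0f}")
```

If you lack a duplicate count, Aberdeen’s 71-errors-per-1,000-records benchmark applied to your record total gives a defensible stand-in.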
If standardization of records (zip codes, billing codes, currency, etc.) is the problem, ask your business partner how many customer contacts (calls, mailings, emails, orders, invoices or account statements) fail outright and/or require validation because of these attributes. Once again, apply the productivity gains mentioned earlier and there are your savings. If you look at the number of orders whose payment or revenue recognition gets delayed by a week or a month, and the average order amount, you can quantify how much profit (multiply by operating margin) you could pull into the current financial year from the next one.
The same is true for speeding up the introduction of a new product, or a change to one, so it generates profits earlier. Note that the time value of funds realized earlier is too small to matter in most instances, especially in the current interest environment.
If emails bounce back or snail mail gets returned (no such address, no such name at this address, no such domain, no such user at this domain), (e)mail verification tools can help reduce the bounces. If every mail piece costs $1.25 (forget email, given its minuscule cost) – and this will vary by type of mailing (catalog, promotional postcard, statement letter) – then every incorrect or incomplete record is wasted cost. If you can, use the fully loaded print cost, including third-party data prep and returns handling. You will never capture all cost inputs, but take a conservative stab.
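The wasted-mail cost is again a one-line multiplication. The mail volume and bad-record rate below are assumptions; the $1.25 per piece is the figure from the text:

```python
# Wasted direct-mail spend from undeliverable records; volume and error rate assumed.
pieces_mailed = 250_000           # assumption
cost_per_piece = 1.25             # from the text; use fully loaded cost if available
bad_record_rate = 0.04            # assumption: share of incorrect/incomplete addresses

wasted_cost = pieces_mailed * bad_record_rate * cost_per_piece
print(f"Wasted mailing cost per campaign: ${wasted_cost:,.2f}")
```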
If it was an offer, reduced bounces should also improve your response rate (now true for email as well). Prospect mail response rates are typically around 1.2% (Direct Marketing Association), whereas phone response rates are around 8.2%. Suppose, for argument’s sake, that your current response rate is half that, and you send out 100,000 emails of which 1.3% (Silverpop) have customer data issues. Fixing 81-93% of them (our experience) will drop the bounce rate to under 0.3%, meaning more emails will arrive and be relevant. Multiply this by a standard conversion rate of 3% (MarketingSherpa; industry- and channel-specific) and the average order value (your data), then by operating margin, and you have a benefit value for revenue.
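Chaining those percentages together looks like this. The email volume, issue rate, fix rate and conversion rate are the figures cited in the paragraph above; the average order value and operating margin are my own illustrative assumptions:

```python
# Revenue benefit from fixing email data issues, per the figures cited above.
emails_sent = 100_000
issue_rate = 0.013            # Silverpop: 1.3% of records have customer data issues
fix_rate = 0.87               # midpoint of the 81-93% fix range cited
conversion_rate = 0.03        # MarketingSherpa standard conversion rate
avg_order = 120.0             # assumption - substitute your own data
operating_margin = 0.10       # assumption

recovered_emails = emails_sent * issue_rate * fix_rate
revenue_benefit = recovered_emails * conversion_rate * avg_order * operating_margin

print(f"Recovered deliveries: {recovered_emails:.0f}; benefit per send: ${revenue_benefit:,.2f}")
```

Per campaign the number looks small; multiplied across a year of sends, it becomes a material line item in the business case.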
If product data and inventory carrying cost or supplier spend are your issue, find out how many supplier shipments you receive every month, the average cost of a part (or cost range), apply the Aberdeen master data failure rate (71 in 1,000) to use cases around lack of or incorrect supersession or alternate part data, to assess the value of a single shipment’s overspend. You can also just use the ending inventory amount from the 10-k report and apply 3-10% improvement (Aberdeen) in a top-down approach. Alternatively, apply 3.2-4.9% to your annual supplier spend (KPMG).
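The top-down version of this estimate needs only two figures from public filings. The inventory and spend amounts below are assumptions standing in for your 10-K numbers; the percentage ranges are the Aberdeen and KPMG benchmarks just cited:

```python
# Top-down product/supplier savings using the Aberdeen and KPMG benchmarks above.
ending_inventory = 40_000_000         # assumption: ending inventory from the 10-K
annual_supplier_spend = 120_000_000   # assumption

inventory_low = ending_inventory * 0.03      # Aberdeen: 3-10% inventory improvement
inventory_high = ending_inventory * 0.10
spend_low = annual_supplier_spend * 0.032    # KPMG: 3.2-4.9% of supplier spend
spend_high = annual_supplier_spend * 0.049

print(f"Inventory benefit: ${inventory_low:,.0f} - ${inventory_high:,.0f}")
print(f"Supplier spend benefit: ${spend_low:,.0f} - ${spend_high:,.0f}")
```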
You could also investigate the expediting or return cost of shipments in a period due to incorrectly aggregated customer forecasts, wrong or incomplete product information or wrong shipment instructions in a product or location profile. Apply Aberdeen’s 5% improvement rate and there you go.
Consider that a North American utility told us that just fixing their 200 Tier1 suppliers’ product information achieved an increase in discounts from $14 to $120 million. They also found that fixing one basic out of sixty attributes in one part category saves them over $200,000 annually.
So what ROI percentages would you find tolerable or justifiable for, say an EDW project, a CRM project, a new claims system, etc.? What would the annual savings or new revenue be that you were comfortable with? What was the craziest improvement you have seen coming to fruition, which nobody expected?
Next time, I will add some more “use cases” to the list and look at some philosophical implications of averages.
When you talk to CIOs today, you strongly get the feeling that the CIO role is about to change. One CIO told me the role is in the midst of “a sea change.” Recently, I got to talk with half a dozen CIOs about what is most important to their role and how they see the role changing over the next few years. Their answers were thought provoking and worthy of broader discussion.
CIOs say it is becoming less and less common for the CIO to come up through the technical ranks. One CIO said it used to be common to become a CIO after being a CTO, but this has changed; more and more business people are becoming CIOs. The CIO role today is “more about understanding the business than understanding technology. It is more about business alignment than technology alignment.” This need for better business alignment led one CIO to say that consulting is a great starting point for a future IT leader, because it provides: 1) vertical expertise; 2) technical expertise; and 3) systems integration expertise. Another CIO suggested that the CIO role is sometimes used these days as a rotational position for a future business leader: “It provides these leaders with technical skills that they will need in their career.” Regardless, it is increasingly clear that business expertise now matters much more than technical expertise.
How will the CIO role change?
CIOs, in general, believe that their role will change in the next five years. One CIO insisted that CIOs are going to continue to be incredibly important to their enterprises. However, he said that CIOs have the opportunity to create analytics that guide the business in finding value. To do this, they need to connect the dots between transactional systems, BI, and planning systems. They need to convert data into action. This means enabling the business to be proactive and cutting the time it takes to execute. In his view, CIOs need to enable their enterprises to generate value differentiated from competitors.
Another CIO sees CIOs becoming orchestrators rather than builders of business services. This CIO said that “building stuff is now really table stakes.” Cloud and loosely coupled partnerships are bringing vendor management to the forefront. Agreeing with this point of view, a third CIO said she sees CIOs moving from an IT role into a business role. She went on to say that “CIOs need to understand the business better and be able to partner better with the business. They need to understand the role for IT better, and this includes understanding their firm’s business models better.”
A final CIO suggests something even more radical. He believes that the CIO role will disappear altogether or morph into something new. This CIO claims CIOs have the opportunity to become the chief digital officer or the COO. After all, the CIO role is about implementing business processes.
For more technical CIOs, this CIO sees the role reverting to CTO, but he worries that, with the cloud diminishing the importance of hardware and platform issues, this type of role will become less and less relevant. The same CIO remarked in passing that CIOs squandered a golden opportunity 10 years ago. Back then, CIOs one by one clawed their way to the table and separated themselves from the CFO. However, once they were at the table, they did not change their game: they continued to talk bits and bytes rather than business issues. And one by one, they are being returned to the CFO to manage.
So change is inevitable. CIOs need to change their game or be changed by external forces. Let’s start the debate right now: how do you see the CIO role changing? Express your opinion, and let’s see where you and the above CIOs agree and, more importantly, where you differ.
“Inaccurate, inconsistent and disconnected supplier information prohibits us from doing accurate supplier spend analysis, leveraging discounts, comparing and choosing the best prices, and enforcing corporate standards.”
This is a quotation from a manufacturing company executive. It illustrates the negative impact that poorly managed supplier information can have on a company’s ability to cut costs and achieve revenue targets.
Many supply chain and procurement teams at large companies struggle to see the total relationship they have with suppliers across product lines, business units and regions. Why? Supplier information is scattered across dozens or hundreds of Enterprise Resource Planning (ERP) and Accounts Payable (AP) applications. Too much valuable time is spent manually reconciling inaccurate, inconsistent and disconnected supplier information in an effort to see the big picture. All this manual effort results in back office administrative costs that are higher than they should be.
Do these quotations from supply chain leaders and their teams sound familiar?
“We have 500,000 suppliers. 15-20% of our supplier records are duplicates. 5% are inaccurate.”
“I get 100 e-mails a day questioning which supplier to use.”
“To consolidate vendor reporting for a single supplier between divisions is really just a guess.”
“Every year 1099 tax mailings get returned to us because of invalid addresses, and we pay a lot of Schedule B fines to the IRS.”
“Two years ago we spent a significant amount of time and money cleansing supplier data. Now we are back where we started.”
Please join me and Naveen Sharma, Director of the Master Data Management (MDM) Practice at Cognizant for a Webinar, Supercharge Your Supply Chain Applications with Better Supplier Information, on Tuesday, July 29th at 11 am PT.
During the Webinar, we’ll explain how better managing supplier information can help you achieve the following goals:
- Accelerate supplier onboarding
- Mitigate the risk of supply disruption
- Better manage supplier performance
- Streamline billing and payment processes
- Improve supplier relationship management and collaboration
- Make it easier to evaluate non-compliance with Service Level Agreements (SLAs)
- Decrease costs by negotiating favorable payment terms and SLAs
I hope you can join us for this upcoming Webinar!
The interesting thing is that many of the upstarts do not even intend to take on the market leader in the segment. Christensen cites the classic example of Digital Equipment Corporation in the 1980s, which was unable to make the transition from large, expensive enterprise systems to smaller, PC-based equipment. The PC upstarts in this case did not take on Digital directly – rather they addressed unmet needs in another part of the market.
Christensen wrote and published The Innovator’s Dilemma more than 17 years ago, but his message keeps reverberating across the business world. Lately, Jill Lepore questioned some of the thinking that has evolved around disruptive innovation in a recent New Yorker article. “Disruptive innovation is a theory about why businesses fail. It’s not more than that. It doesn’t explain change. It’s not a law of nature,” she writes. Christensen responded with a rebuttal to Lepore’s thesis, noting that “disruption doesn’t happen overnight,” and that “[Disruptive innovation] is not a theory about survivability.”
There is something Lepore points out that both she and Christensen can agree on: “disruption” is being oversold and misinterpreted on a wide scale these days. Every new product that rolls out is now branded as “disruptive.” As stated above, the true essence of disruption is creating new markets where the leaders would not tread.
Data itself can potentially be a source of disruption, as data analytics and information emerge as strategic business assets. While the ability to provide data analysis at real-time speeds, or make new insights possible isn’t disruption in the Christensen sense, we are seeing the rise of new business models built around data and information that could bring new leaders to the forefront. Data analytics can either play a role in supporting this movement, or data itself may be the new product or service disrupting existing markets.
We’ve already been seeing this disruption taking place within the publishing industry, for example – companies or sites providing real-time or near real-time services such as financial updates, weather forecasts and classified advertising have displaced traditional newspapers and other media as information sources.
Employing data analytics as a tool for insights never before available within an industry sector may also be part of disruptive innovation. Tesla Motors, for example, is disruptive to the automotive industry because it manufactures entirely electric cars. But a key ingredient of its success is its use of massive amounts of data from its array of in-vehicle devices to assure quality and efficiency.
Likewise, data-driven disruption may be occurring in places that have historically been difficult to innovate. For example, it has long been speculated that some of the digital giants, particularly Google, are poised to enter the long-staid insurance industry. If this were to happen, Google would not enter as a typical insurance company with a new web-based spin. Rather, the company would employ new techniques of data gathering, insight and analysis to offer consumers an entirely new model – one based on data. As Christopher Hernaes recently related in TechCrunch, Google’s ability to collect and mine data on homes, businesses and autos gives it a unique value proposition in the industry’s value chain.
We’re in an era in which Christensen’s model of disruptive innovation has become a way of life. Increasingly, it appears that enterprises adept at recognizing and acting upon the strategic potential of data may join the ranks of the disruptors.
After I graduated from business school, I started reading Fortune Magazine. I guess that I became a regular reader because each issue largely consists of a set of mini-business cases. And over the years, I have even started to read the witty remarks from the managing editor, Andy Serwer. However, this issue’s comments were even more evocative than usual.
Connectivity is perhaps the biggest opportunity of our time
Andy wrote, “Connectivity is perhaps the biggest opportunity of our time. As technology makes the world smaller, it is clear that the countries and companies that connect the best—either in terms of, say, traditional infrastructure or through digital networks—are in the driver’s seat.” Andy sees differentiated connectivity as involving two elements: access and content. This is important to note because Andy believes the biggest winners going forward will be the best connectors to each.
Enterprises need to evaluate how they collect, refine, and make data useful
But how do enterprises establish world-class connectivity to content? I would argue – whether you are talking about large or small data – it comes from improving an enterprise’s ability to collect, refine, and create useful data. Recent CFO research stressed the importance of enterprise data-gathering capabilities. CFOs said that their enterprises need to “get data right,” while confirming that their enterprises do in fact have a data issue. The CFOs said they worry about the integrity of data from the source forward, and, once they manually create clean data, about making it useful to their enterprises. Why does this data matter so much to the CFO? Because as CFOs get more strategic, they are trying to make sure their firms drive synergies across their businesses.
Business need to make sense of data and get it to business users faster
One CFO put it this way: “data is potentially the only competitive advantage left.” Yet another said, “our businesses need to make better decisions from data. We need to make sense of data faster.” At the same time, leading-edge thinkers like Geoffrey Moore have been suggesting that businesses need to move from “systems of record” to “systems of engagement.” This notion underscores the importance of providing more digestible apps, but also of recognizing that the most important apps for business users will provide relevant information for decision making. Put another way, data is clearly becoming fuel for enterprise decision making.
“Data Fueled Apps” will provide a connectivity advantage
For this reason, “data fueled” apps will be increasingly important to the business. Decision makers these days want to practice “management by walking around,” to quote Tom Peters’ book In Search of Excellence. That means having critical, fresh data at their fingertips for each and every meeting. Clearly, organizations that provide this type of data connectivity will establish the connectivity advantage that Serwer suggested in his editor’s comments. This applies to consumer-facing apps as well; Serwer also comments on the impact of Apple and Facebook. Most consumers today are far better informed before they make a purchase. The customer-facing apps that have led the way, such as Amazon’s, provide the relevant information to inform consumers along their purchase journey.
Delivering “Data Fueled Apps” to the Enterprise
But how do you create the enterprise-wide connectivity to power these “Data Fueled Apps”? It is clear from the CFOs’ comments that work is needed here. That work involves creating data that is systematically clean, safe, and connected. Why does the data need to be clean? The CFOs we talked to said that when data is not clean, they have to manually massage it as it moves from system to system. That is not the kind of system of engagement envisioned by Geoffrey Moore. What these CFOs want is to move to a world where they can “access the numbers easily, timely, and accurately.”
Data also needs to be safe. This means that only people with the right access should be able to see data, whether transactional or analytical. This may sound obvious, but very few organizations isolate and secure data as it moves from system to system. And lastly, data needs to be connected. Yet another CFO said, “the integration of the right systems to provide the right information needs to be done so we have the right information to manage and make decisions at the right time.” He continued, “we really care about technology integration and getting it less manual. It means that we can inspect the books halfway through the cycle. And getting less manual means we can close the books even faster. However, if systems don’t talk (connect) to one another, it is a big issue.”
Finally, whether we are discussing big data or small data, we need to make sure the data collected is more relevant and easier to consume. What is needed here is a data intelligence layer that provides easy ways to locate useful data and recommends or guides ways to improve it. This way, analysts and leaders can spend less time searching for or preparing data and more time analyzing it to connect the business dots. This can involve mapping data relationships across all applications and drawing inferences from data to drive real-time responses.
So in this new connected world, we first need to set up a data infrastructure that continuously makes data clean, safe, and connected regardless of use case. The infrastructure may not be needed to collect the data itself, but it is needed to define the connectivity (in the shape of access and content). We also need to make sure this infrastructure is reusable, so that the time from concept to new data-fueled app is minimized. Then, to drive informational meaning, we layer the intelligence on top. With this, we can deliver “data fueled apps” that give business users the access and content to drive better business differentiation and decision making!
Before I joined Informatica, I worked for a health plan in Boston. I managed several programs, including the CMS Five Star Quality Rating System and Risk Adjustment Redesign. We recognized the need for a robust diagnostic profile of our members in support of risk adjustment. However, because the information resides in multiple sources, gathering and connecting the data presented many challenges. I see an opportunity for health plans to transform risk adjustment.
As risk adjustment becomes an integral component in healthcare, I encourage health plans to create a core competency around the development of diagnostic profiles. This should be the case for health plans and ACO’s. This profile is the source of reimbursement for an individual. This profile is also the basis for clinical care management. Augmented with social and demographic data, the profile can create a roadmap for successfully engaging each member.
Why is risk adjustment important?
Risk Adjustment is increasingly entrenched in the healthcare ecosystem. Originating in Medicare Advantage, it is now applicable to other areas. Risk adjustment is mission critical to protect financial viability and identify a clinical baseline for members.
What are a few examples of the increasing importance of risk adjustment?
1) Centers for Medicare and Medicaid (CMS) continues to increase the focus on Risk Adjustment. They are evaluating the value provided to the Federal government and beneficiaries. CMS has questioned the efficacy of home assessments and challenged health plans to provide a value statement beyond the harvesting of diagnoses codes which result solely in revenue enhancement. Illustrating additional value has been a challenge. Integrating data across the health plan will help address this challenge and derive value.
2) Marketplace members will also require risk adjustment calculations. After the first three years, the three “R’s” will dwindle down to one “R”: when Reinsurance and Risk Corridors end, we will be left with Risk Adjustment. To succeed with this new population, health plans need a clear strategy to obtain, analyze and process data. CMS processing delays make risk adjustment even more difficult. A health plan’s ability to manage this information will be critical to success.
3) Dual Eligibles, Medicaid members and ACO’s also rely on risk management for profitability and improved quality.
With an enhanced diagnostic profile — one that is accurate, complete and shared — I believe it is possible to enhance care, deliver appropriate reimbursements and provide coordinated care.
How can payers better enable risk adjustment?
- Facilitate timely analysis of accurate data from a variety of sources, in any format.
- Integrate and reconcile data from initial receipt through adjudication and submission.
- Deliver clean and normalized data to business users.
- Provide an aggregated view of master data about members, providers and the relationships between them to reveal insights and enable a differentiated level of service.
- Apply natural language processing to capture insights otherwise trapped in text based notes.
With clean, safe and connected data, health plans can profile members and identify undocumented diagnoses. With this data, health plans will also be able to create reports identifying providers who would benefit from additional training and support (about coding accuracy and completeness).
What will clean, safe and connected data allow?
- Allow risk adjustment to become a core competency and source of differentiation. Revenue impacts are expanding to lines of business representing larger and increasingly complex populations.
- Educate, motivate and engage providers with accurate reporting. Obtaining and acting on diagnostic data is best done when the member/patient is meeting with the caregiver. Clear and trusted feedback to physicians will contribute to a strong partnership.
- Improve patient care, reduce medical cost, increase quality ratings and engage members.
According to a recent article in the LA Times, healthcare costs in the United States far exceed costs in other countries. For example, heart bypass surgery costs an average of $75,345 in the U.S. compared to $15,742 in the Netherlands and $16,492 in Argentina. In the U.S. healthcare accounts for 18% of the U.S. GDP and is increasing.
Michelle Blackmer is a healthcare industry expert at Informatica. In this interview, she explains why business as usual isn’t good enough anymore. Healthcare organizations are rethinking how they do business in an effort to improve outcomes, reduce costs, and comply with regulatory pressures such as the Affordable Care Act (ACA). Michelle believes a data-driven healthcare culture is foundational to personalized medicine and discusses the importance of clean, safe and connected data in executing a successful transformation.
Q. How is the healthcare industry responding to the rising costs of healthcare?
In response to the rising costs of healthcare, regulatory pressures (i.e., the Affordable Care Act (ACA)), and the need to improve patient outcomes at lower costs, the U.S. healthcare industry is transforming from a volume-based to a value-based model. In this new model, healthcare organizations need to invest in delivering personalized medicine.
To appreciate the potential of personalized medicine, think about your own healthcare experience. It’s typically reactive. You get sick, you go to the doctor, the doctor issues a prescription and you wait a couple of days to see if that drug works. If it doesn’t, you call the doctor and she tries another drug. This process is tedious, painful and costly.
Now imagine if you had a chronic disease like depression or cancer. On average, any given prescription drug only works for half of those who take it. Among cancer patients, the rate of ineffectiveness jumps to 75 percent. Anti-depressants are effective in only 62 percent of those who take them.
Organizations like MD Anderson and UPMC aim to put an end to cancer. They are combining scientific research with access to clean, safe and connected data (data of all types including genomic data). The insights revealed will empower personalized chemotherapies. Personalized medicine offers customized treatments based on patient history and best practices. Personalized medicine will transform healthcare delivery. Click on the links to watch videos about their transformational work.
Q. What role does data play in enabling personalized medicine?
Data is foundational to value-based care and personalized medicine. Not just any data will do. It needs to be clean, safe and connected data. It needs to be delivered rapidly across hallways and across networks.
As an industry, healthcare is at a stage where meaningful electronic data is being generated. Now you need to ensure that the data is accessible and trustworthy so that it can be rapidly analyzed. As data is aggregated across the ecosystem and married with financial and genomic data, data quality issues become more obvious. It’s vital that you can pinpoint the data issues so that people can spend their time analyzing the data to gain insights instead of wading through and manually resolving data quality issues.
The ability to trust data will differentiate leaders from the followers. Leaders will advance personalized medicine because they rely on clean, safe and connected data to:
1) Practice analytics as a core competency
2) Define evidence, deliver best practice care and personalize medicine
3) Engage patients and collaborate to foster strong, actionable relationships
Take a look at this Healthcare eBook for more on this topic: Potential Unlocked: Transforming Healthcare by Putting Information to Work.
Q. What is holding healthcare organizations back from managing their healthcare data like other mission-critical assets?
When you say other mission-critical assets, I think of facilities, equipment, etc. Each of these assets has people and money assigned to manage and maintain them. The healthcare organizations I talk to who are highly invested in personalized medicine recognize that data is mission-critical. They are investing in the people, processes and technology needed to ensure data is clean, safe and connected. The technology includes data integration, data quality and master data management (MDM).
What’s holding other healthcare organizations back is that while they realize they need data governance, they wrongly believe they need to hire big teams of “data stewards” to be successful. In reality, you don’t need to hire a big team. Use the people you already have doing data governance. You may not have made this a formal part of their job description and they might not have data governance technologies yet, but they do have the skillset and they are already doing the work of a data steward.
So while a technology investment is required and you need people who can use the technology, start by formalizing the data stewardship work people are already doing as part of their current jobs. This way, the people who understand the data take an active role in managing it, and they even get excited about it because their work is being recognized. IT takes on the role of enabling these people instead of having responsibility for all things data.
Q. Can you share examples of how immature information governance is a serious impediment to healthcare payers and providers?
Sure. Without information governance, data is not harmonized across sources, so it is hard to make sense of it. This isn't a problem when you are one business unit or one department, but the approach falls apart when you want a comprehensive view, or a view that incorporates external sources of information.
For example, let’s say the cardiology department in a healthcare organization implements a dashboard. The dashboard looks impressive. Then a group of physicians sees the dashboard, points out errors and asks where the information (i.e. diagnosis or attending physician) came from. If you can’t answer these questions, trace the data back to its sources, or if you have data inconsistencies, the dashboard loses credibility. This is an example of how analytics fail to gain adoption and fail to foster innovation.
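Answering "where did this number come from?" requires lineage to be captured when data is loaded, not reconstructed afterward. Here is a minimal sketch of that idea, assuming hypothetical source names (an EHR extract and a cardiology feed) for illustration:

```python
# Illustrative sketch: attach source lineage to every row at load time so
# any dashboard value can be traced back to the system(s) it came from.

def tag_with_source(rows, source_name):
    """Attach provenance metadata (_source) to each row as it is loaded."""
    return [dict(row, _source=source_name) for row in rows]

def trace(rows, field, value):
    """List the sources that contributed a given value for a given field."""
    return sorted({r["_source"] for r in rows if r.get(field) == value})

# Two hypothetical feeds that both report the same attending physician.
ehr = tag_with_source([{"attending": "Dr. Lee"}], "ehr_extract")
feed = tag_with_source([{"attending": "Dr. Lee"}], "cardiology_feed")
combined = ehr + feed

print(trace(combined, "attending", "Dr. Lee"))
# → ['cardiology_feed', 'ehr_extract']
```

With lineage captured this way, the dashboard can answer the physicians' question directly instead of losing credibility.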
Q. Can you share examples of what data-driven healthcare organizations are doing differently?
Certainly, while many are just getting started on their journey to becoming data-driven, I’m seeing some inspiring examples, including:
- Implementing data governance for healthcare analytics. The program and data are owned by the business, enabled by IT, and supported by technology such as data integration, data quality and MDM.
- Connecting information from across the entire healthcare ecosystem including 3rd party sources like payers, state agencies, and reference data like credit information from Equifax, firmographics from Dun & Bradstreet or NPI numbers from the national provider registry.
- Establishing consistent data definitions and parameters
- Thinking about the internet of things (IoT) and how to incorporate device data into analysis
- Engaging patients through non-traditional channels including loyalty programs and social media; tracking this information in a customer relationship management (CRM) system
- Fostering collaboration by understanding the relationships between patients, providers and the rest of the ecosystem
- Analyzing data to understand what is working and what is not working so that they can drive out unwanted variations in care
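That last item, driving out unwanted variation in care, comes down to comparing outcomes across treatment approaches once the underlying data is trustworthy. A minimal sketch, with entirely hypothetical protocol names and outcome fields:

```python
# Illustrative sketch: compare an outcome rate (here, readmission) across
# treatment protocols to surface unwanted variation in care.
from collections import defaultdict

def readmission_rates(cases):
    """Return the readmission rate per protocol."""
    totals = defaultdict(int)
    readmits = defaultdict(int)
    for case in cases:
        totals[case["protocol"]] += 1
        if case["readmitted"]:
            readmits[case["protocol"]] += 1
    return {p: readmits[p] / totals[p] for p in totals}

cases = [
    {"protocol": "A", "readmitted": False},
    {"protocol": "A", "readmitted": False},
    {"protocol": "B", "readmitted": True},
    {"protocol": "B", "readmitted": False},
]
print(readmission_rates(cases))
# → {'A': 0.0, 'B': 0.5}
```

The analysis itself is simple; the hard part, as the interview stresses, is that the cases feeding it must be clean, safe and connected, otherwise the variation you see may be a data artifact rather than a care pattern.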
Q. What advice can you give healthcare provider and payer employees who want access to high quality healthcare data?
As with other organizational assets that deliver value—like buildings and equipment—data requires a foundational investment in people and systems to maximize return. In other words, institutions and individuals must start managing their mission-critical data with the same rigor they manage other mission-critical enterprise assets.
Q. Anything else you want to add?
Yes, I wanted to thank our 14 visionary customer executives at data-driven healthcare organizations such as MD Anderson, UPMC, Quest Diagnostics, Sutter Health, St. Joseph Health, Dallas Children’s Medical Center and Navinet for taking time out of their busy schedules to share their journeys toward becoming data-driven at Informatica World 2014. In our next post, I’ll share some highlights about how they are using data, how they are ensuring it is clean, safe and connected and a few data management best practices. InformaticaWorld attendees will be able to download presentations starting today! If you missed InformaticaWorld 2014, stay tuned for our upcoming webinars featuring many of these examples.