Category Archives: Business/IT Collaboration

What’s In A Name?

Sometimes, the choice of a name has unexpected consequences. Often these consequences aren’t fair, but they exist nonetheless. For an example of this, consider the well-known National Bureau of Economic Research study that compares the hiring prospects of candidates with identical resumes but different names. In the study, titled “A Field Experiment on Labor Market Discrimination,” employers were found to be more likely to reply to candidates with popular, traditionally Caucasian names than to candidates with either unique, eclectic names or traditionally African-American names. Though these biases are clearly unfair to the candidates, they do illustrate a key point: the name you choose for something carries perceptions that influence outcomes.

For an example from the IT world, consider my recent engagement at a regional retail bank. In this engagement, half of the meeting time was consumed by IT and business leaders debating how to label their Master Data Management (MDM) Initiative.  Consider these excerpts:

  • Should we even call it MDM? Answer: No. Why? Because nobody on the business side will understand what that means. Also, as we just implemented a Data Warehouse/Mart last year and we are in the middle of our new CRM roll-out, everybody in business and retail banking will assume their data is already mastered in both of these. On a side note, telcos understand MDM as Mobile Device Management.
  • Should we call it “Enterprise Data Master”? Answer: No. Why? Because unless you roll out all data domains and all functionality (standardization, matching, governance, hierarchy management, etc.) to the whole enterprise, you cannot. And doing so is a bad idea, as it is with every IT project. Boiling the ocean and going live with a big bang is high cost and high risk, and given shifting organizational strategies and leadership, quick successes are needed to sustain the momentum.
  • Should we call it “Data Warehouse – Release 2”? Answer: No. Why? Because it is neither a data warehouse nor a version 2 of one. It is a backbone component required to manage a key organizational ingredient – data – in a way that makes it useful to many use cases, processes, applications and people, not just analytics, although analytics is often the starting block. Data warehouses were neither conceived nor designed to facilitate data quality (they assume it is already there), nor are they designed for real-time interactions. Did anybody ask if ETL is “Pneumatic Tubes – Version 2”?
  • Should we call it “CRM Plus”? Answer: No. Why? Because it was never intended or designed to handle the transactional volume and attribution breadth of high-volume use cases, which are driven by complex business processes. Also, if it were a CRM system, it would have UI capabilities more intricate than comparatively simple data governance workflows and screens.

Consider this: any data quality solution, like MDM, makes an existing workflow or application better at what it does best: managing customer interactions, creating orders, generating correct invoices, etc. To quote a colleague, “we are the BASF of software”. Few people understand what a chemical looks like or does, but it makes a plastic container sturdy, transparent, flexible and light.

I also explained hierarchy management in a similar way. Consider it the LinkedIn network of your company, to which you can attach every interaction and transaction. I see one view, people in my network see a different one, and LinkedIn probably has the most comprehensive view, but ultimately we are all looking at the same core data and structures.

So let’s call the “use” of your MDM “Mr. Clean”, aka Meister Proper, because it keeps everything clean.

While naming is definitely a critical point to consider, given the expectations, fears and reservations that come with MDM and the underlying change management, it was hilarious to see how important it suddenly became. However, it was puzzling to me (maybe a naïve perspective) why mostly recent IT hires had to categorize everything into new, unique functional boxes, while business and legacy IT people wanted to re-purpose existing boxes. I guess the recent IT hires used their approach to showcase familiarity with new technologies and techniques, which was likely a reason for their employment. Business leaders, often with the exception of highly accomplished and well-regarded ones, as well as legacy IT leaders, needed to signal continuity and the absence of disruption or change. Moreover, they also needed to justify the value proposition of their prior software investments.

Aside from company financial performance and regulatory screw-ups, legions of careers will be decided by whether, how, and how successfully this initiative moves forward.

Naming a new car model for a 100,000-unit production run or a shampoo for worldwide sales could hardly face more scrutiny. Software vendors give their future releases internal names of cities like Atlanta or famous people like Socrates instead of descriptive terms like “Gamification User Interface Release” or “Unstructured Content Miner”. This may be a good avenue for banks and retailers to explore. It would avoid the expectation pitfalls associated with names like “Customer Success Data Mart”, “Enterprise Data Factory”, “Data Aggregator” or “Central Property Repository”. In reality, there will be many applications that can claim bits and pieces of the same data, data volume or functionality. Who will make the call on which one gets renamed or replaced, and who will explain to the various consumers what happened to it and why?

You can surely name a customer-facing app something more descriptive like “Payment Central” or “Customer Success Point”, but the reason you can do this is that the user will only have one or maybe two points of interface with the organization. Internal data consumers will interact with many more repositories. I guess this is also the reason I call my kids by their first names, while strangers label them by their full name, “Junior”, “Butter Fingers” or “The Fast Runner”.

I would love to hear other good reasons why naming deserves more scrutiny. Maybe you have some guidance on what should and should not be done, and the reasons for it?

Posted in Business/IT Collaboration, CIO, Data Quality, Master Data Management

The King of Benchmarks Rules the Realm of Averages

A mid-sized insurer recently approached our team for help. They wanted to understand where they fell short in making their case to their executives. Specifically, they proposed that fixing their customer data was key to supporting the executive team’s highly aggressive 3-year growth plan (3x today’s revenue). Given this core organizational mission – aside from being a warm and fuzzy place to work supporting its local community – the slam-dunk solution to help here is simple. Just reducing the data migration effort around the next acquisition or avoiding the ritual annual, one-off data clean-up project already pays for any tool set enhancing data acquisition, integration and hygiene. Will it get you to 3x today’s revenue? Probably not. What will help are the following:

Making the Math Work (courtesy of Scott Adams)

Hard cost avoidance via software maintenance or consulting elimination is the easy part of the exercise. That is why CFOs love it and focus so much on it.  It is easy to grasp and immediate (aka next quarter).

Soft cost reductions, like staff redundancies, are a bit harder. Despite them being viable, in my experience very few decision makers want to work on a business case to lay off staff. My team has had one so far. They look at these savings as freed-up capacity, which can be re-deployed more productively. Productivity is also a bit harder to quantify, as you typically have to understand how data travels and gets worked on between departments.

Revenue effects, however, are even harder to quantify and esoteric to many people because they include projections. They are often considered “soft” benefits, although they outweigh the other areas by 2-3 times in terms of impact. Ultimately, every organization runs its strategy based on projections (see the insurer in my first paragraph).

The hardest to quantify is risk. Not only is it based on projections – often from a third party (Moody’s, TransUnion, etc.) – but few people understand it. More often than not, clients won’t even accept you investigating this area unless you have an advanced degree in insurance math. Nevertheless, risk can generate extra “soft” cost avoidance (a beefed-up reserve account balance creates opportunity cost) but also revenue (realizing a risk premium previously ignored). Often risk profiles change due to relationships, which can be links to new “horizontal” information (transactional attributes) or “vertical” (hierarchical) information from an entity’s parent-child relationships and the parent’s or children’s transactions.

Given the above, my initial advice to the insurer would be to look at the heartache of their last acquisition, use a benchmark for IT productivity from improved data management capabilities (typically 20-26% – Yankee Group), and there you go. This is just the IT side, so consider increasing the upper range by 1.4x (Harvard Business School), as every attribute change (e.g., last mobile view date) requires additional meetings at the manager, director and VP level, and these people’s time gets increasingly more expensive. You could also use Aberdeen’s benchmark of 13 hours per average master data attribute fix instead.
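
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The spend figure is a hypothetical placeholder; the 20-26% and 1.4x factors are simply the benchmarks quoted above:

# A rough sketch of the benchmark math above - the spend figure is a placeholder
it_data_mgmt_spend = 2_000_000          # annual IT spend on data acquisition, integration and hygiene ($), placeholder

yankee_low, yankee_high = 0.20, 0.26    # IT productivity gain from improved data management (Yankee Group)
hbs_uplift = 1.4                        # uplift to the upper range for business-side meeting overhead

low_estimate = it_data_mgmt_spend * yankee_low
high_estimate = it_data_mgmt_spend * yankee_high * hbs_uplift

print(f"Estimated annual productivity benefit: ${low_estimate:,.0f} - ${high_estimate:,.0f}")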

You can also look at productivity areas, which are typically overly measured. Let’s assume a call center rep spends 20% of the average call time of 12 minutes (depending on the call type – account or bill inquiry, dispute, etc.) understanding:

  • Who the customer is
  • What he bought online and in-store
  • If he tried to resolve his issue on the website or store
  • How he uses equipment
  • What he cares about
  • If he prefers call backs, SMS or email confirmations
  • His response rate to offers
  • His/her value to the company

If he spends that 20% of every call stringing together insights from five applications and twelve screens – the same information in every application he touches – instead of seeing it in one frame within seconds, you have just freed up 20% of his hourly compensation.
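
A quick sketch of that calculation; the staffing figures (rep count, hourly cost, productive hours) are hypothetical placeholders, while the 20% lookup share comes from the assumption above:

# Sketch of the call-center productivity math - staffing figures are placeholders
reps = 50                 # number of call center reps (placeholder)
hourly_cost = 30.0        # fully loaded hourly compensation per rep ($, placeholder)
hours_per_year = 1_800    # productive hours per rep per year (placeholder)
lookup_share = 0.20       # share of call time spent stitching data together (from the example above)

annual_capacity_freed = reps * hourly_cost * hours_per_year * lookup_share
print(f"Capacity freed up by a single, consolidated customer view: ${annual_capacity_freed:,.0f} per year")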

Then look at the software, hardware, maintenance and ongoing management of the likely customer record sources (pick the worst and best quality one based on your current understanding), which will end up in a centrally governed instance.  Per DAMA, every duplicate record will cost you between $0.45 (party) and $0.85 (product) per transaction (edit touch).  At the very least each record will be touched once a year (likely 3-5 times), so multiply your duplicated record count by that and you have your savings from just de-duplication.  You can also use Aberdeen’s benchmark of 71 serious errors per 1,000 records, meaning the chance of transactional failure and required effort (% of one or more FTE’s daily workday) to fix is high.  If this does not work for you, run a data profile with one of the many tools out there.
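
Here is the same logic as a small sketch; the duplicate count is a placeholder you would replace with the result of your own data profile, while the cost-per-touch and touches-per-year ranges are the DAMA figures cited above:

# Sketch of the de-duplication savings math - the duplicate count is a placeholder
duplicate_records = 100_000                     # duplicates found by profiling (placeholder)
cost_per_touch = (0.45, 0.85)                   # DAMA cost per edit touch: party vs. product ($)
touches_per_year = (1, 5)                       # each record is touched at least once, likely 3-5 times

savings_low = duplicate_records * cost_per_touch[0] * touches_per_year[0]
savings_high = duplicate_records * cost_per_touch[1] * touches_per_year[1]
print(f"Annual de-duplication savings: ${savings_low:,.0f} - ${savings_high:,.0f}")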

If the sign says it – do it!

If standardization of records (zip codes, billing codes, currency, etc.) is the problem, ask your business partner how many customer contacts (calls, mailings, emails, orders, invoices or account statements) fail outright and/or require validation because of these attributes. Once again, apply the productivity gains mentioned earlier and there are your savings. If you look at the number of orders whose payment or revenue recognition gets delayed by a week or a month, and the average order amount, you have just quantified how much profit (multiply by operating margin) you could pull into the current financial year from the next one.

The same is true for speeding up the introduction of a new product, or a change to one, so it generates profits earlier. Note that the time value of funds realized earlier is too small to matter in most instances, especially in the current interest rate environment.
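
As a sketch, and with the order volume, average order amount and operating margin as hypothetical placeholders, the pull-forward calculation looks like this:

# Sketch of the profit pull-forward math - all inputs are placeholders
delayed_orders = 2_000         # orders whose payment or revenue recognition slips into the next year
avg_order_value = 500.0        # average order amount ($)
operating_margin = 0.12        # operating margin

profit_pulled_forward = delayed_orders * avg_order_value * operating_margin
print(f"Profit pulled into the current financial year: ${profit_pulled_forward:,.0f}")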

If emails bounce back or snail mail gets returned (no such address, no such name at this address, no such domain, no such user at this domain), (e)mail verification tools can help reduce the bounces. If every mail piece (forget email due to its minuscule cost) costs $1.25 – and this will vary by type of mailing (catalog, promotional post card, statement letter) – then incorrect or incomplete records are wasted cost. If you can, use the fully loaded print cost, including third-party data prep and returns handling. You will never capture all cost inputs, but take a conservative stab.

If it was an offer, reduced bounces should also improve your response rate (now also true for email). Prospect mail response rates are typically around 1.2% (Direct Marketing Association), whereas phone response rates are around 8.2%. If you know that your current response rate is half that (for argument’s sake) and you send out 100,000 emails of which 1.3% (Silverpop) have customer data issues, then fixing 81-93% of them (our experience) will drop the bounce rate to under 0.3%, meaning more emails will arrive and be relevant. Multiply that, in turn, by a standard conversion rate of 3% (MarketingSherpa; industry and channel specific) and the average order (your data), then by operating margin, and you get a benefit value for revenue.
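
Strung together, the campaign math looks roughly like this; the email volume, data-issue rate, fix rate and conversion rate are the illustrative figures from the paragraph above, while the average order value and operating margin are placeholders for your own data:

# Sketch of the email campaign math - average order and margin are placeholders
emails_sent = 100_000
data_issue_rate = 0.013        # Silverpop: share of records with customer data issues
fix_rate = 0.87                # midpoint of the 81-93% of issues typically fixable
conversion_rate = 0.03         # MarketingSherpa standard conversion rate (industry and channel specific)
avg_order_value = 150.0        # your data (placeholder)
operating_margin = 0.12        # placeholder

recovered_emails = emails_sent * data_issue_rate * fix_rate
incremental_profit = recovered_emails * conversion_rate * avg_order_value * operating_margin
print(f"Emails recovered by fixing the data: {recovered_emails:,.0f}")
print(f"Incremental profit attributable to the fix: ${incremental_profit:,.0f}")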

If product data and inventory carrying cost or supplier spend are your issue, find out how many supplier shipments you receive every month and the average cost of a part (or cost range), then apply the Aberdeen master data failure rate (71 in 1,000) to use cases around missing or incorrect supersession or alternate part data to assess the value of a single shipment’s overspend. You can also just use the ending inventory amount from the 10-K report and apply a 3-10% improvement (Aberdeen) in a top-down approach. Alternatively, apply 3.2-4.9% to your annual supplier spend (KPMG).
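
Both the bottom-up and the top-down versions of that estimate fit in a few lines; every volume and dollar amount below is a hypothetical placeholder, while the 71-in-1,000, 3-10% and 3.2-4.9% rates are the Aberdeen and KPMG benchmarks cited above:

# Sketch of the supplier-spend math - volumes and amounts are placeholders
shipments_per_month = 5_000
avg_part_cost = 80.0
failure_rate = 71 / 1_000                       # Aberdeen master data failure rate

monthly_overspend = shipments_per_month * avg_part_cost * failure_rate
print(f"Bottom-up estimate of monthly overspend: ${monthly_overspend:,.0f}")

ending_inventory = 20_000_000                   # from the 10-K (placeholder)
annual_supplier_spend = 150_000_000             # placeholder
print(f"Top-down (3-10% of inventory): ${ending_inventory * 0.03:,.0f} - ${ending_inventory * 0.10:,.0f}")
print(f"Top-down (3.2-4.9% of supplier spend): ${annual_supplier_spend * 0.032:,.0f} - ${annual_supplier_spend * 0.049:,.0f}")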

You could also investigate the expediting or return cost of shipments in a period due to incorrectly aggregated customer forecasts, wrong or incomplete product information or wrong shipment instructions in a product or location profile. Apply Aberdeen’s 5% improvement rate and there you go.

Consider that a North American utility told us that just fixing the product information of their 200 Tier 1 suppliers increased their discounts from $14 million to $120 million. They also found that fixing one basic attribute out of sixty in one part category saves them over $200,000 annually.

So what ROI percentages would you find tolerable or justifiable for, say, an EDW project, a CRM project, or a new claims system? What annual savings or new revenue would you be comfortable with? What was the craziest improvement you have seen come to fruition that nobody expected?

Next time, I will add some more “use cases” to the list and look at some philosophical implications of averages.

Posted in Business Impact / Benefits, Business/IT Collaboration, Data Integration, Data Migration, Data Quality, Enterprise Data Management, Master Data Management

How Is The CIO Role Starting To Change?

When you talk to CIOs today, you strongly get the feeling that the CIO role is about to change. One CIO said to me that the CIO is in the midst of “a sea state change”. Recently, I got to talk with half a dozen CIOs about what is most important to their role and how they see the role as a whole changing over the next few years. Their answers were thought provoking and worthy of broader discussion.

CIOs need to be skilled at business alignment

CIOs say it is becoming less and less common for the CIO to come up through the technical ranks. One CIO said it used to be common to become a CIO after being a CTO, but this has changed. More and more business people are becoming CIOs. The CIO role today is “more about understanding the business than understanding technology. It is more about business alignment than technology alignment”. This need for better business alignment led one CIO to say that consulting is a great starting point for a future IT leader. Consulting provides a future IT leader with the following: 1) vertical expertise; 2) technical expertise; and 3) systems integration expertise. Another CIO suggested that the CIO role is sometimes being used these days as a rotational position for a future business leader. “It provides these leaders with technical skills that they will need in their career.” Regardless, it is increasingly clear that business expertise matters much more than technical expertise.

How will the CIO role change?

CIOs, in general, believe that their role will change in the next five years. One CIO insisted that CIOs are going to continue to be incredibly important to their enterprises. However, he said that CIOs have the opportunity to create analytics that guide the business in finding value. For CIOs to do this, they need to connect the dots between transactional systems, BI, and the planning systems. They need to convert data into action. This means they need to enable the business to be proactive and cut the time it takes to execute. In his view, CIOs need to enable their enterprises to generate value that is differentiated from competitors.

Another CIO sees CIOs becoming the orchestrators rather than the builders of business services. This CIO said that “building stuff is now really table stakes”. Cloud and loosely oriented partnerships are bringing vendor management to the forefront. Agreeing with this point of view, a third CIO says that she sees CIOs moving from an IT role into a business role. She went on to say that “CIOs need to understand the business better and be able to partner better with the business. They need to understand the role for IT better and this includes understanding their firm’s business models better”.

A final CIO suggests something even more radical.  He believes that the CIO role will disappear altogether or morph into something new. This CIO claims CIOs have the opportunity to become the chief digital officer or the COO. After all, the CIO is about implementing business processes.

For more technical CIOs, this CIO sees them reverting to CTOs, but he worries that with the increasing importance of the cloud, hardware and platform issues will matter less and this type of role will become less and less relevant. The same CIO notes, in passing, that CIOs screwed up a golden opportunity 10 years ago. At that time, CIOs one by one clawed their way to the table and separated themselves from the CFO. However, once they were at the table, they did not change their game. They continued to talk bits and bytes rather than business issues. And one by one, they are being returned to the CFO to manage.

Parting Thoughts

So change is inevitable. CIOs need to change their game or be changed by external forces. So let’s start the debate right now. How do you see the CIO role changing? Express your opinion. Let’s see where you and the above CIOs agree and, more importantly, where you differ.

Posted in Business/IT Collaboration, CIO

Don’t Fire the CIO, Transform the Business

Transform the Business

The headline on the VentureBeat website this weekend was “Why you should fire your CIO.” The point of the article was that the rest of the executive suite in most organizations is ignorant about IT issues and has abdicated responsibility to the CIO, or builds its own information solutions without sufficient competence in information management. The article suggests that firing the CIO is one way to pass accountability for information management to the business leaders, since there will be no place for them to hide. They simply won’t be able to deflect the decisions to the CIO. (more…)

Posted in Business/IT Collaboration, CIO, Enterprise Data Management, Integration Competency Centers

How Much is Poorly Managed Supplier Information Costing Your Business?

“Inaccurate, inconsistent and disconnected supplier information prohibits us from doing accurate supplier spend analysis, leveraging discounts, comparing and choosing the best prices, and enforcing corporate standards.”

This is a quotation from a manufacturing company executive. It illustrates the negative impact that poorly managed supplier information can have on a company’s ability to cut costs and achieve revenue targets.

Many supply chain and procurement teams at large companies struggle to see the total relationship they have with suppliers across product lines, business units and regions. Why? Supplier information is scattered across dozens or hundreds of Enterprise Resource Planning (ERP) and Accounts Payable (AP) applications. Too much valuable time is spent manually reconciling inaccurate, inconsistent and disconnected supplier information in an effort to see the big picture. All this manual effort results in back office administrative costs that are higher than they should be.

Do these quotations from supply chain leaders and their teams sound familiar?

  • “We have 500,000 suppliers. 15-20% of our supplier records are duplicates. 5% are inaccurate.”
  • “I get 100 e-mails a day questioning which supplier to use.”
  • “To consolidate vendor reporting for a single supplier between divisions is really just a guess.”
  • “Every year 1099 tax mailings get returned to us because of invalid addresses, and we pay a lot of Schedule B fines to the IRS.”
  • “Two years ago we spent a significant amount of time and money cleansing supplier data. Now we are back where we started.”

Join us for a Webinar to find out how to supercharge your supply chain applications with clean, consistent and connected supplier information.

Please join me and Naveen Sharma, Director of the Master Data Management (MDM) Practice at Cognizant, for a Webinar, Supercharge Your Supply Chain Applications with Better Supplier Information, on Tuesday, July 29th at 11 am PT.

During the Webinar, we’ll explain how better managing supplier information can help you achieve the following goals:

  1. Accelerate supplier onboarding
  2. Mitigate the risk of supply disruption
  3. Better manage supplier performance
  4. Streamline billing and payment processes
  5. Improve supplier relationship management and collaboration
  6. Make it easier to evaluate non-compliance with Service Level Agreements (SLAs)
  7. Decrease costs by negotiating favorable payment terms and SLAs

I hope you can join us for this upcoming Webinar!

Posted in Business Impact / Benefits, Business/IT Collaboration, Data Integration, Data Quality, Manufacturing, Master Data Management

And Now, Time for Real Disruptive Innovation

Disruptive Innovation

Lately, there’s been a raging debate over Clayton Christensen’s definition of “disruptive innovation,” and whether it is the key to ultimate success in markets. Christensen, author of the ground-breaking book The Innovator’s Dilemma, says industry-leading firms tend to pursue high-margin, revenue-producing business, leaving the lower-end, less profitable parts of their markets to new, upstart players. For established leaders, there’s often not enough profit in selling to under-served or unserved markets. What happens, however, is that the upstarts gradually move up the food chain with their new business models, eating into the leaders’ positions and either chasing them upstream or out of business.

The interesting thing is that many of the upstarts do not even intend to take on the market leader in the segment. Christensen cites the classic example of Digital Equipment Corporation in the 1980s, which was unable to make the transition from large, expensive enterprise systems to smaller, PC-based equipment. The PC upstarts in this case did not take on Digital directly – rather they addressed unmet needs in another part of the market.

Christensen wrote and published The Innovator’s Dilemma more than 17 years ago, but his message keeps reverberating across the business world. Lately, Jill Lepore questioned some of the thinking that has evolved around disruptive innovation in a New Yorker article. “Disruptive innovation is a theory about why businesses fail. It’s not more than that. It doesn’t explain change. It’s not a law of nature,” she writes. Christensen responded with a rebuttal to Lepore’s thesis, noting that “disruption doesn’t happen overnight,” and that “[Disruptive innovation] is not a theory about survivability.”

There is something Lepore points out that both she and Christensen can agree on: “disruption” is being oversold and misinterpreted on a wide scale these days. Every new product that rolls out is now branded as “disruptive.” As stated above, the true essence of disruption is creating new markets where the leaders would not tread.

Data itself can potentially be a source of disruption, as data analytics and information emerge as strategic business assets. While the ability to provide data analysis at real-time speeds, or make new insights possible isn’t disruption in the Christensen sense, we are seeing the rise of new business models built around data and information that could bring new leaders to the forefront. Data analytics can either play a role in supporting this movement, or data itself may be the new product or service disrupting existing markets.

We’ve already been seeing this disruption taking place within the publishing industry, for example – companies or sites providing real-time or near real-time services such as financial updates, weather forecasts and classified advertising have displaced traditional newspapers and other media as information sources.

Employing data analytics as a tool for insights never before available within an industry sector may also be part of disruptive innovation. Tesla Motors, for example, is disruptive to the automotive industry because it manufactures entirely electric cars. But the formula for its success is its use of massive amounts of data from its array of in-vehicle devices to assure quality and efficiency.

Likewise, data-driven disruption may be occurring in places that have been difficult to innovate in. For example, it has long been speculated that some of the digital giants, particularly Google, are poised to enter the staid insurance industry. If this were to happen, Google would not enter as a typical insurance company with a new web-based spin. Rather, the company would employ new techniques of data gathering, insight and analysis to offer consumers an entirely new model – one based on data. As Christopher Hernaes recently related in TechCrunch, Google’s ability to collect and mine data on homes, businesses and autos gives it a unique value proposition in the industry’s value chain.

We’re in an era in which Christensen’s mode of disruptive innovation has become a way of life. Increasingly, it appears that enterprises that are adept at recognizing and acting upon the strategic potential of data may join the ranks of the disruptors.

Posted in Business/IT Collaboration, Data Transformation, Governance, Risk and Compliance

The Business Case for Better Data Connectivity

After I graduated from business school, I started reading Fortune Magazine. I guess I became a regular reader because each issue largely consists of a set of mini business cases. Over the years, I have even started to read the witty remarks from the managing editor, Andy Serwer. This issue’s comments, however, were even more evocative than usual.

Connectivity is perhaps the biggest opportunity of our time

Andy wrote, “Connectivity is perhaps the biggest opportunity of our time. As technology makes the world smaller, it is clear that the countries and companies that connect the best—either in terms of, say, traditional infrastructure or through digital networks—are in the drivers’ seat”. Andy sees differentiated connectivity as involving two elements: access and content. This is important to note because Andy believes the biggest winners going forward are going to be the best connectors to each.

Enterprises need to evaluate how they collect, refine, and make data useful

But how do enterprises establish world-class connectivity to content? I would argue that – whether you are talking about large or small data – it comes from improving an enterprise’s ability to collect, refine, and create useful data. Recent CFO research stressed the importance of enterprise data-gathering capabilities. CFOs said that their enterprises need to “get data right” at the same time as they confirmed that their enterprises in fact have a data issue. The CFOs said that they are worried about the integrity of data from the source forward. And once they manually create clean data, they worry about making this data useful to their enterprises. Why does this data matter so much to the CFO? Because as CFOs get more strategic, they are trying to make sure their firms drive synergies across their businesses.

Businesses need to make sense of data and get it to business users faster

One CFO said it this way: “data is potentially the only competitive advantage left”. Yet another said, “our business needs to make better decisions from data. We need to make sense of data faster.” At the same time, leading-edge thinkers like Geoffrey Moore have been suggesting that businesses need to move from “systems of record” applications to “systems of engagement” applications. This notion suggests the importance of providing more digestible apps, but also the importance of recognizing that the most important apps for business users will provide relevant information for decision making. Put another way, data is clearly becoming fuel for enterprise decision making.

“Data Fueled Apps” will provide a connectivity advantage

For this reason, “data fueled” apps will be increasingly important to the business. Decision makers these days want to practice “management by walking around”, to quote Tom Peters’ book In Search of Excellence. And this means having critical, fresh data at their fingertips for each and every meeting. Clearly, organizations that provide this type of data connectivity will establish the connectivity advantage that Serwer suggested in his editor’s comments. This of course applies to consumer-facing apps as well. Serwer also comments on the impacts of Apple and Facebook. Most consumers today are far better informed before they make a purchase. The customer-facing apps that have led the way, for example Amazon’s, have provided the relevant information to inform consumers on their purchase journey.

Delivering “Data Fueled Apps” to the Enterprise

But how do you create the enterprise-wide connectivity to power “data fueled apps”? It is clear from the CFOs’ comments that work is needed here. That work involves creating data that is systematically clean, safe, and connected. Why does this data need to be clean? The CFOs we talked to said that when the data is not clean, they have to manually massage it as it moves from system to system. This is not the kind of system of engagement envisioned by Geoffrey Moore. What this CFO wants is to move to a world where he can “access the numbers easily, timely, and accurately”.

Data also needs to be safe. This means that only people with access rights should be able to see data, whether we are talking about transactional or analytical data. This may sound obvious, but very few organizations isolate and secure data as it moves from system to system. And lastly, data needs to be connected. Yet another CFO said, “the integration of the right systems to provide the right information needs to be done so we have the right information to manage and make decisions at the right time”. He continued by saying, “we really care about technology integration and getting it less manual. It means that we can inspect the books halfway through the cycle. And getting less manual means we can close the books even faster. However, if systems don’t talk (connect) to one another, it is a big issue”.

Finally, whether we are discussing big data or small data, we need to make sure the data collected is more relevant and easier to consume. What is needed here is a data intelligence layer that provides easy ways to locate useful data and recommends or guides ways to improve it. This way, analysts and leaders can spend less time searching for or preparing data and more time analyzing it to connect the business dots. This can involve mapping data relationships across all applications and being able to draw inferences from data to drive real-time responses.

So in this new connected world, we first need to set up a data infrastructure that continuously makes data clean, safe, and connected regardless of use case. The infrastructure might not be needed to collect data, but it may be needed to define the connectivity (in the shape of access and content). We also need to make sure that the infrastructure for doing this is reusable, so that the time from concept to new data fueled app is minimized. And then, to drive informational meaning, we need to layer the intelligence on top. With this, we can deliver “data fueled apps” that give business users the access and content to drive better business differentiation and decision making!

Posted in Architects, Business/IT Collaboration, CIO, Data Quality, Data Security

To Engage Business, Focus on Information Management rather than Data Management

Focus on Information Management

IT professionals have been pushing an Enterprise Data Management agenda, rather than an Information Management one, for decades and are frustrated with the lack of business engagement. So what exactly is the difference between Data Management and Information Management, and why does it matter? (more…)

Posted in Architects, Business Impact / Benefits, Business/IT Collaboration, CIO, Data Governance, Data Integration, Enterprise Data Management, Integration Competency Centers, Master Data Management

Health Plans, Create Competitive Differentiation with Risk Adjustment

Exploring Risk Adjustment as a Source of Competitive Differentiation

Risk adjustment is a hot topic in healthcare. Today, I interviewed my colleague Noreen Hurley to learn more. Noreen, tell us about your experience with risk adjustment.

Before I joined Informatica, I worked for a health plan in Boston. I managed several programs, including the CMS Five Star Quality Rating System and Risk Adjustment Redesign. We recognized the need for a robust diagnostic profile of our members in support of risk adjustment. However, because the information resides in multiple sources, gathering and connecting the data presented many challenges. I see the opportunity for health plans to transform risk adjustment.

As risk adjustment becomes an integral component in healthcare, I encourage health plans to create a core competency around the development of diagnostic profiles. This should be the case for health plans and ACOs. This profile is the source of reimbursement for an individual. It is also the basis for clinical care management. Augmented with social and demographic data, the profile can create a roadmap for successfully engaging each member.

Why is risk adjustment important?

Risk adjustment is increasingly entrenched in the healthcare ecosystem. Originating in Medicare Advantage, it is now applicable to other areas. Risk adjustment is mission critical to protect financial viability and identify a clinical baseline for members.

What are a few examples of the increasing importance of risk adjustment?

1) The Centers for Medicare and Medicaid Services (CMS) continues to increase its focus on risk adjustment. CMS is evaluating the value provided to the Federal government and beneficiaries. It has questioned the efficacy of home assessments and challenged health plans to provide a value statement beyond the harvesting of diagnosis codes, which results solely in revenue enhancement. Illustrating additional value has been a challenge. Integrating data across the health plan will help address this challenge and derive value.

2) Marketplace members will also require risk adjustment calculations. After the first three years, the three “R’s” will dwindle down to one “R”. When Reinsurance and Risk Corridors end, we will be left with Risk Adjustment. To succeed with this new population, health plans need a clear strategy to obtain, analyze and process data. CMS processing delays make risk adjustment even more difficult. A health plan’s ability to manage this information will be critical to success.

3) Dual Eligibles, Medicaid members and ACOs also rely on risk adjustment for profitability and improved quality.

With an enhanced diagnostic profile — one that is accurate, complete and shared — I believe it is possible to enhance care, deliver appropriate reimbursements and provide coordinated care.

How can payers better enable risk adjustment?

  • Facilitate timely analysis of accurate data from a variety of sources, in any  format.
  • Integrate and reconcile data from initial receipt through adjudication and  submission.
  • Deliver clean and normalized data to business users.
  • Provide an aggregated view of master data about members, providers and the relationships between them to reveal insights and enable a differentiated level of service.
  • Apply natural language processing to capture insights otherwise trapped in text based notes.

With clean, safe and connected data,  health plans can profile members and identify undocumented diagnoses. With this data, health plans will also be able to create reports identifying providers who would benefit from additional training and support (about coding accuracy and completeness).

What will clean, safe and connected data allow?

  • Allow risk adjustment to become a core competency and source of differentiation.  Revenue impacts are expanding to lines of business representing larger and increasingly complex populations.
  • Educate, motivate and engage providers with accurate reporting.  Obtaining and acting on diagnostic data is best done when the member/patient is meeting with the caregiver.  Clear and trusted feedback to physicians will contribute to a strong partnership.
  • Improve patient care, reduce medical cost, increase quality ratings and engage members.
Posted in B2B, B2B Data Exchange, Business Impact / Benefits, Business/IT Collaboration, CIO, Customer Acquisition & Retention, Data Governance, Data Integration, Enterprise Data Management, Healthcare, Master Data Management, Operational Efficiency