Category Archives: Healthcare

INFAgraphic: Transforming Healthcare by Putting Information to Work



Death of the Data Scientist: Silver Screen Fiction?

Maybe the word “death” is a bit strong, so let’s say “demise” instead.  Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they will have to operate for maximum value.  The thing I found rather disturbing was that it apparently takes a PhD – probably a few of them – in a variety of math areas to give executives the insight they need to make better decisions, ranging from what product to develop next to whom to sell it and where.

Who will walk the next long walk…. (source: Wikipedia)

Don’t get me wrong – this is mixed news for any enterprise software firm that helps businesses locate, acquire, contextually link, understand and distribute high-quality data.  The existence of such a high-value role validates product development, but it also limits adoption.  It is also great news that data has finally gotten the attention it deserves.  But I am starting to ask myself why it always takes individuals with a one-in-a-million skill set to add value.  What happened to the democratization of software?  Why is the design starting point for enterprise software not more like a B2C application, such as an iPhone app, where simpler is better?  Why is it always a gradual “Cold War” evolution instead of a near-instant French Revolution?

Why do development environments for Big Data not accommodate limited or existing skills, instead of always catering to the most complex scenarios?  Well, the answer could be that the first customers are very large, very complex organizations with super-complex problems they have been unable to solve so far.  But if analytical apps have become a self-service proposition for business users, data integration should be as well.  So why does access to fast-moving, diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape, and a PhD to query and interpret the patterns?

I realize new technologies start from a foundation, and as they spread, supply will attempt to catch up and create an equilibrium.  However, this is a problem that has existed for decades in many industries, such as the oil & gas, telecommunications, public and retail sectors.  Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me, “Yes, we’ve got that – and by the way, we have been dealing with this reality for a long time.”  By now I would have expected the skill (cost) side of turning data into meaningful insight to have been driven down more significantly.

Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm.  I cannot wait to see what’s next – and I just saw something recently that got me very excited.  Why, you ask?  Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (a Hadoop cluster, an in-memory database…) and massage it so that a self-created dashboard gets him or her where they need to go.  This should cover questions like: “Where is the data I need for this insight?”, “What is missing and how do I get that piece in the best way?” and “How do I want it to look when I share it?”  All that should be required is a working knowledge of Excel and PowerPoint to get your hands on advanced Big Data analytics.  Don’t you think?  Do you believe that this role will disappear as quickly as it has surfaced?


Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money

I had not used my personal encounter with bad data management in over a year, but a couple of weeks ago I was compelled to revive it.  Why, you ask?  Well, a complete stranger started to receive one of my friends’ text messages – including mine – and it took days for him to detect it; a week later, nobody at this North American wireless operator had been able to fix it.  This coincided with a meeting I had with a European telco’s enterprise architecture team.  There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down because just one tiny thing changes – say, his address (or in the SMS case, some random SIM mapping – another type of address).

Imagine the cost of other bad data (thecodeproject.com)

In my case, I moved about 250 miles within the United States a couple of years ago, and this seemingly common experience triggered a plethora of communication screw-ups across every merchant a residential household engages with frequently: your bank, your insurer, your wireless carrier, your average retail clothing store, etc.

For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:

  • My old satellite TV provider got to me (correct person) but with a misspelled last name at my correct, new address.
  • My bank put me in a bit of a pickle: they sent “important tax documentation” that I did not want to open, as my new tenants’ names (in the house I had just vacated) were on the letter, but with my new home’s address.
  • My mortgage lender sends me a refinancing offer to my new address (right person & right address) but with my wife’s name and mine completely butchered.
  • My wife’s airline, where she enjoys the highest level of frequent flyer status, continually mails her offers duplicating her last name as her first name.
  • A high-end furniture retailer sends two 100-page glossy catalogs probably costing $80 each to our address – one for me, one for her.
  • A national health insurer sends “sensitive health information” (disclosed on envelope) to my new residence’s address but for the prior owner.
  • My legacy operator turns on the wrong premium channels on half my set-top boxes.
  • The same operator sends me an SMS the next day thanking me for switching to electronic billing as part of my move – which I never signed up for – followed by payment notices (since I did not get my invoice in the mail).  When I pointed out this error over the next three months by calling their contact center and noting how much revenue I generate for them across all services, they countered with “sorry, we don’t have access to the wireless account data”, “you will see it change on the next bill cycle” and “you show as paper billing in our system today”.

Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer, and how much money you need to spend with a merchant (and how much they need to waste), before they take changes to your data more seriously.  And these are not even merchants to whom I am brand new – these companies have known me and taken my money for years!

One thing I nearly forgot: these mailings all happened at least once a month on average, sometimes twice, over two years.  If I do some pigeon math here, I would estimate the postage and production cost alone to run in the hundreds of dollars.

However, the most egregious trespass belonged to my homeowner’s insurance carrier (HOI), who was also my mortgage broker.  They had a double whammy in store for me.  First, I received a cancellation notice from the HOI for my old residence indicating they had cancelled my policy because the last payment was not received and that any claims would be denied as a consequence.  Then, my new residence’s HOI advised that they had added my old home’s policy to my account.

After wondering what I could have possibly done to trigger this, I called all four parties (not three, as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.

It turns out that I had to explain and prove to all of them how one party’s data change during my move erroneously exposed me to liability.  It felt like the old days, when seedy telco salespeople needed only your name and phone number, tied to some sort of promotion you never took part in (the back of a raffle card to win a new car), to switch your long-distance carrier and present you with a $400 bill the following month.  Yes, that also happened to me…many years ago.  Here again, the consumer had to do all the legwork when someone (not an automatic process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.

We can argue all day long whether these screw-ups are due to bad processes or bad data, but in reality even processes are triggered by some underlying event – something as mundane as a database field’s flag being updated when your last purchase puts you in a new marketing segment.

Now imagine you get married and your wife changes her name.  With all the company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could get picked up more quickly and automatically.  If not automatically, then should there not be some sort of trigger to kick off a “governance” process – something along the lines of “email/call the customer if attribute X has changed” or “please log into your account and update your information – we heard you moved”?  If American Express was able to detect ten years ago that someone purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why not your bank or insurer, who know even more about you?  And yes, that happened to me as well.
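To make the idea of a change-triggered “governance” process a bit more concrete, here is a minimal sketch of the sort of rule I have in mind. Everything in it – attribute names, the notification channel, the notion of a customer-initiated change – is hypothetical and purely illustrative, not any vendor’s API or product:

```python
# Hypothetical sketch: fire a "verify with customer" task whenever a
# sensitive attribute changes through a channel the customer did not initiate.
from dataclasses import dataclass
from typing import Callable

SENSITIVE_ATTRIBUTES = {"last_name", "mailing_address", "billing_method"}

@dataclass
class AttributeChange:
    customer_id: str
    attribute: str
    old_value: str
    new_value: str
    initiated_by_customer: bool  # e.g. self-service portal vs. back-office edit

def governance_check(change: AttributeChange, notify: Callable[[str, str], None]) -> None:
    """Kick off a confirmation workflow instead of silently applying the change."""
    if change.attribute in SENSITIVE_ATTRIBUTES and not change.initiated_by_customer:
        notify(
            change.customer_id,
            f"We noticed your {change.attribute} changed from "
            f"'{change.old_value}' to '{change.new_value}'. "
            "Please log into your account to confirm.",
        )

# Example usage with a stand-in notifier
governance_check(
    AttributeChange("C-1024", "mailing_address", "12 Oak St", "98 Pine Ave", False),
    notify=lambda cid, msg: print(f"[{cid}] {msg}"),
)
```

The point is not the code itself but the shape of the rule: a data change, not a billing cycle, is what kicks off the conversation with the customer.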

Tell me about one of your “data-driven” horror scenarios.


Where Is My Broadband Insurance Bundle?

As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast.  If you ask participants in a workshop at any carrier – no matter if life, P&C, health or excess – they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”

Buying insurance like broadband

Now let me be clear: while communication service providers offer very sophisticated bundles, they are still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store).  They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).

Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center.  Across all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service-line specific, while supporting functions from lead development and underwriting to claims adjudication often handle more than one type of claim.

This lack of performance is worsened even more by sub-optimal campaign response and conversion rates.  As touchpoint-enabling CRM applications also suffer from incomplete or inconsistent contact preference information, interactions may violate local privacy regulations.  In addition, service centers may capture leads only to log them into a black-box AS/400 policy system, where they disappear.

Here again we often hear that the fix could just happen by scrubbing data before it goes into the data warehouse.  However, the data typically does not sync back to the source systems, so any interaction with a client via chat, phone or face-to-face will not have real-time, accurate information to execute a flawless transaction.

On the insurance IT side we also see enormous overhead: from scrubbing every database from source via staging to the analytical reporting environment every month or quarter, to one-off clean-up projects for the next acquired book of business.  For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub.  This figure translates to an ROI of between 600% and 900%, depending on requirement complexity, distribution model, IT infrastructure and service lines, and it includes some baseline revenue improvements, productivity gains, and cost avoidance as well as cost reduction.
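To make the ROI arithmetic concrete, here is a toy back-of-the-envelope calculation using only the benefit figure quoted above. It is my own simplification – the real business case involves many more inputs – and it simply backs into the project cost implied by a given ROI percentage:

```python
# Illustrative arithmetic only: given the annual benefit figure above,
# what annual cost would produce a 600-900% ROI?
annual_benefit = 13_100_000  # ca. $13.1M, from the text

def implied_cost(benefit: float, roi_pct: float) -> float:
    """ROI% = (benefit - cost) / cost * 100  =>  cost = benefit / (1 + ROI/100)."""
    return benefit / (1 + roi_pct / 100)

for roi in (600, 900):
    print(f"ROI of {roi}% implies an annual cost of roughly ${implied_cost(annual_benefit, roi):,.0f}")
# ROI of 600% implies an annual cost of roughly $1,871,429
# ROI of 900% implies an annual cost of roughly $1,310,000
```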

On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process and law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses.  This makes budgeting for items like nurse-staffed medical advice lines, sales compensation planning and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, which makes the life of the pricing teams challenging.

Over in the life insurance category, whole and universal life plans now face a situation where high-value clients first saw lower-than-expected yields due to the low interest rate environment, on top of front-loaded fees and the front-loading of the cost of the term component.  Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum yield commitments to keep policies in force.

This means that insurers need a full profile of clients as they experience life changes like a move, the loss of a job, a promotion or a birth.  Such changes require the proper mitigation strategy, which can be employed to protect a baseline of coverage in order to maintain or improve the premium.  This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.

Overall, without a true, timely and complete picture of a client and his or her personal and professional relationships over time – and of which strategies were presented, considered appealing and ultimately put in force – how will margins improve?  Surely social media data can help here, but it should be a second step after mastering what is already available in-house.  What are some of your experiences with how carriers have tried to collect and use core customer data?

Disclaimer:
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer depends upon a variety of factors, many of which are not under Informatica’s control.  Nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.

Squeezing the Value out of the Old Annoying Orange

Most people in the software business would agree that it is tough enough to calculate – and hence financially justify – the purchase or build of an application, especially middleware, to a business leader or even a CIO.  Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) so end users can be more efficient in engaging accounts.  Some of these have actually migrated to targeting improvements toward customers rather than their logical placeholders, like accounts.  Similar strides have been made in the realm of other party types (vendor, employee) as well as product data.  They also tackle analyzing larger or smaller data sets and providing a visual set of clues on how to interpret historical or predictive trends in orders, bills, usage, clicks, conversions, etc.

Squeeze that Orange

If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, and who maintained it, when and how.  Depending on the business model, you may even have to figure out who built it or owns it.  All of this has bottom-line effects on how, by whom and when expenses are paid and revenues get realized and recognized.  And then there is the added complication that these dimensions of hardware are often fairly dynamic: assets can change ownership and/or physical location and hence tax treatment, insurance risk, etc.

Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck or components within these assets.  Over time, with new technologies and acquisitions coming about, the systems that plan for, install and maintain these assets become very departmentalized in scope and specialized in function.  The application that designs an asset for department A or region B is not the one accounting for its value, which is not the one reading its operational status, which is not the one scheduling maintenance, which is not the one billing for any repairs or replacement.  The same folks who said the data warehouse was the “golden copy” now say the “new ERP system” is the new central source for everything.  Practitioners know this is either naiveté or maliciousness.  And then there are manual adjustments…

Moreover, to truly squeeze value out of these assets as they are installed and upgraded, the massive amounts of data they generate in a myriad of formats and intervals need to be understood, moved, formatted, fixed, interpreted at the right time and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.

I wish I could tell you one application does it all, but the unsurprising reality is that it takes a concoction of several.  Few, if any, asset lifecycle-supporting legacy applications will be retired, as they often house data in formats commensurate with the age of the assets they were built for.  It makes little financial sense to shut down these systems in a big-bang approach; rather, you migrate region after region and process after process to the new system.  After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old.  It is also probably easier to make the often-required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.

So what do you do in the meantime, until all the relevant data is in a single system, to get an enterprise-level way to fix your asset Tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild?  Most companies that operate asset-heavy, fixed-cost business models do not want a disruption but a steady tuning effect (squeezing the data orange) – something rather unsexy in this internet day and age.  This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to be exploited.  The fact is, though, that in order to improve the bottom line, we had better get going, even if it is with baby steps.

If you are aware of business models and their difficulties in leveraging data, write to me.  If you know of an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts.  Next time, I will share some examples of how certain industries try to work in this environment, what they envision and how they go about getting there.


Improving CMS Star Ratings… The Secret Sauce

Many of our customers are Medicare health plans and one thing that keeps coming up in conversation is how they can transform business processes to improve star ratings. For plans covering health services, the overall score for quality of those services covers 36 different topics in 5 categories:

1. Staying healthy: screenings, tests, and vaccines

2. Managing chronic (long-term) conditions

3. Member experience with the health plan

4. Member complaints, problems getting services, and improvement in the health plan’s performance

5. Health plan customer service

Based on member feedback and activity in each of these areas, health plans receive a rating (1-5 stars), which is published and made available to consumers.  These ratings play a critical role in plan selection each fall.  The rating holds obvious value, as consumers are increasingly “Yelp-minded,” meaning they look to online reviews from peer groups to make buying decisions.  Even with this realization, though, improving ratings is a challenge.  There are the typical complexities of any survey: capturing a representative respondent pool, members being unduly influenced by a single negative event, and common emotional biases.  There are also less obvious challenges associated with the data.

For example, a member with CHF may visit north of eight providers in a month and may or may not follow through on prescribed preventative care measures.  How does CMS successfully capture the clinical and administrative data on each of these visits when patient information may be captured differently at each location?  How does the health plan ensure that the CMS interpretation matches its own interpretation of the visit data?  In many cases, our customers have implemented an enterprise data warehouse and are doing some type of claims analysis, but this analysis requires capturing new data and analyzing data in new ways.

We hear that those responsible for member ratings, retention and acquisition routinely wait more than six months to have a source or data element added to a reporting database.  The cycle time is too great to make a quick and meaningful impact on the ratings.

Let’s continue this discussion next week during your morning commute.

Join me as I talk with Frank Norman, a Healthcare Partner at Knowledgent.

During this “drive time” webinar series, health plans will learn how to discover insights to improve CMS Star ratings.

Part 1 of the webinar series: Top 5 Reasons Why Improving CMS Star Ratings is a Challenge

Part 2 of the webinar series: Using Your Data to Improve CMS Star Ratings

Part 3 of the webinar series: Automating Insights into CMS Star Ratings


What’s worse, a disaster or your company’s delayed response?

Just in time for Halloween, I’m sharing a scary story. Warning: this is a true story. You may wonder:

  • Could this happen to me?
  • Can this situation be avoided?
  • How can I prevent this from happening to me?

Last summer, the worst wildfire in Colorado history burned hundreds of acres, destroyed 360 homes, killed two people and forced 38,000 people to evacuate the area.

When disaster strikes, how long will it take your organization to identify, contact and communicate with employees who need to be evacuated or deployed elsewhere? It took one company 2 days. Photo credit: Helen H. Richardson, The Denver Post

Unfortunately, it was during the Colorado wildfire that a large integrated healthcare provider, with hospitals, doctors, healthcare providers and employees located throughout the United States (and who shall remain nameless), realized they had a problem.  They couldn’t respond to the disaster in real time by mobilizing their workforce quickly.  They struggled to identify, contact and communicate with doctors, healthcare providers and employees located in the disaster area to warn them not to go to the hospital, or to redirect them to alternative sites where they could help.

This healthcare provider’s inability to respond to this disaster in real time was an “Aha” moment. What was holding them back was a major information problem. Because their employee information was scattered across hundreds of systems, they couldn’t pull a single, comprehensive and accurate list of doctors, healthcare providers and employees in the disaster area. They didn’t know which employees needed to be evacuated or which could be sent to assist people in other locations. So, they had to email everyone in the company.

The good news is that we’re in the process of helping them create and maintain a central location called an “employee master,” built on our data integration, data quality and master data management (MDM) software.  This will be their go-to place for an up-to-date, complete and accurate list of employees and their contact information – such as work email, phone, pager (doctors still use them), home phone and personal email – as well as their location, so they know exactly who is working where and how best to contact them.

This healthcare provider will no longer be held back by an information problem. In three months, they’ll be able to respond to disasters in real time by mobilizing their workforce quickly.
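To illustrate what an employee master makes trivial, here is a minimal, purely hypothetical sketch of the kind of lookup you want during a disaster: everyone assigned to facilities in the affected area, with every known way to reach them. The field names and data are placeholders of my own, not this customer’s model or any product’s schema:

```python
# Hypothetical sketch of a disaster-notification query against an "employee master".
from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    role: str
    facility_zip: str
    contacts: dict = field(default_factory=dict)  # e.g. {"mobile": ..., "pager": ..., "email": ...}

def notify_list(employees: list[Employee], affected_zips: set[str]) -> list[Employee]:
    """A single, targeted list of people to contact, instead of emailing the whole company."""
    return [e for e in employees if e.facility_zip in affected_zips]

staff = [
    Employee("Dr. A. Rivera", "Physician", "80903", {"pager": "555-0100", "email": "ar@example.org"}),
    Employee("J. Chen", "RN", "80501", {"mobile": "555-0101"}),
]
for person in notify_list(staff, affected_zips={"80903", "80906"}):
    print(person.name, person.contacts)
```

The hard part, of course, is not the query – it is having one accurate, de-duplicated source to run it against.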

An interesting side note: immediately before our Informatica team of experts arrived to talk to this healthcare provider about how we could help, there was a power outage in the building.  They struggled to alert the employees who were impacted.  So our team personally experienced the pain of this organization’s employee information problem.

When disaster strikes, will you be ready to respond in real time? Or do you have an information problem that could hold you back from mobilizing your own employees?

I want your opinion. Are you interested in more scary stories? Let me know in the comments below. I’m thinking about making this a regular series.

The Five Days of National Health IT Week – Day 5

I started this series by observing that healthcare IT finds itself in a rather remarkable place as we look to Health IT Week as a focus for reflection. The adoption of EHRs is widespread, and these applications are facilitating a rapidly growing trove of data with virtually unimaginable potential. As an industry, we’re poised to take the lessons learned from other industries that have gone before us and realize the full potential of our data much more rapidly, and without the failures we would otherwise experience. The promise of big data looms large in terms of its potential to drive a shift from the treatment of disease to the promotion and management of health. And lastly, the beginnings of a shift to paying for value rather than activity are starting to align financial motivations with clinical quality, to drive unprecedented quality, efficiency, safety and value from what can – and should be – the best healthcare available anywhere in the world.

So after four days I think it’s fair to put in a plug for Informatica and the role we have to play in helping transform healthcare. When I speak on our vision for data-driven healthcare, I highlight three core capabilities that are required for healthcare organizations to realize the full potential of their data:

  1. Data integration must be a core competency. Organizations must be able to rapidly connect to new sources of data; profile the quality of the data and apply data quality rules to prevent incorrect and erroneous data from being used to make decisions; map and transform this high-quality data into new formats; and load the cleansed data into a wide variety of target systems, which may include an enterprise data warehouse. All of this work must be accomplished in an environment that provides end-to-end transparency, from source to target, into everything that has been done to the data along the way, since it is only with this transparency that business and clinical end users can build the faith in the data required to move from questioning its accuracy to using data as the basis for rapid and effective decision-making.
  2. All healthcare participants and relationships must be clearly understood and documented. Delivering efficient, high-value care requires that we know with confidence who all the participants in the healthcare process are, and measuring performance and being proactive also require that we know the relationships between those participants. For instance, knowing that John Smith and Jonnie Smyth are the same patient is key to achieving the 360-degree view of the patient required for making the right healthcare decisions for John and managing his health (see the matching sketch after this list). Similarly, knowing that Geoff Johnsen MD and Jeff Johnson MD are in fact the same provider is equally important if we are going to capture his practice patterns and profile the care he provides. But equally, if not more, important to becoming data driven is knowing with confidence that Dr. Johnsen is Mr. Smyth’s primary care physician. With this fact, we can quickly attribute patients to providers for the purposes of quality and outcome reporting, deliver proactive alerts to Dr. Johnsen whenever one of his patients goes to the emergency room, and so on. And this concept of trustworthy data about participants and relationships cannot be limited simply to patients and providers – it must include employees, members, locations, implantable medical devices and legal entities. Basically, anything we want to develop a 360-degree view around must be managed as trustworthy master data, so that when the data is needed it can simply be consumed without fear of errors.
  3. Organizations must be able to take action on what they know. Healthcare delivery is fragmented across organizations, facilities, hospitals, doctors, specialists and departments, to name just a few. Our information systems are similarly fragmented, having grown up supporting the silos of care delivery, and these facts make it very difficult for those same information systems to deliver the decision support required to promote coordinated care that spans care settings. To be successful, healthcare organizations need a solution that can deliver alerts in near real time (during the moment of interaction with the patient): provide health maintenance reminders; deliver key information, such as a warning that a patient should not be discharged with a particular set of unresulted studies; or push discharge history and care plans to the point of care as a means of reducing avoidable readmissions. This sort of “decision support in the white space” is not a replacement for more traditional EHR-based features, but rather a complementary capability to bridge the gap between systems that exist both between, and within, organizations.
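To make the second point above more tangible, here is the promised matching sketch: a toy illustration of the kind of logic a master data hub applies when deciding whether “John Smith” and “Jonnie Smyth” are the same patient. Real MDM matching uses far richer techniques (phonetic keys, address standardization, probabilistic scoring, survivorship rules); the attributes, weights and threshold below are hypothetical:

```python
# Toy illustration of record matching; not a production MDM algorithm.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def likely_same_patient(rec_a: dict, rec_b: dict, threshold: float = 0.75) -> bool:
    """Combine a few weak signals (name similarity, birth date, ZIP) into a match score."""
    name_score = similarity(rec_a["name"], rec_b["name"])
    dob_score = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
    zip_score = 1.0 if rec_a["zip"] == rec_b["zip"] else 0.0
    score = 0.5 * name_score + 0.3 * dob_score + 0.2 * zip_score
    return score >= threshold

a = {"name": "John Smith",   "dob": "1961-04-02", "zip": "80903"}
b = {"name": "Jonnie Smyth", "dob": "1961-04-02", "zip": "80903"}
print(likely_same_patient(a, b))  # True with these toy weights
```

The value of doing this in a hub, rather than in each application, is that every downstream consumer gets the same answer to “who is this patient?”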

With this perspective, I will conclude my series of thoughts on National Health IT Week with the final observation that, all in all, I think we as healthcare IT professionals have had a pretty good week.

And I really can’t wait for what the next few years have in store for us!


The Five Days of National Health IT Week – Day 4

Here it is, day four of National Health IT Week, and there have been lots of interesting things to talk about and be excited for in the prospects of Health IT up to this point: EHRs capturing lots of great data; analytics and predictive modeling to discover insights we’ve not been able to have previously; and where the hype of big data meets reality. But having access to data and being able to do cool things with it to deliver higher-value care is only half the story – the other half is how our reimbursement system is also changing to align financial incentives with quality and value.

We’re seeing the early beginnings of value-based reimbursement changing behavior, with one of the more intriguing examples being the Centers for Medicare & Medicaid Services (CMS) Hospital Readmissions Reduction Program (HRRP). Under this program, CMS calculates a risk-adjusted “expected readmission rate” for each hospital in the country for congestive heart failure (CHF), acute myocardial infarction (AMI) and pneumonia. If a hospital’s actual readmission rate exceeds the expected rate, its Medicare reimbursements are penalized for the following year: by up to 1% in 2013, 2% in 2014, and 3% in 2015. Given that hospitals typically receive 50% or more of their revenue from CMS, these penalties rapidly add up to real money and are prompting meaningful action on the part of hospitals to understand the causes of readmissions and address them proactively. But what’s intriguing is to get beyond thinking of the HRRP as a penalty-imposing program and instead think of it as an outcomes-based reimbursement program.

What CMS has really done is effectively say that the payment for a CHF admission, for example, is not just for the individual admission, but rather a payment to keep the patient out of the hospital for the next 30 days. And not just keep them out of the hospital for CHF, but for any reason, since “readmission” is all-inclusive of almost any cause and not limited to the original reason for discharge. This requires some radical rethinking of how hospitals and health systems measure quality and value, since it brings to bear things they have traditionally had little control over, such as patient compliance with discharge instructions and adherence to best practices by physicians in the community. Even though it may not be directly under the hospital’s control, hospitals for the first time have a meaningful financial incentive to encourage patients and others to ensure appropriate follow-up care and activities are accomplished.
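To put a rough order of magnitude on the exposure described above, here is a toy calculation. It is not the actual CMS excess-readmission-ratio methodology – just simple arithmetic on a hypothetical hospital, using the penalty caps and the “roughly half of revenue from CMS” assumption from the paragraphs above:

```python
# Toy illustration of worst-case HRRP financial exposure; NOT the CMS formula.
MAX_PENALTY_BY_YEAR = {2013: 0.01, 2014: 0.02, 2015: 0.03}

def worst_case_penalty(total_revenue: float, medicare_share: float, year: int) -> float:
    """Upper bound on the annual penalty if a hospital draws the maximum reduction."""
    return total_revenue * medicare_share * MAX_PENALTY_BY_YEAR[year]

# Hypothetical hospital: $500M total revenue, half of it from CMS
for year in (2013, 2014, 2015):
    print(year, f"${worst_case_penalty(500_000_000, 0.50, year):,.0f}")
# 2013 $2,500,000
# 2014 $5,000,000
# 2015 $7,500,000
```

Even at the 2013 cap, that is the kind of number that gets a CFO’s attention.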

Tomorrow will bring a quick wrap-up and a pat on the back for all the tireless effort of an entire industry in the midst of unprecedented change.


The Five Days of National Health IT Week – Day 3

So it’s “hump day” of National Health IT Week, and we’ve already talked about how EHRs are capturing a treasure trove of rich data, and about the burgeoning enthusiasm for analytics and predictive modeling that is the natural consequence of having this data.

But what about the whole “big data” thing? I’ve been on the fence, less than enthusiastic about all the hand waving and bell ringing surrounding the big data movement in healthcare, simply because of our inability as an industry to do anything particularly useful with the “little data” we already have. We need look no further than all of the angst and heartfelt bickering over the very modest data requirements of Meaningful Use Stage 1 to validate this assessment of our industry’s readiness for “big data”. This isn’t to say that there have not been pockets of incredibly talented individuals applying extraordinary effort to do cool things with data, yielding some glimmers of hope for “big data”. But as an industry, we have not really done much with the data we already have to understand what works and what doesn’t, and to use that insight to change our behavior. And when it comes to data, I don’t think you can run before you can walk, and in my mind big data is Olympic-caliber running. It’s also important to consider my opinion only in the context of strongly structured, discrete data like lab results and coded clinical data entered into EHRs. This distinction is relevant because, when it comes to things like digital imaging, PACS and advanced visualization algorithms, the broader healthcare market has arguably been dealing with “big data” for more than a decade.

But I’m beginning to change my mind on big data in healthcare. And rather than walking before you run, big data may present an opportunity to leap ahead in deriving value from data in very focused areas, even before full competence in “small data” analytics has been achieved. The reason for my change in thinking is really related to (a) an evolving understanding of how big data technologies can be applied to existing data problems, and (b) compelling new sources of data, and potential solutions, that have never before been possible.

Big data isn’t just about doing analysis of Twitter feeds and Facebook posts (which encompass all three Vs of big data – volume, variety and velocity); it can also be about having the cost-effective processing horsepower to do much more sophisticated analytics on the clinical or billing data we already have, or the data that we’re going to have from our EHRs. Rather than testing clinical or financial models against a month’s worth of data, or a quarter’s worth, now we can test those same models against a year, or five years, or a decade’s worth of data. This same processing horsepower means analysis that might have taken days or weeks can now be done in minutes or hours, which makes the results that much more valuable in impacting clinical care and changing frontline staff behaviors. And to the extent these big data technologies can be adopted and applied to today’s data analytics needs by mere mortals, all the better. For example, Hadoop has been a centerpiece of the whole big data hype circus, and historically it has been a complex beast that could only be mastered by the most advanced and sophisticated IT shops. And who in their right mind, with all the other challenges facing healthcare IT (ICD-10 conversion, EHR implementation, HIPAA privacy audits, flat budgets, health information exchanges, etc.), wants to try something like that? But what we’re seeing is a maturing of the Hadoop technology stack, and a vendor ecosystem developing around the platform, with solutions that make it much more practical for healthcare organizations to try out some of these big data solutions. For example, Informatica has recently announced support for Hadoop such that any transformations or data quality rules created in Informatica can be run on Hadoop unchanged – taking advantage of the lower cost and higher performance of the Hadoop platform while avoiding much of the complexity and specialized skills that have traditionally been a huge barrier to adoption. One potential area where this sort of approach could be applied is crowdsourcing medical decisions – a topic I have previously written about that I think has very real implications for providers aspiring to become “learning healthcare organizations”, which you can read here.
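As a generic illustration of what that extra horsepower buys – this is plain PySpark against a hypothetical claims dataset, not Informatica’s product or any particular hospital’s schema – re-running a readmission analysis across a decade of discharges is the same few lines as running it across a single month:

```python
# Generic PySpark sketch on hypothetical claims data; paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("readmissions-by-year").getOrCreate()

# Assume one Parquet dataset spanning many years of inpatient claims, with columns:
# patient_id, admit_date, discharge_date, readmitted_30d (boolean).
claims = spark.read.parquet("hdfs:///warehouse/claims/inpatient/")  # hypothetical path

readmit_rates = (
    claims
    .withColumn("year", F.year("discharge_date"))
    .groupBy("year")
    .agg(
        F.avg(F.col("readmitted_30d").cast("double")).alias("readmission_rate"),
        F.count("*").alias("discharges"),
    )
    .orderBy("year")
)
readmit_rates.show()
```

The cluster does the heavy lifting; the analyst’s question stays the same whether the input is one month or ten years of data.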

There was also a good report from InformationWeek titled “2013 Healthcare IT Priorities” that observed a growing proliferation of data coming from mobile devices and personal medical monitors, which in my mind will inevitably hit all three Vs (volume, velocity and variety) that traditionally define big data. The report further states that we are rapidly heading towards a future where acquiring data ceases to be the problem, and figuring out what to do with it becomes the real challenge.

In this same vein, I also believe that some of the more consumer-oriented “big data things” have potential promise in healthcare. The creation of an individual insurance market is going to drive a tectonic shift in the perspective of payors as they reorient their sales and marketing from selling coverage in large chunks to employers to understanding and targeting the far more finicky individual consumer, who has a very different perspective on value and customer service than the traditional employer buyer. In this situation, all those social media feeds and the sentiment analysis they can reveal become very compelling, with a demonstrable ROI, as does old-fashioned analytics on big data such as web click-stream analysis to optimize the consumer experience on payor websites.

There is also clear potential in combining consumer data with clinical data to provide a more complete 360-degree view of the patient for providers – bringing in key information such as which over-the-counter drugs a patient may have bought over the last 30, 60 or 90 days that may have a marked adverse interaction with a prescription they are taking, or may just be clinically undesirable (such as someone with hypertension taking an OTC antihistamine, for example). Providing this insight to the provider at the time of the patient’s next visit – or better still, applying rules in near real time as soon as the data becomes available and alerting the patient’s care team even when no visit is scheduled – can really change the role of physicians to orchestrating a patient’s health and wellness rather than simply treating symptoms and disease during an office visit or hospital admission.
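A minimal sketch of the kind of rule this implies – purchase data checked against what is already known about the patient – might look like the following. The drug classes, conditions and messages are illustrative placeholders of my own and are not clinical guidance or any product’s rule engine:

```python
# Minimal rules sketch: flag OTC purchases that conflict with a patient's
# known conditions or active prescriptions. Pairs below are illustrative only.
OTC_CONDITION_FLAGS = {
    ("antihistamine", "hypertension"): "OTC antihistamine may be undesirable with hypertension",
}
OTC_RX_FLAGS = {
    ("nsaid", "warfarin"): "NSAID purchase with an active warfarin prescription",
}

def check_otc_purchase(otc_class: str, conditions: set[str], active_rx: set[str]) -> list[str]:
    """Return any alerts a care team should see for this purchase."""
    alerts = [msg for (otc, cond), msg in OTC_CONDITION_FLAGS.items()
              if otc == otc_class and cond in conditions]
    alerts += [msg for (otc, rx), msg in OTC_RX_FLAGS.items()
               if otc == otc_class and rx in active_rx]
    return alerts

# Example: hypertensive patient buys an OTC antihistamine
print(check_otc_purchase("antihistamine", {"hypertension"}, {"lisinopril"}))
```

The interesting engineering problem is not the rule itself but getting trustworthy, linked consumer and clinical data to it in near real time.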

Tomorrow we will move on to what is finally motivating healthcare providers to take a genuine interest in analytics and business intelligence: aligned financial incentives.
