Tag Archives: Data Quality
The Physician Payments Sunshine Act shines a spotlight on the disorganized state of physician information at most pharmaceutical and medical device manufacturers: it is scattered across systems and often incomplete, inaccurate, and inconsistent.
According to the recent Wall Street Journal article Doctors Face New Scrutiny over Gifts, “Drug companies collectively pay hundreds of millions of dollars in fees and gifts to doctors every year. In 2012, Pfizer Inc., the biggest drug maker by sales, paid $173.2 million to U.S. health-care professionals.”
The Risks of Creating Reports with Inaccurate Physician Information
There are serious risks of filing inaccurate reports. Just imagine dealing with:
- An angry call from a physician who received a $25 meal that was inaccurately reported as $250, or who was reported as receiving a gift that actually went to someone with a similar name.
- Hefty fines and increased scrutiny from the Centers for Medicare and Medicaid Services (CMS). Fines range from $1,000 to $10,000 per transaction, with a maximum penalty of $1.15 million.
- Negative media attention. Reports will be available for anyone to access on a publicly accessible website.
How prepared are manufacturers to track and report physician payment information?
One of the major obstacles is getting a complete picture of the total payments made to one physician. Manufacturers need to know if Dr. Sriram Mennon and Dr. Sri Menon are one and the same.
On top of that, they need to understand the complicated connections between Dr. Sriram Menon, sales representatives’ expense report spreadsheets (T&E), marketing and R&D expenses, event data, and accounts payable data.
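To make the matching problem concrete, here is a minimal sketch using Python's standard-library difflib. The names are the hypothetical ones above, and real identity-resolution tools use far more sophisticated probabilistic and phonetic matching:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two normalized names."""
    norm = lambda s: " ".join(s.lower().replace(".", "").split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# Two spellings of (possibly) the same physician
score = name_similarity("Dr. Sriram Mennon", "Dr. Sri Menon")

# A score above a tuned threshold flags the pair for review or auto-merge
print(f"similarity: {score:.2f}")
```

A simple ratio like this is only a starting point; production matching typically combines several similarity measures with rules about titles, initials, and addresses.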
3 Steps to Ensure Physician Information is Accurate
In recent years, some pharmaceutical and medical device manufacturers have had to respond to "Sunshine Act"-type laws in states such as California and Massachusetts. To simplify and automate compliance, and to ensure physician payment reports are filed correctly and on time, they use an Aggregate Spend Repository or Physician Spend Management solution.
They also use these solutions to proactively track and review physician payments on a regular basis to ensure mandated thresholds are met before reports are due. Aggregate Spend Repository and Physician Spend Management solutions rely on a foundation of data integration, data quality, and master data management (MDM) software to better manage physician information.
For manufacturers who want to avoid losing valuable physician relationships, paying hefty fines, drawing increased CMS scrutiny, or attracting negative media attention, here are three steps to ensure accurate physician information:
- Bring all your scattered physician information, including identifiers, addresses and specialties into a central place to fix incorrect, missing or inconsistent information and uniquely identify each physician.
- Identify connections between physicians and the hospitals and clinics where they work to help aggregate accurate payment information for each physician.
- Standardize transaction information so it’s easy to identify the purpose of payments and related products and link transaction information to physician information.
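The steps above can be sketched in miniature. This hypothetical example keys scattered records on a unique identifier (here, an NPI) and merges the best non-empty values into a single "golden record"; all names and values are invented:

```python
# Hypothetical records pulled from T&E spreadsheets, event data, and A/P systems
sources = [
    {"name": "Sriram Menon", "npi": "1234567890", "specialty": "Cardiology", "city": "Denver"},
    {"name": "Sri Menon",    "npi": "1234567890", "specialty": None,         "city": "Denver"},
    {"name": "A. Jones",     "npi": "9876543210", "specialty": "Oncology",   "city": None},
]

def build_master(records):
    """Key records on a unique identifier (NPI) and merge non-empty fields."""
    master = {}
    for rec in records:
        golden = master.setdefault(rec["npi"], {})
        for field, value in rec.items():
            if value and not golden.get(field):   # survivorship rule: first non-empty wins
                golden[field] = value
    return master

master = build_master(sources)
print(len(master))  # two unique physicians remain after the merge
```

Real MDM survivorship rules are richer (source trust scores, recency, stewardship review), but the principle is the same: one consolidated record per physician.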
Physicians Will Review Reports for Accuracy in January 2014
In January 2014, after physicians review the federally mandated financial disclosures, they may question the accuracy of reported payments. Within two months manufacturers will need to fix any discrepancies and file their Sunshine Act reports, which will become part of a permanent archive. Time is precious for those companies who haven’t built an Aggregate Spend Repository or Physician Spend Management solution to drive their Sunshine Act compliance reports.
If you work for one of the pharmaceutical or medical device manufacturing companies already using an Aggregate Spend Repository or Physician Spend Management solution, please share your tips and tricks with others who are behind.
Tick tock, tick tock….
Last week I described how Informatica Identity Resolution (IIR) can be used to match data from different lists or databases even when the data includes typos, translation mistakes, transcription errors, invalid abbreviations, and other errors. IIR has a wide range of use cases. Here are a few.
Just in time for Halloween, I’m sharing a scary story. Warning: this is a true story. You may wonder:
- Could this happen to me?
- Can this situation be avoided?
- How can I prevent this from happening to me?
Last summer, the worst wildfire in Colorado history burned hundreds of acres, destroyed 360 homes, killed two people, and forced 38,000 people to evacuate the area.
Unfortunately, it was during the Colorado wildfire that a large integrated healthcare provider (who shall remain nameless), with hospitals, doctors, healthcare providers, and employees located throughout the United States, realized they had a problem. They couldn't respond to the disaster in real time by mobilizing their workforce quickly. They struggled to identify, contact, and communicate with the doctors, healthcare providers, and employees in the disaster area, whether to warn them not to go to the hospital or to redirect them to alternative sites where they could help.
This healthcare provider’s inability to respond to this disaster in real time was an “Aha” moment. What was holding them back was a major information problem. Because their employee information was scattered across hundreds of systems, they couldn’t pull a single, comprehensive and accurate list of doctors, healthcare providers and employees in the disaster area. They didn’t know which employees needed to be evacuated or which could be sent to assist people in other locations. So, they had to email everyone in the company.
The good news is that we’re in the process of helping them create and maintain a central location called an “employee master” built on our data integration, data quality, and master data management (MDM) software. This will be their “go-to” place for an up-to-date, complete and accurate list of employees and their contact information, such as work email, phone, pager (doctors still use them), home phone and personal email as well as their location, so they know exactly who is working where and how best to contact them.
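As an illustration only (the names, sites, and fields below are invented), the kind of query an employee master enables might look like this:

```python
# Hypothetical consolidated employee-master records
employee_master = [
    {"name": "Dr. Lee",   "role": "physician", "site": "Colorado Springs",
     "contacts": {"work_email": "lee@example.org", "pager": "555-0101"}},
    {"name": "Dr. Patel", "role": "physician", "site": "Seattle",
     "contacts": {"work_email": "patel@example.org", "cell": "555-0102"}},
]

def staff_in_area(master, affected_sites):
    """Return everyone working at a site in the disaster area, with contact info."""
    return [e for e in master if e["site"] in affected_sites]

to_notify = staff_in_area(employee_master, {"Colorado Springs"})
print([e["name"] for e in to_notify])  # only staff at affected sites, not the whole company
```

The point is not the three-line filter but what makes it possible: one accurate, consolidated list instead of hundreds of scattered systems.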
This healthcare provider will no longer be held back by an information problem. In three months, they’ll be able to respond to disasters in real time by mobilizing their workforce quickly.
An interesting side note: Immediately before our Informatica team of experts arrived to talk to this healthcare provider about how we can help them, there was a power outage in the building. They struggled to alert the employees who were impacted. So our team personally experienced the pain of this organization’s employee information problem.
When disaster strikes, will you be ready to respond in real time? Or do you have an information problem that could hold you back from mobilizing your own employees?
I want your opinion. Are you interested in more scary stories? Let me know in the comments below. I’m thinking about making this a regular series.
When I talk to customers about dealing with poor data quality, I consistently hear something like, "We know we have data quality problems, but we can't get the business to help take ownership and do something about it." I think that this is taking the easy way out. Throwing your hands up in the air doesn't make change happen – it only prolongs the pain. If you want to effect positive change in data quality and are looking for ways to engage the business, join Barbara Latulippe, Director of Enterprise Information Management at EMC, and Kristen Kokie, VP of IT Enterprise Strategic Services at Informatica, for our webinar on Thursday, October 24th, to hear how they have dealt with data quality over their combined 40+ years in IT.
Now, understandably, tackling data quality problems is no small undertaking, and it isn't easy. In many instances, the reason organizations choose to do nothing about data quality is that bad data has been present for so long that manual workarounds have become ingrained in the business processes that consume the data. In these cases, changing the way people do things becomes the largest obstacle to dealing with the root cause of the issues. But that is also where you will find the costs associated with bad data: lost productivity, ineffective decision making, missed opportunities, and so on.
As discussed in this previous webinar (replay linked at the bottom of the page), successfully dealing with poor data quality takes initiative, and it takes communication. IT departments are the engineers of the business: they are the ones who understand processes and workflows; they are the ones who build the integration paths between applications and systems. Even if they don't own the data, they do end up owning the data-driven business processes that consume it. As such, IT is uniquely positioned to provide customized suggestions based on insight from many previous interactions with the data.
Bring facts to the table when talking to the business. As the people who interact with data daily, IT is in a position to measure and monitor data quality and to identify key data quality metrics; data quality scorecards and dashboards can shine a light on bad data and relate it directly to the business via downstream workflows and business processes. Armed with hard facts about the impact on specific business processes, a business user has an easier time putting a dollar value on bad data. Here are some helpful resources where you can start to build your case for improved data quality. With these tools and insights, IT can start to effect change.
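As a small sketch of what such metrics might look like, here is a tiny scorecard over invented CRM records, measuring per-field completeness and a simple email-validity rate:

```python
import re

# Hypothetical customer records as they might sit in a CRM extract
records = [
    {"email": "ann@example.com", "phone": "555-0100"},
    {"email": "not-an-email",    "phone": "555-0101"},
    {"email": None,              "phone": None},
]

def scorecard(rows):
    """Per-field completeness and a simple email-validity rate, both 0..1."""
    total = len(rows)
    complete = {f: sum(1 for r in rows if r[f]) / total for f in ("email", "phone")}
    valid_email = sum(1 for r in rows
                      if r["email"] and re.match(r"[^@]+@[^@]+\.[^@]+$", r["email"])) / total
    return {"completeness": complete, "email_validity": valid_email}

metrics = scorecard(records)
print(metrics)  # completeness roughly 0.67 per field, email validity roughly 0.33
```

Numbers like these, tracked over time per business process, are exactly the "hard facts" that help the business attach a dollar value to bad data.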
Data is becoming the lifeblood of organizations and IT organizations have a huge opportunity to get closer to the business by really knowing the data of the business. While data quality invariably involves technological intervention, it is more so a process and change management issue that ends up being critical to success. The easier it is to tie bad data to specific business processes, the more constructive the conversation can be with the Business.
Even in “good” data there is a lot of garbage. Take a person’s name, for example: John could also be spelled Jon or Von (I have a high school sports trophy to prove it), and Schmidt could become Schmitt or Smith. Human beings entering data make errors in spelling, phonetics, and keypunching. We also have to deal with variations in compound and account names, abbreviations, nicknames, prefix and suffix variations, foreign names, and missing elements. As long as humans are involved in entering data, there will be a significant amount of garbage in any database. So how do we turn this gibberish into gems of information?
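One classic technique for catching these variations is phonetic matching. The sketch below implements American Soundex, which assigns the same code to similar-sounding names, so Jon matches John and Schmidt matches Smith:

```python
def soundex(name: str) -> str:
    """Classic American Soundex: letter + three digits; similar sounds share a code."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    out, prev = name[0].upper(), codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":                  # h and w do not separate duplicate codes
            continue
        code = codes.get(ch, "")
        if code and code != prev:       # skip repeats of the same code
            out += code
        prev = code                     # vowels (empty code) reset the previous code
    return (out + "000")[:4]            # pad or truncate to four characters

print(soundex("John"), soundex("Jon"))       # both J500
print(soundex("Schmidt"), soundex("Smith"))  # both S530
```

Soundex is crude by modern standards (tools like IIR use far stronger algorithms), but it illustrates the idea: compare names by how they sound, not how they are spelled.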
Do you know how good your multichannel data is? This blog covers four business objectives for accelerating multichannel commerce, the product data quality needed to deliver on each, and a summary of questions to ask when establishing your strategy. These questions help ecommerce managers, category managers, and marketers at retailers, distributors, and brand manufacturers get product and customer data right when establishing a multichannel strategy.
The Multichannel Challenge: Availability of Relevant Information
At every customer touch point, the ready availability of product information has a profound effect on buying decisions. If your customers can’t find what they’re shopping for, don’t understand how well your product meets their needs, or aren’t confident in their choice, they won’t complete their purchase.
When customers are researching or actively shopping for products online, research says 40 is the magic number:
- 40% of buyers intend to return their purchase at the time they order it.
- 40% order multiple versions of a product.
- 40% of all fashion product returns are the result of poor product information (consumer electronics: 15.3%; sources: Trusted Shops, 2012; Internet World Business, 7.1.2013).
All the high-quality product data in the world is useless if an organization cannot leverage that data for quicker time to market, improved e-commerce performance, and greater customer satisfaction.
Four Business Objectives When Accelerating Multichannel Commerce
The white paper presents four common use cases that illustrate typical business objectives within a multichannel commerce strategy. When looking into your product information, here is a list of questions you might consider.
1. Increasing conversions and lowering return rates by ensuring that customers can access product information in an easy-to-consume form.
- Where is the flawed content coming from?
- What tools and incentives can we provide for suppliers to maintain high-quality content?
- Which data quality processes should be automated first?
- Do we need a bespoke data model to fit our requirements?
- Can we effectively use industry standards for communicating with suppliers (such as GS1 or eClass)?
2. Lowering manual processing costs by merging the best product content from multiple suppliers.
- How many product catalogs do we have and what are the processes that slow us down?
- Who is responsible for the quality of the product information?
- How can we define and enforce objective, measurable policies?
- Which supplier has the best descriptions, translations for a given language, or the highest-quality images and videos?
- How do we collaborate with our large and small suppliers to achieve the best data quality?
3. Growing margins through “long tail” merchandising of a broader assortment of products.
- Can we automate product classification?
- Which taxonomy will work best for us?
- Do all stakeholders have visibility of data quality metrics and trends?
- How can we leverage information across all channels and customer touch points, not only ecommerce?
4. Increasing customer satisfaction through more consistent information and corporate identity across sales channels.
- How should we connect customer and product information to provide personalized marketing?
- How can we leverage supplier and location data for regional marketing?
- How do we enable crowd sourcing of comments, reviews and user images?
- What information do internal and external users need to access in real time?
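Several of the questions above, such as which data quality processes to automate first, lend themselves to simple automated checks. Here is a hypothetical sketch that flags product records whose content is too thin to publish on a channel; the fields and records are invented:

```python
# Hypothetical supplier product records
products = [
    {"sku": "A-100", "title": "Red Running Shoe", "description": "Lightweight mesh upper.",
     "image_url": "https://example.com/a100.jpg", "category": "Footwear"},
    {"sku": "A-101", "title": "", "description": None, "image_url": None, "category": None},
]

REQUIRED = ("title", "description", "image_url", "category")

def audit(items):
    """Flag products whose content is missing fields required for publishing."""
    issues = {}
    for p in items:
        missing = [f for f in REQUIRED if not p.get(f)]
        if missing:
            issues[p["sku"]] = missing
    return issues

print(audit(products))  # {'A-101': ['title', 'description', 'image_url', 'category']}
```

A report like this, broken down by supplier, also answers the earlier questions about where flawed content is coming from and who is responsible for fixing it.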
Find more information with the complete white paper on multichannel commerce and data quality.
If you haven’t already, check out the Potential at Work for Information Leaders site. We’ve just posted three great new articles designed to help you be more successful:
- “Driving value without locking down your data” Securing your data doesn’t mean inhibiting its use – far from it. Did you know that effective data masking practices allow information leaders to optimize the value data delivers to the organization while ensuring its security? Some forward-thinking information leaders are doing this and getting great results.
- “How fresh is your data?” Simply delivering data is not good enough anymore. You must get it to the right people at the right time while it is still fresh enough to be useful. Find out how to do it right.
- “Turn an application data migration initiative into a data governance pilot” A data migration effort can accomplish so much more than simply transferring data. Think about using it as an opportunity to improve the quality of existing data and apply new, higher standards to the information powering your organization.
Don’t miss out on topics that are key to your success. Please join the Potential at Work for Information Leaders community today. Available in nine languages, this site will continue to feature fresh, new ideas to promote the value of information management from a variety of top technology leaders. Sign up now!
In recent conversations regarding solutions to implement for data privacy, our Dynamic Data Masking team put together the following table to highlight the differences between encryption / tokenization and Dynamic Data Masking (DDM). Best practices dictate that both should be implemented in an enterprise for the most comprehensive and complete data security strategy. For the purpose of this blog, here are a few definitions:
Dynamic Data Masking (DDM) protects sensitive data when it is retrieved based on policy without requiring the data to be altered when it is stored persistently. Authorized users will see true data, unauthorized users will see masked values in the application. No coding is required in the source application.
Encryption / tokenization protects sensitive data by altering its values when stored persistently while being able to decrypt and present the original values when requested by authorized users. The user is validated by a separate service which then provides a decryption key. Unauthorized users will only see the encrypted values. In many cases, applications need to be altered requiring development work.
| Scenario | Encryption / Tokenization | Dynamic Data Masking (DDM) |
|---|---|---|
| Business users access PII | Business users work with actual SSNs and personal values in the clear (not with tokenized values). Because the data is tokenized in the database, it must be de-tokenized every time users access it, which requires changing the application source code (imposing cost and risk) and causes a performance penalty. For example, to retrieve information on a client with SSN ‘987-65-4329’, the application must de-tokenize the entire tokenized SSN column to identify the correct client record, a costly operation. This is why implementation scope is limited. | Because DDM does not change the data in the database, but only masks it when accessed by unauthorized users, authorized users experience no performance hit and no application source-code changes are required. For example, if an authorized user retrieves information on a client with SSN ‘987-65-4329’, the request passes through DDM untouched; since the stored SSN is unchanged, there is no performance penalty. If an unauthorized user issues the same request, DDM masks the SQL request, causing the sensitive results (e.g., name, address, credit card, age) to be masked, hidden, or completely blocked. |
| Privileged infrastructure DBAs with access to the database server files | Personally Identifiable Information (PII) stored in the database files is tokenized, ensuring that the few administrators with uncontrolled access to the database servers cannot see it. | PII stored in the database files remains in the clear. The few administrators with uncontrolled access to the database servers can potentially access it. |
| Production support, application developers, DBAs, consultants, outsourced and offshore teams | These users have application super-user privileges, so the tokenization solution treats them as authorized, and they can access PII in the clear. | DDM identifies these users as unauthorized, so the PII they request is masked, hidden, or blocked. |
| Data warehouse protection | Implementing tokenization on data warehouses requires tedious database changes and causes a performance penalty: (1) loading or reporting on millions of PII records requires tokenizing or de-tokenizing each record; (2) running a report with a condition on a tokenized value (e.g., SSN LIKE ‘%333’) forces de-tokenization of the entire column. Massive database configuration changes are required to use the tokenization API, creating and maintaining hundreds of views. | No performance penalty, and no need to change reports or databases or to create views. |
Combining both DDM and encryption/tokenization presents an opportunity to deliver complete data privacy without the need to alter the application or write any code.
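To make the contrast concrete, here is a toy sketch (not Informatica's implementation; the data, roles, and functions are all invented) of the two approaches: DDM-style masking leaves stored data unchanged and masks results per request, while tokenization replaces the stored value itself:

```python
import hashlib

# Mock "database" row; names and values are illustrative only
db = {"client_1": {"name": "Ann Smith", "ssn": "987-65-4329"}}

def ddm_fetch(row_id: str, authorized: bool) -> dict:
    """DDM-style: data at rest is unchanged; the result is masked per policy."""
    row = dict(db[row_id])
    if not authorized:
        row["ssn"] = "XXX-XX-" + row["ssn"][-4:]   # mask on the way out
    return row

def tokenize(value: str) -> str:
    """Tokenization-style: the stored value itself is replaced by a token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

print(ddm_fetch("client_1", authorized=True)["ssn"])    # 987-65-4329
print(ddm_fetch("client_1", authorized=False)["ssn"])   # XXX-XX-4329
print(tokenize("987-65-4329"))                          # stored form no longer resembles an SSN
```

Note how the stored row never changes under the DDM-style path, which is why there is no per-query de-tokenization cost; the tokenized value, by contrast, must be reversed by a separate service before an authorized user can see the original.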
Informatica works with its encryption and tokenization partners to deliver comprehensive data privacy protection in packaged applications, data warehouses and Big Data platforms such as Hadoop.
During Informatica World in early June, we were excited to announce our new Potential at Work Community. You can read Jakki Geiger’s blog introducing the Community to learn more about the goals for this great resource.