Category Archives: B2B Data Exchange

Data Obfuscation and Data Value – Can They Coexist?

Data is growing exponentially, and new technologies are at the root of the growth. With the advent of big data and machine data, enterprises have amassed amounts of data never before seen. Consider telecommunications companies: telcos have always collected large volumes of call data and customer data, but the advent of 4G services, combined with the explosion of the mobile internet, has created data volumes they have never seen before.

In response to the growth, organizations seek new ways to unlock the value of their data. Traditionally, data has been analyzed for a few key reasons. First, data was analyzed in order to identify ways to improve operational efficiency. Secondly, data was analyzed to identify opportunities to increase revenue.

As data expands, companies have found new uses for these growing data sets. Of late, organizations have started providing data to partners, who then sell the ‘intelligence’ they glean from within the data. Consider a coffee shop owner whose store doesn’t open until 8 AM. This owner would be interested in learning how many target customers (perhaps people aged 25 to 45) walk past the closed shop between 6 AM and 8 AM. If this number is high enough, it may make sense to open the store earlier.

As much as organizations prioritize the value of data, customers prioritize the privacy of data. If an organization loses a customer’s data, it results in several costs to the organization. These costs include:

  • Damage to the company’s reputation
  • A reduction of customer trust
  • Financial costs associated with the investigation of the loss
  • Possible governmental fines
  • Possible restitution costs

To guard against these risks, data that organizations provide to their partners must be obfuscated. This protects customer privacy. However, data that has been obfuscated is often of a lower value to the partner. For example, if the date of birth of those passing the coffee shop has been obfuscated, the store owner may not be able to determine if those passing by are potential customers. When data is obfuscated without consideration of the analysis that needs to be done, analysis results may not be correct.

There is a way to provide data privacy for the customer while simultaneously monetizing enterprise data. To do so, organizations must allow trusted partners to define masking generalizations. With sufficient data masking governance, it is indeed possible for data obfuscation and data value to coexist.

Currently, there is a great deal of research around ensuring that obfuscated data is both protected and useful. Techniques and algorithms like ‘k-Anonymity’ and ‘l-Diversity’ ensure that sensitive data is safe and secure. However, these techniques have not yet become mainstream. Once they do, the value of big data will be unlocked.
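
To make the coexistence concrete, here is a minimal sketch of a masking generalization with a simple k-anonymity check. Everything in it is hypothetical: the records, field names, ten-year age bands and reference date are invented for illustration, and real masking would be driven by the governance process described above. The idea is that exact dates of birth are coarsened into age bands, so a partner can still count target-aged passers-by while the data owner can measure how identifiable the shared set remains.

```python
from collections import Counter
from datetime import date

# Hypothetical records a data owner might share: date of birth is the
# sensitive quasi-identifier; location and hour are what the partner needs.
records = [
    {"dob": date(1985, 3, 14), "location": "Main St", "hour": 7},
    {"dob": date(1991, 7, 2), "location": "Main St", "hour": 7},
    {"dob": date(1978, 11, 30), "location": "Main St", "hour": 6},
    {"dob": date(1996, 1, 9), "location": "Main St", "hour": 7},
]

def age_band(dob, today=date(2014, 5, 1), width=10):
    """Generalize an exact date of birth into a coarse age band."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    low = (age // width) * width
    return "%d-%d" % (low, low + width - 1)

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs at least k times."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

# Obfuscate: drop the exact date of birth, keep only the generalized band.
masked = [{"age_band": age_band(r["dob"]), "location": r["location"], "hour": r["hour"]}
          for r in records]

# The partner can still answer the business question (target-aged passers-by
# between 6 AM and 8 AM) ...
target_bands = {"20-29", "30-39", "40-49"}
passers_by = sum(1 for r in masked if r["age_band"] in target_bands and 6 <= r["hour"] < 8)
print("potential customers before opening:", passers_by)

# ... while the data owner checks how identifiable the masked records are.
print("2-anonymous on (age_band, hour)?", is_k_anonymous(masked, ["age_band", "hour"], 2))
```

With only four invented rows the anonymity check fails, which is the point: the generalization rules and the value of k are exactly the kind of thing a data masking governance process would negotiate with the trusted partner before any data changes hands.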


Health Plans, Create Competitive Differentiation with Risk Adjustment

Exploring Risk Adjustment as a Source of Competitive Differentiation

Risk adjustment is a hot topic in healthcare. Today, I interviewed my colleague, Noreen Hurley, to learn more. Noreen, tell us about your experience with risk adjustment.

Before I joined Informatica, I worked for a health plan in Boston, where I managed several programs, including the CMS Five Star Quality Rating System and Risk Adjustment Redesign. We recognized the need for a robust diagnostic profile of our members in support of risk adjustment. However, because the information resides in multiple sources, gathering and connecting the data presented many challenges. I see an opportunity for health plans to transform risk adjustment.

As risk adjustment becomes an integral component of healthcare, I encourage health plans and ACOs to create a core competency around the development of diagnostic profiles. The profile is the source of reimbursement for an individual and the basis for clinical care management. Augmented with social and demographic data, it can create a roadmap for successfully engaging each member.

Why is risk adjustment important?

Risk adjustment is increasingly entrenched in the healthcare ecosystem. Originating in Medicare Advantage, it is now applicable to other areas. Risk adjustment is mission-critical for protecting financial viability and identifying a clinical baseline for members.

What are a few examples of the increasing importance of risk adjustment?

1) The Centers for Medicare and Medicaid Services (CMS) continues to increase its focus on risk adjustment, evaluating the value it provides to the federal government and to beneficiaries. CMS has questioned the efficacy of home assessments and challenged health plans to provide a value statement beyond the harvesting of diagnosis codes that results solely in revenue enhancement. Illustrating additional value has been a challenge; integrating data across the health plan will help address it and derive value.

2) Marketplace members will also require risk adjustment calculations. After the first three years, the three “R’s” will dwindle to one “R”: when Reinsurance and Risk Corridors end, we will be left with Risk Adjustment. To succeed with this new population, health plans need a clear strategy to obtain, analyze and process data. CMS processing delays make risk adjustment even more difficult, so a health plan’s ability to manage this information will be critical to success.

3) Dual eligibles, Medicaid members and ACOs also rely on risk adjustment for profitability and improved quality.

With an enhanced diagnostic profile, one that is accurate, complete and shared, I believe it is possible to enhance care, deliver appropriate reimbursements and provide coordinated care.

How can payers better enable risk adjustment?

  • Facilitate timely analysis of accurate data from a variety of sources, in any format.
  • Integrate and reconcile data from initial receipt through adjudication and submission.
  • Deliver clean and normalized data to business users.
  • Provide an aggregated view of master data about members, providers and the relationships between them to reveal insights and enable a differentiated level of service.
  • Apply natural language processing to capture insights otherwise trapped in text-based notes (a rough sketch of the idea follows this list).
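
The last point is worth making concrete. The sketch below is only a rough stand-in for real clinical NLP: it uses a hand-written regular expression and an invented note to show the shape of the task, pulling ICD-10-style diagnosis-code candidates out of free text so they could be reconciled against a member's claims-based profile. A production pipeline would rely on a proper clinical NLP engine and a maintained code dictionary rather than a regex.

```python
import re

# Hypothetical clinical note text; real notes would come from the EHR.
note = ("Patient seen for follow-up of type 2 diabetes (E11.9). "
        "Also notes chronic kidney disease stage 3, code N18.3, "
        "and a history of hypertension I10.")

# Crude pattern for ICD-10-CM-style codes: a letter, two digits, then an
# optional dot and up to four more characters.
ICD10_PATTERN = re.compile(r"\b[A-TV-Z][0-9]{2}(?:\.[0-9A-Z]{1,4})?\b")

def extract_candidate_codes(text):
    """Return diagnosis-code candidates found in a free-text note."""
    return ICD10_PATTERN.findall(text)

print(extract_candidate_codes(note))
# ['E11.9', 'N18.3', 'I10']
```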

With clean, safe and connected data, health plans can profile members and identify undocumented diagnoses. They will also be able to create reports identifying providers who would benefit from additional training and support on coding accuracy and completeness.

What will clean, safe and connected data allow?

  • Allow risk adjustment to become a core competency and source of differentiation.  Revenue impacts are expanding to lines of business representing larger and increasingly complex populations.
  • Educate, motivate and engage providers with accurate reporting.  Obtaining and acting on diagnostic data is best done when the member/patient is meeting with the caregiver.  Clear and trusted feedback to physicians will contribute to a strong partnership.
  • Improve patient care, reduce medical cost, increase quality ratings and engage members.

An Introduction to Big Data and Hadoop from Informatica University

A full house, lots of funny names and what does it all mean?

Cloudera, Appfluent and Informatica partnered today at Informatica World in Las Vegas to deliver a one-day training session, Introduction to Hadoop and Big Data. A technology overview, best practices, and how to get started were on the agenda. Of course, we needed to start off with a little history. Processing and computing were important in the old days too, and even then they were hard to do and very expensive.

Today it’s all about scalability. What Cloudera does is "spread the data and spread the processing," with Hadoop optimized for scanning lots of data. The Hadoop Distributed File System (HDFS) slices up the data: it takes one slice, then another. MapReduce is then used to spread the processing. How does spreading the data and the processing help us with scalability?

When we spread the data and the processing, we need a way to index the data. How do we do this? We add Gets and Puts: get a row, put a row. This is what lets us find a row of data easily. Processing millions of rows of data is more and more a reality for many businesses, and once we can find and process a row easily, we can focus on our data analysis.
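
As a minimal, hypothetical illustration of the "spread the processing" idea (this is not code from the session, and the file name and the local sort standing in for Hadoop's shuffle are assumptions), here is a word count written in the MapReduce style: the mapper only ever sees its slice of the input, and the reducer only ever sees one key's worth of mapper output.

```python
#!/usr/bin/env python
"""Toy word count in the MapReduce style, runnable without a cluster."""
import sys
from itertools import groupby

def mapper(lines):
    """Emit a (word, 1) pair for every word in the input slice."""
    for line in lines:
        for word in line.strip().split():
            yield word, 1

def reducer(pairs):
    """Sum the counts for each word; pairs must arrive sorted by word."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Simulate the shuffle-and-sort step with a local sort: map, sort by key, reduce.
    mapped = sorted(mapper(sys.stdin), key=lambda kv: kv[0])
    for word, total in reducer(mapped):
        print("%s\t%d" % (word, total))
```

Saved as, say, wordcount.py, it can be run locally with echo "spread the data spread the processing" | python wordcount.py; on a real cluster the same map and reduce logic would run against each data slice, with Hadoop handling the shuffle between them.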

Data analysis: what’s important to you and your business? Appfluent gives us the map to identify data and workloads to offload and archive to Hadoop. It helps us assess what is not necessary to load into the data warehouse. With the exponential growth in the volume and types of data, the data warehouse will soon cost too much unless we identify what to load and what to offload.

Informatica has the tools to help you process your data, tools that understand Hadoop and that you already use today. This helps you manage these volumes of data in a cost-effective way. Add to that the ability to reuse what you have already developed. Now that makes these new tools and technologies exciting.

In this Big Data and Hadoop session, #INFA14, you will learn:

  • Common terminologies used in Big Data
  • Technologies, tools, and use cases associated with Hadoop
  • How to identify and qualify the most appropriate jobs for Hadoop
  • Options and best practices for using Hadoop to improve processes and increase efficiency

Live action at Informatica World 2014, May 12, 9:00 am – 5:00 pm, with updates at:

Twitter: @INFAWorld


Data archiving – time for a spring clean?

The term “big data” has been bandied around so much in recent months that, arguably, it has lost a lot of meaning in the IT industry. Typically, IT teams have heard the phrase and know they need to be doing something, but that something isn’t being done. As IDC pointed out last year, there is a concerning shortage of trained big data technology experts, and failure to recognise the implications that not managing big data can have on the business is dangerous.

In today’s information economy, as increasingly digital consumers, customers, employees and social networkers, we’re handing over more and more personal information for businesses and third parties to collate, manage and analyse. On top of the growth in digital data, emerging trends such as cloud computing are having a huge impact on the amount of information businesses are required to handle and store on behalf of their customers. Furthermore, it’s not just the amount of information that’s spiralling out of control: it’s also the way in which it is structured and used. There has been a dramatic rise in unstructured data, such as photos, videos and social media, which presents businesses with new challenges as to how to collate, handle and analyse it. As a result, information is growing exponentially; experts now predict a staggering 4,300% increase in annual data generation by 2020. Unless businesses put policies in place to manage this wealth of information, it will become worthless, and the often extortionate cost of storing it will instead end up having a huge impact on the business’s bottom line.

Maxed out data centres

Many businesses have limited resources to invest in physical servers and storage, so they are increasingly looking to data centres to store their information. As a result, data centres across Europe are quickly filling up. European data retention regulations dictate that information is generally stored for longer periods than in other regions such as the US, so businesses across Europe have to wait a very long time before they can archive their data. For instance, under EU law, telecommunications service and network providers are obliged to retain certain categories of data for a specific period of time (typically between six months and two years) and to make that information available to law enforcement where needed. With this in mind, it’s no surprise that investment in high-performance storage capacity has become a key priority for many.

Time for a clear out

So how can organisations deal with these storage issues? They can upgrade or replace their servers, parting with a lot of capital expenditure to bring in more processing power or more memory. An alternative is to “spring clean” their information. Smart partitioning allows businesses to spend just one tenth of the amount required to purchase new servers and storage capacity, and to refocus how they organise their information. With smart partitioning capabilities, businesses can get the benefits of archiving even for information that is not yet eligible for archiving under EU retention regulations. Furthermore, application retirement frees up floor space, drives modernisation initiatives, allows mainframe systems and older platforms to be replaced, and lets legacy data be migrated to virtual archives. Before IT professionals go out and buy big data systems, they need to spring clean their information and make room for big data.
Poor economic conditions across Europe have stifled innovation for a lot of organisations, as they have been forced to focus on staying alive rather than putting investment into R&D to help improve operational efficiencies. They are, therefore, looking for ways to squeeze more out of their already shrinking budgets. The likes of smart partitioning and application retirement offer businesses a real solution to the growing big data conundrum. So maybe it’s time you got your feather duster out, and gave your information a good clean out this spring?


When was Your Last B2B Integration Health Check?

If you haven’t updated your B2B integration capabilities in the past five years, are you at risk of being left behind? This is the age of superior customer experience and rapid time-to-value, so speedy customer on-boarding and support for specialized integration services mean the difference between winning and losing business. A health check starts with asking some simple questions: (more…)


Bring The Outside In: Why Integrating External Data Sources Should Be Your Next Data Integration Project

We recently had Ted Friedman, VP Distinguished Analyst at analyst firm Gartner, speak about what companies can do to extend their internal data integration strategies to include integrating external data sources. If the next generation of your DI projects includes inter-enterprise data sources, you’re in good company: he mentioned that 28% of the time, data integration tools are being used for inter-enterprise integration, which is almost the same rate as Master Data Management (MDM) and Data Services use cases. Ted also predicted the inter-enterprise use case will continue to grow as more integration projects include data outside the firewall. (more…)


Manual Processes Got You Down? 4 Steps to Small Partner Enablement

In recent Aberdeen research, 95% of respondents (out of 122 responses) reported relying on some level of manual processing to integrate external data sources. Manual processing to integrate external data is time-consuming, expensive and error-prone, so why do so many do it? Well, they often have little choice. If you look deeper, most of the time these data exchanges are with small partners, and small partner enablement is a significant challenge for most organizations. For the most part, (more…)


Avnet Story: How to Integrate 100% of B2B Data Sources

The other week we had Ayman Taha, Director of IT for Enterprise Solutions Integration at Avnet, in to talk about how he automated the processing of unstructured, non-traditional B2B exchanges with external partners. Avnet is a large distributor of electronic components with a diverse ecosystem of customers and suppliers. Their B2B infrastructure reflects the complexity of their environment, but despite a sophisticated and mature EDI infrastructure, they were still manually re-keying invoices and product updates from hundreds of spreadsheets and PDFs received from partners. This was because many of their smaller customers and partners could not send or receive EDI messages. (more…)


How Was it for You? The Future of the Quality Experience

One major outcome of the big data debate, especially in the telco industry, is the sudden elevation of the focus on customer experience, through QoE (Quality of Experience) and CEM (Customer Experience Management). With new and emerging technologies such as Near Field Communication (NFC), Machine to Machine (M2M) and mobile social media apps hitting the news every day like a reality star’s socializing antics, we are all fascinated by how much organisations either know, can find out or can deduce about our lives: what we like and dislike, how much we may be worth to those organisations, what we already own, and even where we physically are or will be in the next few minutes. All the minutiae of our lives and personalities laid bare to be pawed over, analysed and used to control us and eventually sell us yet more ‘stuff’. (more…)
