Tag Archives: data

Counterparty Data and the Legal Entity Identifier (LEI) System

In the second of two videos, Peter Ku, Director of Financial Services Solutions Marketing, Informatica, talks about the latest trends regarding counterparty data and how the legal entity identifier (LEI) system will impact banks across the globe.

Specifically, he answers the following questions:

- What are the latest trends regarding counterparty information and how will the legal entity identifier system impact banks across the globe?

- How does Informatica help solve the challenges regarding counterparty information and help banks prepare for the new legal entity identifier system?


Also watch Peter’s first video (http://youtu.be/KvyDPzOTnUY) to learn about counterparty data and its challenges.

Posted in Financial Services

Know Thy Customer

There has been much discussion, particularly in the UK, about banks separating their investment and retail arms. The thinking behind this is that investment banking is much riskier, so by drawing a clear line between the two, consumers will be better protected if another financial crisis should hit. (more…)

Posted in Customer Services, Customers, Financial Services

Dating With Data: Part 4 In Hadoop Series

eHarmony, an online dating service, uses Hadoop processing and the Hive data warehouse for analytics to match singles based on each individual’s “29 Dimensions® of Compatibility”, per a June 2011 press release by eHarmony and one of its suppliers, SeaMicro. According to eHarmony, an average of 542 eHarmony members marry daily in the United States. (more…)

Posted in Big Data

The Holy Grail Of Data Quality – Linking Data Quality To Business Impact

“We have 20% duplicates in our data source.” This is how the conversation began. It was not that no one cared about the level of duplicates; it’s just that the topic of duplicate records did not get the business excited – they had many other priorities (and they were not building a single view of the customer).

The customer continued the discussion thread on how to make data quality relevant to each functional leader reporting to C-level executives. The starting point was the affirmation that the business really only cares about data quality when it impacts the processes that they own, e.g. the order process, invoice process, shipping process, credit process, lead generation process, compliance reporting process, etc. This means that data quality results need to be linked to the tangible goals of each business process owner to win them over as data advocates. (more…)
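As a rough illustration of that linkage, here is a minimal Python sketch that turns a raw duplicate rate into a figure a process owner can react to; the sample records, the per-mailing cost, and the campaign frequency are hypothetical placeholders, not figures from the conversation above.

```python
# Minimal sketch: translate a data quality metric (duplicate rate) into an
# estimated business impact for a specific process owner.
# The records and cost assumptions below are hypothetical.

from collections import Counter

customer_records = [
    {"id": 1, "email": "ann@example.com"},
    {"id": 2, "email": "bob@example.com"},
    {"id": 3, "email": "ann@example.com"},   # duplicate of record 1
    {"id": 4, "email": "cara@example.com"},
    {"id": 5, "email": "bob@example.com"},   # duplicate of record 2
]

def duplicate_rate(records, key):
    """Share of records that are redundant copies of an earlier record."""
    counts = Counter(r[key] for r in records)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(records)

COST_PER_DUPLICATE_MAILING = 1.50   # hypothetical cost of one wasted mailing
ANNUAL_MAILINGS = 12                # hypothetical campaign frequency

rate = duplicate_rate(customer_records, "email")
wasted_per_year = (rate * len(customer_records)
                   * COST_PER_DUPLICATE_MAILING * ANNUAL_MAILINGS)
print(f"Duplicate rate: {rate:.0%}; estimated annual mailing waste: ${wasted_per_year:.2f}")
```

Expressed this way, the same 20% duplicate figure stops being an abstract data issue and becomes a line item the owner of the marketing or order process can weigh against other priorities.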

Posted in Data Quality, Profiling, Scorecarding

Informatica And EMC – So What’s The Big Idea?

Last month, Informatica and EMC announced a strategic partnership at EMC’s annual user conference in Boston. This is a significant new relationship for both companies, which in itself is interesting. You would have thought that the company responsible for storing more data than just about anybody in the world and the company responsible for moving more data than anybody in the world would have come together many years ago. So why now? What’s different?

Virtualization changes everything. Customers have moved beyond virtualizing their infrastructure and their operating systems and are now trying to apply the same principles to their data. Whether we’re moving the data to the processing, or the processing to the data, it’s clear that where data physically lives has become increasingly irrelevant. Customers want data as a service and they don’t want to be hung up on the artificial boundaries created by applications, databases, schemas, or physical devices. (more…)

Posted in Application ILM, Application Retirement, Data Integration, Database Archiving, Master Data Management

Dear CEO: Information As A Differentiator

I enjoyed reading Jill Dyché’s recent blog on a BI team’s letter to the CEO of one of her pharmaceutical clients. According to Jill, it “outlined how much money the company could save by pushing out accurate physician spend figures; made the case for integrating R&D data; and outlined the strategic initiatives that would be BI-enabled. It was also specific about new resource needs, technology upgrade costs, and why they were part of a larger vision for an information-driven enterprise.” Her client led a meeting with the CEO, the CIO, and the VP of Sales and Marketing, leaning on the letter, and succeeded in getting a renewed commitment from the executive team, including a 30% budget increase.

The notion that information is truly the differentiator is permeating through to the executive ranks. Of course, your applications and infrastructure must run smoothly with optimized processes. Many organizations are at parity there. (more…)

Posted in Data Integration

The Importance of Data for State Spend Regulation

With so many state-level governments adopting legislation limiting or mandating disclosure of payments to physicians, spend compliance is now top of mind for many pharmaceutical companies. Keeping pace with the varied and often conflicting requirements across states has always been difficult. But as momentum for wide-ranging healthcare reform increases, talk among stakeholders in Washington and around the industry has heated up around creating standards for complying with and enforcing these types of requirements.

Predictably, the idea of national standards for physician spend regulation has both supporters and detractors. Supporters point to the effort required to stay abreast of ever-changing and varied regulation, and the IT cost to support reporting requirements. Detractors decry national regulation as a political power grab rather than a tool for effective social policy, and complain that the general public does not understand the cost of general research and development or the limited profit windows posed by short patent life.

These arguments came to a head last month as Massachusetts introduced the broadest physician-spend law yet. What’s new and different about the Commonwealth’s law? For starters, it’s the first legislation in the U.S. applying to medical device manufacturers as well as pharmaceutical companies. Secondly, the law seems to have teeth that have been missing in the laws of other states, providing for a $5,000 per-incident fine. Previously, the largest public settlement on record was only $10,000 total.

Given the degree of uncertainty about the future of physician-spend legislation, the only certain course of action is to build a reliable, integrated source of physician data that can easily cross-reference to various AP, expense reporting, ERP, and CTMS systems. While reporting requirements will continue to morph, putting in place a reliable data foundation will allow you to rapidly respond to these changes as they occur. There are certain ground rules to follow, however. A strong data foundation must plug equally well into your BI environment, where the bulk of regulatory reporting will likely occur, and into the operational systems that you use to alert personnel to spend limits.
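To make the cross-reference idea concrete, here is a minimal Python sketch of a physician master record mapped to the local identifiers used by the surrounding systems; all names, identifiers, and amounts are hypothetical, and a real foundation would live in an MDM hub rather than in application code.

```python
# Sketch of a physician cross-reference: one golden physician record mapped to
# the local identifiers used by AP, expense reporting, ERP, and CTMS systems.
# All identifiers and amounts are hypothetical.

from dataclasses import dataclass, field

@dataclass
class PhysicianMaster:
    master_id: str
    name: str
    npi: str                    # national provider identifier
    system_ids: dict = field(default_factory=dict)   # system name -> local id

xref = {
    "DR-000123": PhysicianMaster(
        master_id="DR-000123",
        name="Dr. Jane Smith",
        npi="1234567890",
        system_ids={"AP": "VEND-9911", "EXPENSE": "E-4410",
                    "ERP": "100045", "CTMS": "INV-77"},
    )
}

def total_spend(master_id: str, spend_by_system: dict) -> float:
    """Aggregate spend recorded under each system's local id for one physician."""
    ids = xref[master_id].system_ids
    return sum(spend_by_system.get(system, {}).get(local_id, 0.0)
               for system, local_id in ids.items())

# Hypothetical spend feeds keyed by each source system's own identifier.
feeds = {
    "AP":      {"VEND-9911": 1200.00},
    "EXPENSE": {"E-4410": 310.50},
    "CTMS":    {"INV-77": 5000.00},
}
print(total_spend("DR-000123", feeds))   # 6510.5
```

With a cross-reference like this in place, both the regulatory reports in the BI environment and the spend-limit alerts in the operational systems draw on the same physician identity.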

How to go about creating such a strong data foundation? It might not be quite as difficult as it sounds. Read on to discover how one pharma company is using master data management to get a handle on their physician spend management.

Posted in Master Data Management

Transforming MDM Data for Downstream Systems

Putting an MDM Hub in place can help make system integration easier and less costly.  The reason behind this is simple: a system integration strategy that relies on a single data model, to which all the systems involved refer, is much less complex than one that is based on point-to-point integration.  The Hub becomes a sort of Rosetta Stone, and the data model on which it’s based is the target of the system integration.  Getting data into the hub – transforming it, cleansing it, standardizing it – is a fundamental part of any Siperian implementation.  Getting the data out of the hub and into downstream systems is equally important in order for them to benefit from the corrected information and to allow business users to leverage the data operationally.
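As a minimal sketch of that single-data-model idea (not the Siperian implementation itself), the Python below maps records from two hypothetical source layouts into one canonical hub shape; with N systems this means N mappings into the hub rather than on the order of N² point-to-point mappings.

```python
# Sketch of the "Rosetta Stone" idea: each source system maps once into a
# canonical hub model instead of mapping point-to-point to every other system.
# The field names and source layouts are hypothetical.

def from_crm(rec: dict) -> dict:
    """Hypothetical CRM layout -> canonical hub model."""
    return {
        "party_id": rec["cust_no"],
        "full_name": f'{rec["first"]} {rec["last"]}'.strip(),
        "country_code": rec["ctry"].upper(),
    }

def from_billing(rec: dict) -> dict:
    """Hypothetical billing layout -> canonical hub model."""
    return {
        "party_id": rec["account_id"],
        "full_name": rec["account_name"].title(),
        "country_code": rec["country"][:2].upper(),
    }

hub_records = [
    from_crm({"cust_no": "C-100", "first": "Ann", "last": "Lee", "ctry": "us"}),
    from_billing({"account_id": "C-100", "account_name": "ann lee", "country": "USA"}),
]
print(hub_records)
# Both sources land in the same shape, ready for cleansing and matching in the hub.
```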
 
The biggest hurdle to overcome in getting golden copy data into the systems that require it is authoring and maintaining the transformation definitions.  Regardless of transport mechanism – batch, JMS, SOAP, etc. – the data will have to be transformed so that the format and values are usable by the receiving system.  If the number of receiving systems is large and heterogeneous – and in some cases there are hundreds or even a few thousand such systems – then creating the transformation definitions and keeping them up to date is a substantial task.  While there is no silver bullet that will magically solve the problem with a single click, there are some tools and techniques that can help decrease the effort and cost required:
 
Use a Transformation Discovery Tool: These tools work by first profiling the source and the target data stores (which means you have to have some data in your Hub, as well as in the destination).  After profiling, they look for what are called “binding conditions”.  A binding condition ties together a set of attributes in the source system with a set of attributes in the target system, with a high probability of representing the same information.  Once a binding condition is defined, the tool determines the logic that will transform the source data into the destination.  The output varies with the tool, but is usually either expressed in pseudo-code, or in SQL.  When the number of downstream systems is high, and especially if the data model is complex, using a tool to help define the “guts” of the transformations can save a tremendous amount of time and money, when compared to doing it by hand.
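A hedged sketch of what such a discovered binding condition and its generated transformation might look like, written here in Python rather than the SQL or pseudo-code a real discovery tool would emit; the attribute names are hypothetical.

```python
# Sketch of a "binding condition" and the transformation derived from it.
# A real discovery tool profiles both data stores and emits SQL or pseudo-code;
# this Python stand-in only illustrates the shape of the result.
# The attribute names are hypothetical.

# Binding condition: hub attributes (last_name, first_name) bind to the
# downstream attribute CUST_NM with high probability.
binding = {
    "source": ("last_name", "first_name"),
    "target": "CUST_NM",
    "confidence": 0.97,
}

def derived_transform(hub_row: dict) -> dict:
    """Transformation derived from the binding condition above:
    CUST_NM = upper(last_name) || ', ' || first_name."""
    last, first = (hub_row[attr] for attr in binding["source"])
    return {binding["target"]: f"{last.upper()}, {first}"}

print(derived_transform({"last_name": "Lee", "first_name": "Ann"}))
# {'CUST_NM': 'LEE, Ann'}
```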
 
Have a Single Transformation Platform: This may seem obvious, but a surprising number of system integration efforts end up being implemented piecemeal – each receiving system implements its own set of adapters in whatever language and using whatever application-specific tools it has on hand.  This has the extremely undesirable effect of scattering the transformation logic throughout an organization, which makes maintenance and management of the integration a nightmare.  To avoid this, keep all of the transformation in a single platform, with a single authoring environment, and preferably a single execution environment.  Not only will this greatly decrease the complexity and cost of maintaining the downstream data syndication, it will also provide the possibility of reusing transformation and validation logic whenever possible.
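Below is a minimal sketch of that idea: one registry of reusable transformation and validation rules applied per downstream target, instead of per-system adapters scattered across the organization. The rule and target names are hypothetical.

```python
# Sketch of a single transformation platform: shared rules registered once,
# then applied per downstream target. Targets and rules are hypothetical.

import re

def standardize_phone(value: str) -> str:
    """Shared rule: keep digits only."""
    return re.sub(r"\D", "", value)

def validate_country(value: str) -> str:
    """Shared rule: two-letter upper-case country codes only."""
    if not re.fullmatch(r"[A-Z]{2}", value):
        raise ValueError(f"bad country code: {value!r}")
    return value

# One place that records how each downstream system wants each hub attribute.
TARGET_RULES = {
    "ERP": {"phone": standardize_phone, "country": validate_country},
    "CRM": {"phone": standardize_phone},   # reuses the same shared rule
}

def syndicate(hub_row: dict, target: str) -> dict:
    """Apply the target's rules; pass untouched attributes through unchanged."""
    rules = TARGET_RULES[target]
    return {k: rules.get(k, lambda v: v)(v) for k, v in hub_row.items()}

golden = {"phone": "+1 (555) 010-9999", "country": "US"}
print(syndicate(golden, "ERP"))   # {'phone': '15550109999', 'country': 'US'}
print(syndicate(golden, "CRM"))   # {'phone': '15550109999', 'country': 'US'}
```

Because both targets call the same registered rules, a change to phone standardization is made once and flows to every downstream system that uses it.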

Once the golden copy arrives at its downstream destination, the operational system can leverage it through the normal application interfaces and processes. There are some who might ask if it is appropriate to update the downstream records, or if alternative business processes or interfaces should be used directly against the Hub itself. All of these are good questions and once again speak to the criticality of having a Hub that can provide these options, if and when business needs dictate. Transformation is a continuous process, not only for your data, but for your business as well.

Posted in Master Data Management

20 Years Old: The Web, Social Media And Our Reliance On “Data”

I was reading an article from CNET last week about the web. The title was “It was 20 years ago today: the web”. The crux of the article was the impact of that famous research report authored by Tim Berners-Lee, “Information Management: A Proposal”.

What amazes me is how far we have come…

Today the web is everywhere.  It has made the world flat – something that ancient civilizations believed, but something now being delivered electronically through the ether.  Combine that with the improvements in telecommunications and we have a world in which data is king.

It is the lifeblood of every organization and is pumped across networks, through supply chains and between organizations. Data rules: it is all-powerful, it is the currency of the 21st century, and it is the web that has released it to grow to its full potential.

(more…)

Posted in Data Integration