Tag Archives: CDI

MDM Enables Basel II Compliance

In the wake of the financial sector meltdown of late 2008 and the ensuing broader economic downturn, the banking industry is sure to face a far more stringent regulatory environment in the years ahead. On that score, no compliance challenge is more important right now than Basel II.

In 2005 the Federal Reserve Bank along with other U.S. banking regulators outlined an implementation transition period running from 2008 to 2011. To comply with Basel II requirements in the coming years, institutions need to develop a series of capabilities around data visibility and reporting. Namely, banks must be able to:

• Aggregate credit exposure at multiple levels, including legal entity, counterparty (or party to a contract) and business unit.
• Uniquely identify counterparties at the legal entity level and rate them using the banks’ internally developed rating models.
• Identify and link all credit exposures to individual counterparties, and then aggregate this information based on legal entity hierarchy.

Master data management (MDM) can set banks up to cope nimbly with the Basel II issues they face today. For instance, to comply effectively with Basel II requirements, banks need to accurately aggregate counterparties so that all credit exposures can be linked to already identified counterparties. MDM makes it easy to create these linkages. Good risk analysis depends on good-quality data and on the availability of relevant counterparty data for effective analysis and modeling. Advanced MDM solutions provide a framework to capture and maintain data quality business rules, so counterparties can be uniquely identified at the legal entity level, and hierarchies or families of legal entities can be identified so that risk exposures can be rolled up to the parent. Read how one financial services company complies with Basel II.
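
To make the rollup requirement concrete, here is a minimal Python sketch of aggregating credit exposures by counterparty, business unit and ultimate parent, assuming a simple dictionary-based legal-entity hierarchy. The entity names, fields and amounts are invented for illustration and do not come from any particular MDM product.

```python
from collections import defaultdict

# Each counterparty is uniquely identified at the legal-entity level and,
# where applicable, points at its parent legal entity (illustrative data only).
parents = {
    "ACME-UK": "ACME-HOLDINGS",
    "ACME-US": "ACME-HOLDINGS",
    "ACME-HOLDINGS": None,
}

# Individual credit exposures linked to identified counterparties.
exposures = [
    {"counterparty": "ACME-UK", "business_unit": "Corporate Lending", "amount": 25_000_000},
    {"counterparty": "ACME-US", "business_unit": "Trade Finance", "amount": 10_000_000},
    {"counterparty": "ACME-HOLDINGS", "business_unit": "Derivatives", "amount": 5_000_000},
]

def ultimate_parent(entity: str) -> str:
    """Walk the legal-entity hierarchy up to the top-level parent."""
    while parents.get(entity):
        entity = parents[entity]
    return entity

# Aggregate exposure at multiple levels: counterparty, business unit, parent legal entity.
by_counterparty = defaultdict(float)
by_business_unit = defaultdict(float)
by_parent = defaultdict(float)

for e in exposures:
    by_counterparty[e["counterparty"]] += e["amount"]
    by_business_unit[e["business_unit"]] += e["amount"]
    by_parent[ultimate_parent(e["counterparty"])] += e["amount"]

print(dict(by_parent))  # {'ACME-HOLDINGS': 40000000.0}
```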


How to Solve the Reference Data Problem using MDM (Part 2 of 2)

In my first post on the subject, I began exploring the underlying reasons why reference data management is so important for effective sales strategies and new business efforts. In this second post, let's take a look at some real-world scenarios. Consider a setting in which a group of marketing analysts from a retail firm wants to run a report to discover the overlap of customers across various channels such as in-store sales (POS), web sales and mail order sales. The methodology would require populating a data mart with information from each of these channels, perhaps augmented with information from the company's account systems and customer relationship management system. Before the business intelligence applications can slice and dice the data from the various systems, the reference data conflicts between the sources need to be resolved. It is impossible to make apples-to-apples comparisons when your business intelligence application is working with fundamentally dissimilar data sets.

The issues surrounding system diversity and proliferation make reference data a concern in operational settings as well. Imagine this same retailer plans to open a series of new stores. Before the doors can open, the retailer needs to create a whole new set of reference data for the new locations, and this information needs to be entered or replicated into all the other systems the company relies on to support critical operations: point of sale, supply chain, ERP and so forth. These are the systems that support just-in-time inventory, real-time transaction processing, and many other processes that drive consumer retail operations today. If the new reference codes aren't in place when the doors open, the retailer will not be able to process purchases, track inventory, or perform many of the other IT actions and procedures critical to the business.

The reality for large IT organizations today is that ripping out legacy systems and replacing them with new integrated systems is simply not a workable or affordable solution. That is why it is imperative that organizations be able to easily integrate installed systems, create composite applications, incorporate new systems into current environments, and manage these shared data assets through a more streamlined process. Siperian MDM Hub gives IT administrators, data stewards and project leaders exactly these capabilities. Specifically, it provides complete control over critical reference data management processes (a simple sketch of inbound and outbound resolution follows the list):
1. Reference data creation
2. Reference data mapping from various sources
3. Reference data workflow and collaboration
4. Hierarchical reference data management
5. Inbound reference data resolution
6. Outbound reference data resolution
7. Reference data services
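
As a rough illustration of what inbound and outbound reference data resolution (items 5 and 6) involve conceptually, the Python sketch below keeps one cross-reference per reference domain and translates codes in both directions. The system names and codes are invented for the example, and this is not the Siperian API.

```python
# Canonical reference data maintained in the hub (illustrative values).
canonical_locations = {"LOC-0001": "Downtown Flagship", "LOC-0002": "Airport Kiosk"}

# Mapping of each source system's local code to the canonical code.
xref = {
    ("POS", "S-17"): "LOC-0001",
    ("ERP", "4401"): "LOC-0001",
    ("SCM", "DTF"):  "LOC-0001",
    ("POS", "S-18"): "LOC-0002",
}

def resolve_inbound(system: str, local_code: str) -> str:
    """Inbound resolution: translate a source system's code to the canonical code."""
    return xref[(system, local_code)]

def resolve_outbound(canonical_code: str, target_system: str) -> str:
    """Outbound resolution: find the target system's local code for a canonical code."""
    for (system, local_code), canon in xref.items():
        if system == target_system and canon == canonical_code:
            return local_code
    raise KeyError(f"{canonical_code} has no mapping in {target_system}")

print(resolve_inbound("ERP", "4401"))       # LOC-0001
print(resolve_outbound("LOC-0001", "POS"))  # S-17
```

With this kind of cross-reference in place, a new store needs one canonical entry plus one mapping per consuming system, rather than ad hoc re-keying in every downstream application.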


Better Trade Promotion – It’s In The Data

In my previous post, I cited a recent AMR Research report, Trade Promotions: Are You Getting What You Pay For?, to make the point that business-as-usual is no longer adequate when it comes to CPG industry trade promotion efforts. What decision-makers are realizing is that the key to managing TPM programs in a more cost-effective and profitable way lies not in systems, but in the data itself. Indeed, what good are analytics if your data is incomplete, inaccurate, outdated, duplicated, and unrelated? In business terms, if you’re going to make the right decisions on which promotions to run next year, you need to be able to trust the data from last year’s reports.

Clearly the way forward lies in integrated data. The most effective way for sales and marketing executives to decide which programs to run is by identifying the correlation between sales results and aggregated information from previous trade promotion initiatives (including pricing, brand and demand metrics, plus competitive analysis from third-party vendors, e.g., IRI and ACNielsen). Likewise, accurate insight into past performance helps enormously in predicting future trends and maximizing results. If you can recognize, relate, and resolve customer and product data across distributed systems, then improved sales planning and demand forecasting become possible. With the right framework, trade promotions can be analyzed across the full value chain.

For CPG companies, that "right framework" is a master data management implementation. Master data management (MDM), long recognized as a strategic business driver, enables organizations to unify and consolidate data about their customers, brands, pricing, and distribution networks. MDM is particularly effective at integrating data that is fragmented across different systems and offline tools such as TPM, ERP, planning, CRM, financial systems, and yes, ad hoc spreadsheet files.

Additionally, MDM is effective for consolidating information from external data providers, such as IRI, TDLinx, and ACNielsen. Reconciling internal and external information from the field into a single repository for analysis improves decision-making and facilitates tracking of promotions across all functional areas. Through the creation of a centralized master data hub, CPG organizations can deliver the most reliable, complete views of key business data within their existing business processes and, more importantly, leverage these data assets within analytical business processes to improve trade promotion efforts. Return on investment in trade promotion is notoriously weak, with AMR Research reporting that fewer than 30 percent of trade promotion programs are profitable. CPG companies adopting MDM solutions can dramatically improve their TPM success rates because MDM boosts trade promotion effectiveness in concrete ways:

• Accurate customer data improves performance within operational systems and legacy applications for better planning, tracking, and measuring.

• Clearer visibility into a unified distribution network improves supply and demand efficiencies.

• Cleansed, consolidated customer master data helps improve demand planning, forecasting, and account management, which in turn lowers invoice disputes, write-offs, order defects, and returned shipments.


The Reference Data Problem (Part 1 of 2)

Business depends on information. Effective sales strategies, new business efforts and many day-to-day operations hinge on the ability to draw information from a variety of sources to obtain needed answers. But many companies have difficulty bringing together integrated views of data from the full diversity of sources across the enterprise. The problem is most often depicted as stemming from outdated IT infrastructure, or incompatible applications and data sources. In fact, lack of visibility into enterprise data is best understood as a reference data management challenge.

Commonly referred to as "code tables" or "lookup tables," reference data provides context for, or categorizes, data within the database, or even information outside the database. Basic pieces of information such as time zones, geographical information, zip codes and currency designations are typical reference data, though the category also includes such things as the chart of accounts, financial business unit hierarchies, legal entities, accounting master data, financial reference data and class-of-trade information. The unifying thread is that reference data creates a detailed framework within which the enterprise can record and understand transactional information as it changes over time.

The reason that reference data poses a challenge to visibility into important business information is that databases often have distinct reference data structures (see the table below). In order to combine, share or search for data across multiple databases, any conflicts between the various reference data sets must be resolved first. In small-scale, point-to-point integration projects, reference data management is generally not a major concern; resolving conflicts can be time-consuming and labor-intensive, but it is definitely solvable. The situation becomes problematic when organizations need to integrate larger and larger numbers of underlying systems, or incorporate new systems into an existing infrastructure.

Not surprisingly, the reference data problem crops up most often in very large companies that have heterogeneous operations that came about through acquisitions or through rapid organic growth. Within today’s Fortune 500-sized organizations, where it is not at all unusual for more than one hundred customer data sources and systems to be in use, reference data management can be a critical showstopper when it comes to cross-system integration or the creation of new systems relying on divergent data sources. This holds true with both analytical and operational integration. In my next post I will explore how firms today are making creative use of master data management technologies to solve their reference data management problems.

Simple Code Look-up

| Source System | Code Value | Code Description         |
|---------------|------------|--------------------------|
| CRM           | US         | UNITED STATES OF AMERICA |
| ERP           | 223        | USA                      |
| POS           | 138        | UNITED STATES            |
| CRM           | DK         | DENMARK, KINGDOM OF      |
| ERP           | 335        | DENMARK                  |
| POS           | 7          | DANMARK                  |

The reference data problem stems from the fact that different systems use different codes for the same information.
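
As a small illustration of how a cross-reference resolves the conflicts shown in the table above, the hypothetical mapping below translates each system-specific country code to a single canonical value. The canonical codes are an assumption made for the example, not part of any particular product.

```python
# Hypothetical cross-reference for the country codes in the table above.
country_xref = {
    ("CRM", "US"):  "USA",
    ("ERP", "223"): "USA",
    ("POS", "138"): "USA",
    ("CRM", "DK"):  "DNK",
    ("ERP", "335"): "DNK",
    ("POS", "7"):   "DNK",
}

def canonical_country(system: str, code: str) -> str:
    """Resolve a source system's local country code to the shared canonical code."""
    return country_xref[(system, code)]

# Records from different systems can now be compared apples-to-apples.
assert canonical_country("CRM", "US") == canonical_country("POS", "138")
```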



The Importance of Data for State Spend Regulation

With so many state-level governments adopting legislation limiting or mandating disclosure of payments to physicians, spend compliance is now top of mind for many pharmaceutical companies. Keeping pace with the varied and often conflicting requirements across states has always been difficult. But as momentum for wide-ranging healthcare reform increases, talk among stakeholders in Washington and around the industry has heated up around creating standards for complying with and enforcing these types of requirements.

Predictably, the idea of national standards for physician spend regulation has both supporters and detractors. Supporters point to the effort required to stay abreast of ever-changing and varied regulation, and the IT cost to support reporting requirements. Detractors decry national regulation as a political power grab rather than a tool for effective social policy, and complain that the general public does not understand the cost of general research and development or the limited profit windows posed by short patent life.

These arguments came to a head last month as Massachusetts introduced the broadest physician-spend law yet. What’s new and different about the Commonwealth’s law? For starters, it’s the first legislation in the U.S. applying to medical device manufacturers as well as pharmaceutical companies. Secondly, the law seems to have teeth that have been missing in the laws of other states, providing for a $5,000 per-incident fine. Previously, the largest public settlement on record was only $10,000 total.

Given the degree of uncertainty about the future of physician-spend legislation, the only certain course of action is to build a reliable, integrated source of physician data that can easily cross-reference to various AP, expense reporting, ERP, and CTMS systems. While reporting requirements will continue to morph, putting in place a reliable data foundation will allow you to respond rapidly to these changes as they occur. There are certain ground rules to follow, however. A strong data foundation must plug equally well into your BI environment, where the bulk of regulatory reporting will likely occur, and into the operational systems that you use to alert personnel to spend limits.
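
To make that idea concrete, here is a simplified sketch of what such a foundation might look like at its smallest: a cross-reference from each source system's local physician identifier to a master identifier, plus a rollup of spend per physician and state checked against a reporting threshold. The system names, identifiers, amounts and the threshold are all invented for illustration.

```python
from collections import defaultdict

# Cross-reference from (source system, local id) to the physician master id.
physician_xref = {
    ("AP",      "V-9912"): "DR-001",
    ("EXPENSE", "E-4410"): "DR-001",
    ("CTMS",    "INV-77"): "DR-001",
}

# Spend records pulled from the various source systems.
spend_records = [
    {"system": "AP",      "local_id": "V-9912", "state": "MA", "amount": 1200.00},
    {"system": "EXPENSE", "local_id": "E-4410", "state": "MA", "amount": 350.00},
    {"system": "CTMS",    "local_id": "INV-77", "state": "MA", "amount": 4000.00},
]

REPORTING_THRESHOLD = 50.00  # assumed per-state disclosure threshold, for illustration only

# Roll up spend per (physician, state) using the master identifier.
total_by_physician_state = defaultdict(float)
for rec in spend_records:
    master_id = physician_xref[(rec["system"], rec["local_id"])]
    total_by_physician_state[(master_id, rec["state"])] += rec["amount"]

for (physician, state), total in total_by_physician_state.items():
    if total > REPORTING_THRESHOLD:
        print(f"{physician} spend in {state} is reportable: ${total:,.2f}")
```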

How to go about creating such a strong data foundation? It might not be quite as difficult as it sounds. Read on to discover how one pharma company is using master data management to get a handle on its physician spend management.


The missing data link for trade promotion effectiveness

Let me start with a few startling statistics around trade promotion management (TPM) that AMR Research has collected over the years:

• The average consumer packaged goods (CPG) company plans to spend $6M on TPM projects in 2009.
• CPG companies plan to spend 14% of their revenue on trade promotions.
• Fewer than 30% of trade promotion programs are profitable.
• Only 30% of CPG firms measure their promotional results.

These numbers reveal two important things. First, CPG companies spend a lot on trade promotions. Second, even with spend at an all-time high, these companies are not seeing the intended return. It's clear that trade promotions, for the most part, continue to be ineffective and unprofitable. Not surprisingly, CPG companies are now looking for ways to remedy the situation. The question, though, is this: if low return on TPM investment has been a long-standing problem, why haven't CPG companies fixed it already?

In fact, many companies have tried to improve trade promotion effectiveness through various means, but few if any have attained this goal. The reasons vary, but in most cases the culprit is rooted in systems infrastructure. Most CPG companies have built up a complete set of applications in support of trade promotions over the years. Even so, few firms have a holistic view of the customers, products and related distribution network that supports their promotion planning, forecasting and integrated analytics. The missing link is integrated master data.

Indeed, what good are your analytics if your data is incomplete, inaccurate, outdated, duplicated and unrelated? More importantly still, how can you decide which promotions to run next year if you can’t trust the data used for your reports? What CPG decision-makers are realizing now is that the key to redirecting TPM programs in a more effective and profitable direction is the data itself.

If you can effectively recognize, relate and resolve customer and product data across distributed systems, it then becomes possible to improve sales planning and demand forecasting. You can create the right framework for analyzing trade promotions across the full value chain. The way to make this all happen is to implement a master data management solution, and I'll discuss exactly how to go about that in my next post.


Seven Ways to Reduce IT Costs with Master Data Management

Information Technology managers face a dilemma given the current economic climate: budgets are being cut, yet there’s no tolerance for decreases in IT service levels. Under such circumstances, how do you maintain or improve service levels, and continue to run the business efficiently? Smart IT decision-makers are seeking out technology investments that can help to accelerate cost reductions while streamlining business processes. Master Data Management (MDM) is exactly this kind of investment.

MDM ensures that critical enterprise data is validated as correct, consistent and complete when it is circulated for consumption by business processes, applications or users. But not all MDM technologies can provide these benefits. Only an integrated, model-driven, and flexible MDM platform with easy configurability can provide rapid time-to-value and lower total cost of ownership. Consider the ways a flexible MDM platform can reduce the following costs:

#1: Interface costs
Simplify expensive business processes that rely on point-to-point integrations by centralizing common information.

#2: Redundant third-party data costs
Eliminate duplicate data feeds from external data providers (Dun & Bradstreet, Reuters, etc.).

#3: Data cleanup costs
Integrate data from disparate applications into a central MDM system, making it possible to cleanse all data across the enterprise in a single system.

#4: Outsourced cleansing costs
Eliminate the need for outsourced manual cleansing by automatically cleansing, enriching and deduplicating data on an ongoing basis, then centrally storing it for future use.

#5: License, support and hardware costs
Centralize data across the enterprise to reduce the amount of, or even eliminate, redundant data stores and systems.

#6: Custom solution development and maintenance costs
Replace antiquated custom masters with a configurable off-the-shelf MDM platform to save significant development and maintenance costs associated with band-aiding custom solutions.

#7: Information delivery costs
Eliminate reporting errors, expedite audits and improve compliance by a) managing a single version of the truth along with a history of all changes, and b) delivering this information to any reporting, business intelligence or data warehousing environment.


Transforming MDM Data for Downstream Systems

Putting an MDM Hub in place can help make system integration easier and less costly.  The reason behind this is simple: a system integration strategy that relies on a single data model, to which all the systems involved refer, is much less complex than one that is based on point-to-point integration.  The Hub becomes a sort of Rosetta Stone, and the data model on which it’s based is the target of the system integration.  Getting data into the hub – transforming it, cleansing it, standardizing it – is a fundamental part of any Siperian implementation.  Getting the data out of the hub and into downstream systems is equally important in order for them to benefit from the corrected information and to allow business users to leverage the data operationally.
 
The biggest hurdle to overcome in getting golden copy data into the systems that require it is authoring and maintaining the transformation definitions. Regardless of transport mechanism – batch, JMS, SOAP, etc. – the data will have to be transformed so that the format and values are usable by the receiving system. If the number of receiving systems is large and heterogeneous – and in some cases there are hundreds or even a few thousand such systems – then creating the transformation definitions and keeping them up to date is a substantial task. While there is no silver bullet that will magically solve the problem with a single click, there are some tools and techniques that can help decrease the effort and cost required:
 
Use a Transformation Discovery Tool: These tools work by first profiling the source and the target data stores (which means you have to have some data in your Hub, as well as in the destination). After profiling, they look for what are called "binding conditions". A binding condition ties together a set of attributes in the source system with a set of attributes in the target system that have a high probability of representing the same information. Once a binding condition is defined, the tool determines the logic that will transform the source data into the destination format. The output varies with the tool, but is usually expressed either in pseudo-code or in SQL. When the number of downstream systems is high, and especially if the data model is complex, using a tool to help define the "guts" of the transformations can save a tremendous amount of time and money compared to doing it by hand.
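
A toy example may help make the idea of a binding condition concrete. The sketch below stands in for no particular discovery tool: it profiles a hub column and a target column and flags a candidate binding condition when their distinct values overlap heavily. Real tools do far more, including proposing the transformation logic itself; the column names and values here are assumptions for illustration.

```python
def overlap_score(source_values, target_values):
    """Fraction of distinct source values that also appear in the target column."""
    src, tgt = set(source_values), set(target_values)
    return len(src & tgt) / len(src) if src else 0.0

hub_country    = ["USA", "USA", "DNK", "FRA"]          # column profiled in the MDM hub
target_ctry_cd = ["USA", "DNK", "DNK", "FRA", "DEU"]   # column profiled in a downstream system

score = overlap_score(hub_country, target_ctry_cd)
if score > 0.8:
    # High overlap suggests the two attributes represent the same information;
    # a real tool would go on to propose transformation logic in pseudo-code or SQL.
    print(f"Candidate binding condition: hub.country -> target.ctry_cd (score {score:.2f})")
```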
 
Have a Single Transformation Platform: This may seem obvious, but a surprising number of system integration efforts end up being implemented piecemeal: each receiving system implements its own set of adapters in whatever language and with whatever application-specific tools it has on hand. This has the extremely undesirable effect of scattering the transformation logic throughout an organization, which makes maintenance and management of the integration a nightmare. To avoid this, keep all of the transformations in a single platform, with a single authoring environment and preferably a single execution environment. Not only will this greatly decrease the complexity and cost of maintaining the downstream data syndication, it will also make it possible to reuse transformation and validation logic across receiving systems.
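
As a rough sketch of what a single transformation platform can look like at its smallest, the example below keeps every outbound transformation in one registry keyed by receiving system and reuses one validation routine across targets. The target systems and field names are assumptions made up for the illustration, not any product's API.

```python
golden_record = {"name": "ACME CORP", "country": "USA", "status": "ACTIVE"}

def validate(record: dict) -> dict:
    """Shared validation applied once, regardless of target system."""
    assert record.get("name") and record.get("country"), "incomplete golden record"
    return record

# One registry holds every outbound transformation, keyed by receiving system.
TRANSFORMS = {
    "CRM": lambda r: {"AccountName": r["name"].title(), "Country": r["country"]},
    "ERP": lambda r: {"CUST_NM": r["name"], "CTRY_CD": r["country"][:2], "STAT": r["status"][0]},
}

def syndicate(record: dict, target: str) -> dict:
    """One authoring and execution path for every downstream system."""
    return TRANSFORMS[target](validate(record))

print(syndicate(golden_record, "CRM"))  # {'AccountName': 'Acme Corp', 'Country': 'USA'}
print(syndicate(golden_record, "ERP"))
```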

Once the golden copy arrives at its downstream destination, the operational system can leverage it through the normal application interfaces and processes. There are some who might ask if it is appropriate to update the downstream records, or if alternative business processes or interfaces should be used directly against the Hub itself. All of these are good questions and once again speak to the criticality of having a Hub that can provide these options, if and when business needs dictate. Transformation is a continuous process, not only for your data, but for your business as well.


Back to the Future, your MDM Point of “View”

When Michael J. Fox's Marty McFly gets into the DeLorean and heads back to 1955 in the first installment of the classic movie Back to the Future, he gets a rude awakening when he sees what a wimp his dad is. With the DeLorean time machine equipped with the flux capacitor, he was able to participate in the past, ultimately defeating Biff and making some serious changes. The result was that when he returned to the present, things were dramatically different and, as it happens, very much in his favor.

With MDM, data stewards are often challenged with examining the history of a golden record and determining what combination of matches and merges, or which changes from the various contributing sources, resulted in the combined data being presented today. This capability is not only important for audit and compliance purposes; it also has serious ramifications if merges were executed, automatically or manually, that resulted in an incorrect golden record.

Take a classic example from the life sciences industry, where many MDM systems are not equipped to handle a common scenario in which a father and son work together as physicians in a shared practice. Their names are Dr. John Smith and Dr. John Smith Jr., operating (pun intended) at the same address with no further distinguishing identifiers. Without fine-grained match rules to specifically handle these instances, these two unique records might get merged together by inferior MDM systems. The result: Jr. gets absorbed into Dad's record, or vice versa. Furthermore, the MDM system may not be able to provide the necessary audit and history trail to allow a correct "undo," since many other activities may have taken place since the actual consolidation event.

What would Marty McFly and Dr. Emmett Brown do in this case? They would probably fire up the DeLorean, arrive back at the exact point at which the original merge took place, and prevent the records from ever merging, thereby altering history and the present state. Unfortunately, for MDM it isn't that "simple," because additional valid activities may have taken place against the contributing records, and those updates may need to be preserved. So changing history is not the right thing to do. The correct, and compliant, MDM approach is to determine where the merge took place, ideally using a point-of-"view" visual timeline to navigate to the relevant point-in-time details, perform the necessary analysis and apply the correct unmerge, all while preserving the historical updates made to the individual records where appropriate. And all of this from the comfort of a desk, with no need to hop into a DeLorean and risk electrocution by trying to harness a lightning bolt from a clock tower.
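
For readers who like to see the idea in code, here is a deliberately simplified conceptual sketch, not the Siperian implementation, of recording a pre-merge snapshot as a merge event so that an incorrect consolidation can be unmerged later while updates made to the golden record after the merge are preserved. The physician names echo the example above; everything else is invented.

```python
import copy, time

class GoldenRecord:
    def __init__(self, record_id, attributes):
        self.record_id = record_id
        self.attributes = attributes
        self.merge_events = []  # point-in-time history of consolidation events

    def merge(self, other):
        """Absorb another record, snapshotting both sides before the merge."""
        self.merge_events.append({
            "timestamp": time.time(),
            "survivor_before": copy.deepcopy(self.attributes),
            "absorbed_record": copy.deepcopy(other),
        })
        # Simplistic survivorship: non-empty attributes from the absorbed record win.
        self.attributes.update({k: v for k, v in other.attributes.items() if v})

    def unmerge_last(self):
        """Undo the most recent merge, restoring the absorbed record while keeping
        attributes that were added to the survivor after the merge took place."""
        event = self.merge_events.pop()
        added_after_merge = {
            k: v for k, v in self.attributes.items()
            if event["survivor_before"].get(k) != v
            and event["absorbed_record"].attributes.get(k) != v
        }
        self.attributes = {**event["survivor_before"], **added_after_merge}
        return event["absorbed_record"]

dad = GoldenRecord("HCP-001", {"name": "Dr. John Smith", "address": "12 Main St"})
son = GoldenRecord("HCP-002", {"name": "Dr. John Smith Jr.", "address": "12 Main St"})

dad.merge(son)                              # the incorrect automatic merge
dad.attributes["specialty"] = "Cardiology"  # a valid update made after the merge
restored = dad.unmerge_last()               # Jr. is restored; the later update survives

print(dad.attributes)       # {'name': 'Dr. John Smith', 'address': '12 Main St', 'specialty': 'Cardiology'}
print(restored.attributes)  # {'name': 'Dr. John Smith Jr.', 'address': '12 Main St'}
```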

Now if only we could travel "into the future" by running match and merge scenarios to see how specific combinations would result in greater relationship visibility (more about this in my next post). If that were possible, then, just like Back to the Future, MDM would be destined to be a "timeless" classic.


Stimulate your Business with an MDM Bailout

If you Google the term "bailout," you'll get over 28M results. As a sign of our times, a term that was initially limited to the finance and auto industries has been adopted as a universal plea for "rescue." So perhaps it is time to offer businesses an "MDM bailout."

 

After reeling from precipitous sales declines, companies are chasing ever-decreasing cost targets. And if your company is in one of the industries targeted for increased governmental scrutiny, your business teams are looking to meet new federal and local compliance regulations. Believe it or not, this offers an excellent opportunity for IT teams to step up and offer an "MDM bailout plan" to their business counterparts, even though IT departments have seen their budgets slashed.

 

Here's why. At an enterprise level, companies are seeing significant merger and acquisition activity, such as in the life sciences industry. This means there is a strong need to consolidate customer segments and product lines rapidly to realize M&A cost savings. In other industries, such as financial services, increased regulation and compliance requirements are mandating frequent reporting and greater transparency into business transactions. So an "MDM bailout plan" targeted at extracting cost savings within your company, or at delivering regulatory compliance, should be well received in this economic climate.

 

So here is a situation where a trillion or two is not needed to make a difference. With a targeted, rapidly deployed "MDM bailout," IT organizations have an opportunity to help improve their companies' bottom line.

 

 
