Tag Archives: Master Data Management
“Raw materials costs are the company’s single largest expense category,” said Steve Jenkins, Global IT Director at Valspar, at MDM Day in London. “Data management technology can help us improve business process efficiency, manage sourcing risk and reduce RFQ cycle times.”
Valspar is a $4 billion global manufacturing company that produces a portfolio of leading paint and coating brands. At the end of 2013, the 200-year-old company celebrated record sales and earnings and completed two acquisitions. Valspar now has 10,000 employees operating in 25 countries.
As is the case for many global companies, growth creates complexity. “Valspar has multiple business units with varying purchasing practices. We source raw materials from 1,000s of vendors around the globe,” shared Steve.
“We want to achieve economies of scale in purchasing to control spending,” Steve said as he shared Valspar’s improvement objectives. “We want to build stronger relationships with our preferred vendors. Also, we want to develop internal process efficiencies to realize additional savings.”
Poorly managed vendor and raw materials data was impacting Valspar’s buying power
The Valspar team, with its sharp focus on productivity, had an “Aha” moment. “We realized our buying power was limited by the age and quality of available vendor data and raw materials data,” revealed Steve.
The core vendor data and raw materials data that should have been the same across multiple systems wasn’t. Data was often missing or wrong. This made it difficult to calculate the total spend on raw materials. It was also hard to calculate the total cost of expedited freight of raw materials. So, employees used a manual, time-consuming and error-prone process to consolidate vendor data and raw materials data for reporting.
These data issues were getting in the way of achieving their improvement objectives. Valspar needed a data management solution.
Valspar needed a single trusted source of vendor and raw materials data
The team chose Informatica MDM, master data management (MDM) technology. It will be their enterprise hub for vendors and raw materials. It will manage this data centrally on an ongoing basis. With Informatica MDM, Valspar will have a single trusted source of vendor and raw materials data.
Informatica PowerCenter will access data from multiple source systems. Informatica Data Quality will profile the data before it goes into the hub. Then, after Informatica MDM works its magic, PowerCenter will deliver clean, consistent, connected and enriched data to target systems.
Better vendor and raw materials data management results in cost savings
Valspar expects to gain the following business benefits:
- Streamline the RFQ process to accelerate raw materials cost savings
- Reduce the total number of raw materials SKUs and vendors
- Increase productivity of staff focused on pulling and maintaining data
- Leverage consistent global data visibility to:
- increase leverage during contract negotiations
- improve acquisition due diligence reviews
- facilitate process standardization and reporting
Valspar’s vision is to transform data and information into trusted organizational assets
“Mastering vendor and raw materials data is Phase 1 of our vision to transform data and information into trusted organizational assets,” shared Steve. In Phase 2 the Valspar team will master customer data so they have immediate access to the total purchases of key global customers. In Phase 3, Valspar’s team will turn their attention to product or finished goods data.
Steve ended his presentation with some advice. “First, include your business counterparts in the process as early as possible. They need to own and drive the business case as well as the approval process. Also, master only the vendor and raw materials attributes required to realize the business benefit.”
Want more? Download the Total Supplier Information Management eBook. It covers:
- Why your fragmented supplier data is holding you back
- The cost of supplier data chaos
- The warning signs you need to be looking for
- How you can achieve Total Supplier Information Management
In my last blog I promised I would report back on my experience using Informatica Data Quality, a software tool that helps automate the tedious data plumbing work that routinely consumes more than 80% of an analyst’s time. Today, I am happy to share what I’ve learned in the past couple of months.
But first, let me confess something. The reason it took me so long to get here was that I dreaded trying the software. Never a savvy computer programmer, I was convinced I would not be technical enough to master the tool and that it would turn into a lengthy learning experience. That mental barrier dragged me down for a couple of months before I finally bit the bullet and got my hands on the software. I am happy to report that my fear was entirely unnecessary. It took me half a day to get a good handle on most features in the Analyst Tool, the component of Data Quality designed for analysts and business users. I then spent three days figuring out how to maneuver the Developer Tool, another key piece of the Data Quality offering used mostly by, you guessed it, developers and technical users. I admit I am no master of the Developer Tool after three days of wrestling with it, but I got the basics. More importantly, my hands-on time with the whole product helped me understand the logic behind its design, and see for myself how analysts and business users can easily collaborate with their IT counterparts within the Data Quality environment.
To break it all down, first comes Profiling. As analysts, we understand all too well the importance of profiling: it provides an anatomy of the raw data we collect. In many cases it is a must-have first step in data preparation, especially when the raw data comes from different places and in different formats. As a heavy Excel user, I used to rely on every trick in the spreadsheet to gain visibility into my data. I would filter, sort, build pivot tables and make charts to learn what was in my raw data. Depending on how many columns were in the data set, it could take hours, sometimes days, just to figure out whether the data I received was any good at all, and how good it was.
Switching to the Analyst Tool in Data Quality, learning my raw data becomes a task of a few clicks: six at most if I am picky about how I want it done. Basically, I load my data, click a couple of options, and let the software do the rest. A few seconds later I can visualize the statistics of the data fields I choose to examine, and I can measure the quality of the raw data using the Scorecard feature. No more fiddling with spreadsheets and staring at busy rows and columns.
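To make the idea concrete, a column profile boils down to per-column statistics like the ones sketched below. This is not Informatica’s implementation, just a minimal Python illustration of what a profiling run reports; the function name and sample data are invented for the example.

```python
from collections import Counter

def profile_column(values):
    """Compute basic profile statistics for one column of raw data."""
    # Treat empty strings and common null markers as missing values
    non_null = [v for v in values if v not in (None, "", "NULL")]
    counts = Counter(non_null)
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(counts),
        "top_values": counts.most_common(3),
    }

# Hypothetical sample column with a blank cell and a formatting variant
states = ["CA", "NY", "CA", "", "TX", "CA", "N.Y."]
print(profile_column(states))
```

A profile like this immediately surfaces the blank cell and the fact that “NY” and “N.Y.” are being counted as two distinct values, which is exactly the kind of insight that used to take hours of filtering and pivoting in Excel.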
Once profiling tells me my raw data is adequate, I still need to clean up the nonsense in it before doing any analysis work; otherwise bad things happen, what we call garbage in, garbage out. Again, in the past Excel came to the rescue for cleaning and standardizing data. I would play with different functions and learn new ones, write macros, or simply do it by hand. It was tedious, but it worked on a static data set. The problem came when I needed to incorporate new data sources in a different format: many of the previously built formulas would break and become inapplicable, and I would have to start all over again. Spreadsheet tricks simply don’t scale in those situations.
With the Data Quality Analyst Tool, I can use the Rule Builder to create a set of logical rules in a hierarchical manner based on my objectives, and test those rules to see immediate results. The nice thing is that those rules are not tied to data format, location or size, so I can reuse them when new data comes in. Profiling can be done at any time, so I can re-examine my data after applying the rules, as many times as I like. Once I am satisfied with the rules, they are passed on to my peers in IT, who create executable rules from the logic I built and run them automatically in production. No more worrying about differences in format, volume or other discrepancies across data sets; the software takes care of that complexity, and all I need to do is build meaningful rules that transform the data into good shape for my analysis. The best part? I can do all of the above without hassling IT. Feeling empowered is awesome!
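The idea of format-independent, reusable rules can be sketched in plain Python: each rule is a small function keyed to a logical field name rather than a file position, so the same rule set runs unchanged when new data arrives in a different layout. The rule names and sample record below are my own invention, not Informatica’s rule language.

```python
def trim(value):
    """Remove leading/trailing whitespace."""
    return value.strip()

def uppercase_state(value):
    """Standardize state codes: uppercase, drop periods (N.Y. -> NY)."""
    return value.upper().replace(".", "")

def standardize_phone(value):
    """Keep only digits; retain the last 10 for a US number."""
    digits = "".join(ch for ch in value if ch.isdigit())
    return digits[-10:] if len(digits) >= 10 else digits

# Rules are keyed by logical field name, not by column position,
# so they survive changes in column order or source format.
RULES = {"state": [trim, uppercase_state], "phone": [trim, standardize_phone]}

def apply_rules(record):
    cleaned = dict(record)
    for field, rule_chain in RULES.items():
        for rule in rule_chain:
            cleaned[field] = rule(cleaned[field])
    return cleaned

print(apply_rules({"state": " n.y. ", "phone": "(415) 555-0199"}))
# → {'state': 'NY', 'phone': '4155550199'}
```

The key design point mirrored here is that the analyst owns the rule logic while IT owns how (and on what volumes) the rules execute in production.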
Using the right tool for the right job improves our results, saves us time, and makes our jobs much more enjoyable. For me, there is no more Excel for data cleansing after trying our Data Quality software, because now I can get more done in less time, and I am no longer stressed out by the lengthy process.
I encourage my analyst friends to try Informatica Data Quality, or at least its Analyst Tool. If, like me, you are wary of a steep learning curve, fear no more. And if Data Quality can cut your data cleansing time in half (mind you, our customers have reported higher numbers), think how many more predictive models you could build, how much more you would learn, and how much faster, and with more confidence, you could build your reports in Tableau.
Get connected. Be connected. Make connections. Find connections. The Internet of Things (IoT) is all about connecting people, processes, data and, as the name suggests, things. The recent social media frenzy surrounding the ALS Ice Bucket Challenge has certainly reminded everyone of the power of social media, the Internet and a willingness to answer a challenge. Fueled by personal and professional connections, the craze has transformed fund raising for at least one charity. Similarly, IoT may potentially be transformational to the business of the public sector, should government step up to the challenge.
Government is struggling with the concept and the reality of how IoT really relates to the business of government, and perhaps rightfully so. For commercial enterprises, IoT is far more tangible and simply more fun. Gaming, televisions, watches, Google Glass, smartphones and tablets are all about delivering over-the-top, new and exciting consumer experiences. Industry is delivering transformational innovations that connect people to places, data and other people at a record pace.
It’s time to accept the challenge. Government agencies need to keep pace with their commercial counterparts and harness the power of the Internet of Things. The end game is not to deliver new, faster, smaller, cooler electronics; the end game is to create solutions that let devices connecting to the Internet interact and share data, regardless of their location, manufacturer or format and make or find connections that may have been previously undetectable. For some, this concept is as foreign or scary as pouring ice water over their heads. For others, the new opportunity to transform policy, service delivery, leadership, legislation and regulation is fueling a transformation in government. And it starts with one connection.
One way to start could be linking previously siloed systems together or creating a golden record of all citizen interactions through a Master Data Management (MDM) initiative. It could start with a big data and analytics project to determine and mitigate risk factors in education or linking sensor data across multiple networks to increase intelligence about potential hacking or breaches. Agencies could stop waste, fraud and abuse before it happens by linking critical payment, procurement and geospatial data together in real time.
This is the Internet of Things for government. This is the challenge. This is transformation.
This blog post feels a little bit like bragging… and OK, I guess it is pretty self-congratulatory to announce that this year, Informatica was again chosen as a leader in MDM and PIM by The Information Difference. As you may know, The Information Difference is an independent research firm that specializes in the MDM industry and each year surveys, analyzes and ranks MDM and PIM providers and customers around the world. This year, like last year, The Information Difference named Informatica tops in the space.
Why do I feel especially chuffed about this? Because of our customers.
“Inaccurate, inconsistent and disconnected supplier information prohibits us from doing accurate supplier spend analysis, leveraging discounts, comparing and choosing the best prices, and enforcing corporate standards.”
This is a quotation from a manufacturing company executive. It illustrates the negative impact that poorly managed supplier information can have on a company’s ability to cut costs and achieve revenue targets.
Many supply chain and procurement teams at large companies struggle to see the total relationship they have with suppliers across product lines, business units and regions. Why? Supplier information is scattered across dozens or hundreds of Enterprise Resource Planning (ERP) and Accounts Payable (AP) applications. Too much valuable time is spent manually reconciling inaccurate, inconsistent and disconnected supplier information in an effort to see the big picture. All this manual effort results in back office administrative costs that are higher than they should be.
Do these quotations from supply chain leaders and their teams sound familiar?
“We have 500,000 suppliers. 15-20% of our supplier records are duplicates. 5% are inaccurate.”
“I get 100 e-mails a day questioning which supplier to use.”
“To consolidate vendor reporting for a single supplier between divisions is really just a guess.”
“Every year 1099 tax mailings get returned to us because of invalid addresses, and we pay a lot of Schedule B fines to the IRS.”
“Two years ago we spent a significant amount of time and money cleansing supplier data. Now we are back where we started.”
Please join me and Naveen Sharma, Director of the Master Data Management (MDM) Practice at Cognizant, for a Webinar, Supercharge Your Supply Chain Applications with Better Supplier Information, on Tuesday, July 29th at 11 am PT.
During the Webinar, we’ll explain how better managing supplier information can help you achieve the following goals:
- Accelerate supplier onboarding
- Mitigate the risk of supply disruption
- Better manage supplier performance
- Streamline billing and payment processes
- Improve supplier relationship management and collaboration
- Make it easier to evaluate non-compliance with Service Level Agreements (SLAs)
- Decrease costs by negotiating favorable payment terms and SLAs
I hope you can join us for this upcoming Webinar!
“If I use master data technology to create a 360-degree view of my client and I have a data breach, then someone could steal all the information about my client.”
Um, wait, what? Insurance companies take personally identifiable information very seriously. But the statement misrepresents the relationship between mastering client data and securing client data. Let’s dissect the statement and see what master data and data security really mean for insurers. We’ll start by level setting a few concepts.
What is your Master Client Record?
Your master client record is your 360-degree view of your client. It represents everything about your client. It uses Master Data Management technology to virtually integrate and syndicate all of that data into a single view. It leverages identifiers to ensure integrity in the view of the client record. And finally it makes an effort through identifiers to correlate client records for a network effect.
There are benefits to understanding everything about your client. The shape and view of each client is specific to your business. As an insurer looks at its policyholders, the view of “client” is based on the relationships and context that the client has with the insurer. These are the policies, claims, family relationships, history of activities and relationships with agency channels.
And what about security?
Naturally, there is private data in a client record. But the consolidated client record contains no more or less personally identifiable information than the source systems already hold. In fact, most of the data a malicious party would be searching for can likely be found in just a handful of database locations, and breaches also happen “on the wire.” Policy numbers, credit card info, Social Security numbers and birth dates can be found in fewer than five database tables, and they can be found without a whole lot of intelligence or analysis.
That data should be secured. That means the data should be encrypted or masked so that it remains protected in the event of a breach. Informatica’s data masking technology allows this data to be secured wherever it resides. It provides access control so that only the right people and applications can see the data in unsecured form. You could even go so far as to secure all of your client record data fields; that’s a business and application choice. Just do not confuse field- or database-level security with a decision NOT to assemble your golden policyholder record.
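The masking idea itself is simple: sensitive fields are obscured by default, and only authorized callers see the clear value. The sketch below is illustrative Python, not Informatica’s masking product; the field list, role name and record shape are all assumptions for the example.

```python
# Fields we assume should never reach unauthorized eyes in clear form
SENSITIVE_FIELDS = {"ssn", "credit_card", "birth_date"}

def mask(value, visible=4):
    """Replace all but the last few characters with '*'."""
    return "*" * max(len(value) - visible, 0) + value[-visible:]

def read_record(record, role):
    """Return the record, masking sensitive fields for unauthorized roles."""
    if role == "claims_admin":  # assumed privileged role for this sketch
        return dict(record)
    return {k: mask(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

policyholder = {"name": "Pat Doe", "ssn": "123-45-6789", "policy": "HO-2291"}
print(read_record(policyholder, role="marketing_analyst"))
# → {'name': 'Pat Doe', 'ssn': '*******6789', 'policy': 'HO-2291'}
```

Note that the masking decision lives at the field level, independent of whether those fields also participate in a consolidated golden record, which is exactly the point of the paragraph above.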
What to worry about? And what not to worry about?
Do not succumb to fear of mastering your policyholder data. Master Data Management technology can provide a 360-degree view, but that view is only meaningful within your enterprise and applications. The view of “client” is highly contextual, coupled with your business practices, products and workflows. Even if attackers breach your defenses and grab data, they are looking for the simple PII and financial data; they grab it and get out. If they could see your 360-degree view of a client, they wouldn’t understand it. So don’t overcomplicate the security of your golden policyholder record. As long as you have secured the necessary data elements, you’re good to go. The business opportunity cost of NOT mastering your policyholder data far outweighs any imagined risk of a PII breach.
So what does your Master Policyholder Data allow you to do?
Imagine knowing more about your policyholders. Let that soak in for a bit. It feels good to think that you can make it happen. And you can. For an insurer, Master Data Management provides powerful opportunities across everything from sales, marketing and product development to claims and agency engagement. Each channel and activity has discrete ROI, along with direct-line impact on revenue, policyholder satisfaction and market share. Let’s look at just a few very real examples that insurers are attempting to tackle today.
- For a policyholder of a certain demographic with an auto and home policy, what is the next product my agent should discuss?
- How many people live in a certain policyholder’s household? Are there any upcoming teenage drivers?
- Does this personal lines policyholder own a small business? Are they a candidate for a business packaged policy?
- What is your policyholder claims history? What about prior carriers and network of suppliers?
- How many touch points have your agents had with your policyholders? Were they meaningful?
- How can you connect with your policyholders in social media settings and make an impact?
- What is your policyholders’ mobile usage, and what are they doing online that might interest your Marketing team?
These are just some of the examples of very streamlined connections that you can make with your policyholders once you have your 360-degree view. Imagine the heavy lifting required to do these things without a Master Policyholder record.
Fear is the enemy of innovation. In mastering policyholder data, it is important to have two distinct work streams. First, secure the necessary data elements using data masking technology. Once that is secure, gain understanding through the mastering of your policyholder record. Only then will you truly be able to take your clients’ experience to the next level. When that happens, watch your revenue grow in leaps and bounds.
Step 1: Determine if you have a customer data problem
A statement I often hear from marketing and sales leaders unfamiliar with the concept of mastering customer data is, “My CRM application is our single source of trusted customer data.” They use CRM to onboard new customers, collecting addresses, phone numbers and email addresses. They append a DUNS number. So it’s no surprise they may expect they can master their customer data in CRM. (To learn more about the basics of managing trusted customer data, read this: How much does bad data cost your business?)
It may seem logical to expect your CRM investment to be your customer master – especially since so many CRM vendors promise a “360 degree view of your customer.” But you should only consider your CRM system as the source of truth for trusted customer data if:
· You have only a single instance of Salesforce.com, Siebel CRM, or other CRM
· You have only one sales organization (vs. distributed across regions and LOBs)
· Your CRM manages all customer-focused processes and interactions (marketing, service, support, order management, self-service, etc.)
· The master customer data in your CRM is clean, complete, fresh, and free of duplicates
Unfortunately, most mid-to-large companies cannot claim such simple operations. For most large enterprises, CRM never delivered on that promise of a trusted 360-degree customer view. That’s what prompted Gartner analysts Bill O’Kane and Kimberly Collins to write this report, MDM is Critical to CRM Optimization, in February 2014.
“The reality is that the vast majority of the Fortune 2000 companies we talk to are complex,” says Christopher Dwight, who leads a team of master data management (MDM) and product information management (PIM) sales specialists for Informatica. Christopher and team spend each day working with retailers, distributors and CPG companies to help them get more value from their customer, product and supplier data. “Business-critical customer data doesn’t live in one place. There’s no clear and simple source. Functional organizations, processes, and systems landscapes are much more complicated. Typically they have multiple selling organizations across business units or regions.”
As an example, listed below are typical functional organizations, and common customer master data-dependent applications they rely upon, to support the lead-to-cash process within a typical enterprise:
· Marketing: marketing automation, campaign management and customer analytics systems.
· Ecommerce: e-commerce storefront and commerce applications.
· Sales: sales force automation, quote management,
· Fulfillment: ERP, shipping and logistics systems.
· Finance: order management and billing systems.
· Customer Service: CRM, IVR and case management systems.
The fragmentation of critical customer data across multiple organizations and applications is further exacerbated by the explosive adoption of Cloud applications such as Salesforce.com and Marketo. Merger and acquisition (M&A) activity is common among many larger organizations where additional legacy customer applications must be onboarded and reconciled. Suddenly your customer data challenge grows exponentially.
Step 2: Measure how customer data fragmentation impacts your business
Ask yourself: if your customer data is inaccurate, inconsistent and disconnected, can you:
· See the full picture of a customer’s relationship with the business across business units, product lines, channels and regions?
· Better understand and segment customers for personalized offers, improving lead conversion rates and boosting cross-sell and up-sell success?
· Deliver an exceptional, differentiated customer experience?
· Leverage rich sources of 3rd-party data, as well as big data such as social, mobile and sensor data, to enrich customer insights?
“One company I recently spoke with was having a hard time creating a single consolidated invoice for each customer that included all the services purchased across business units,” says Dwight. “When they investigated, they were shocked to find that 80% of their consolidated invoices contained errors! The root cause was inaccurate, inconsistent and disconnected customer data. This was a serious business problem costing the company a lot of money.”
Let’s do a quick test right now. Are any of these companies your customers: GE, Coke, Exxon, AT&T or HP? Do you know the legal company names for any of these organizations? Most people don’t. I’m willing to bet there are at least a handful of variations of these company names, such as Coke, Coca-Cola and The Coca Cola Company, in your CRM application. Chances are there are dozens of variations in the numerous applications where business-critical customer data lives, and those customer profiles are tied to transactions. That’s hard to clean up: you can’t simply merge records, because you need to maintain the transaction history and audit history.
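The first step in linking those name variants is usually computing a normalized match key, so that punctuation and legal noise words stop hiding the fact that two records refer to the same company. Here is a toy Python sketch of that normalization step; the suffix list is a simplified assumption, and real MDM matching adds fuzzy comparison and alias tables on top of it.

```python
import re

# Simplified list of legal/noise words to ignore when matching names
LEGAL_SUFFIXES = {"the", "inc", "corp", "co", "company", "ltd"}

def normalize(name):
    """Strip punctuation and legal noise words to produce a match key."""
    words = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
    return " ".join(w for w in words if w not in LEGAL_SUFFIXES)

variants = ["Coca-Cola", "The Coca Cola Company", "Coca Cola Co.", "Coke"]
print({v: normalize(v) for v in variants})
# The first three all produce the key "coca cola"; "Coke" is a trade-name
# alias that still needs an alias table or fuzzy match to link.
```

This is also why an MDM hub links source records to a shared golden-record ID rather than physically merging rows: the CRM and billing records, with their transaction and audit history, stay intact.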
The same holds true for B2C customers. In fact, I’m a nightmare for a large marketing organization. I get multiple offers and statements addressed to different versions of my name: Jakki Geiger, Jacqueline Geiger, Jackie Geiger and J. Geiger. But my personal favorite is when I get an offer from a company I do business with addressed to “Resident”. Why don’t they know I live here? They certainly know where to find me when they bill me!
Step 3: Transform how you view, manage and share customer data
Why do so many businesses that try to master customer data in CRM fail? Let’s be frank. CRM systems such as Salesforce.com and Siebel CRM were purpose built to support a specific set of business processes, and for the most part they do a great job. But they were never built with a focus on mastering customer data for the business beyond the scope of their own processes.
But perhaps you disagree with everything discussed so far. Or you’re a risk-taker and want to take on the challenge of bringing all master customer data that exists across the business into your CRM app. Be warned, you’ll likely encounter four major problems:
1) Your master customer data in each system has a different data model with different standards and requirements for capture and maintenance. Good luck reconciling them!
2) To be successful, your customer data must be clean and consistent across all your systems, which is rarely the case.
3) Even if you use DUNS numbers, some systems use the global DUNS number; others use a regional DUNS number. Some manage customer data at the legal entity level, others at the site level. How do you connect those?
4) If there are duplicate customer profiles in CRM tied to transactions, you can’t just merge the profiles because you need to maintain the transactional integrity and audit history. In this case, you’re dead on arrival.
There is a better way! Customer-centric, data-driven companies recognize these obstacles and they don’t rely on CRM as the single source of trusted customer data. Instead, they are transforming how they view, manage and share master customer data across the critical applications their businesses rely upon. They embrace master data management (MDM) best practices and technologies to reconcile, merge, share and govern business-critical customer data.
More and more B2B and B2C companies are investing in MDM capabilities to manage customer households and multiple views of customer account hierarchies (e.g. a legal view can be shared with finance, a sales territory view can be shared with sales, or an industry view can be shared with a business unit).
In the words of Gartner analysts Bill O’Kane and Kimberly Collins: “Through 2017, CRM leaders who avoid MDM will derive erroneous results that annoy customers, resulting in a 25% reduction in potential revenue gains.” (Gartner, MDM is Critical to CRM Optimization, February 2014.)
Are you ready to reassess your assumptions about mastering customer data in CRM?
Get the Gartner report now: MDM is Critical to CRM Optimization.