Tag Archives: Data Quality

Analytics Stories: A Banking Case Study

As I have shared in other posts in this series, businesses are using analytics to improve their internal and external-facing business processes and to strengthen their “right to win” in the markets in which they operate. In banking, the right to win increasingly comes from improving two core sets of business capabilities: risk management and customer service.

Significant change has occurred in risk management over the last few years following the subprime crisis and the subsequent credit crunch. These environmental changes have put increased regulatory pressure upon banks around the world. Among other things, banks need to comply with measures aimed at limiting the overvaluation of real estate assets and at preventing money laundering. A key element of handling these requirements is ensuring that go-forward business decisions are made consistently using the most accurate business data available. It seems clear that data consistency can determine the quality of business operations, especially the management of business risk.

At the same time as banks need to strengthen their business capabilities around operations, and in particular risk management, they also need to use better data to improve the loyalty of their existing customer base.

Banco Popular launches itself into the banking vanguard

Banco Popular is an early responder to the need for better banking data consistency. Its leadership created a Quality of Information Office (notably based not within IT but within the Office of the President) with a mandate to deliver on two business objectives:

  1. Ensuring compliance with governmental regulations
  2. Improving customer satisfaction based on accurate and up-to-date information

Part of the second objective is ensuring that each of Banco Popular’s customers is offered the ideal products for their specific circumstances. This is interesting because, by its nature, it also supports the first objective. To validate that it is achieving both mandates, the Office started by creating an “Information Quality Index”. The Index draws on many different types of data relating to each of the bank’s six million customers, including addresses, contact details, socioeconomic data, occupation data, and banking activity data. The index is expressed as a percentage reflecting the quality of the information collected for each individual customer. The overarching target set for the organization is a score of 90 percent; presently, the figure sits at 75 percent. There is room to grow and improve!
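
The article doesn’t say how the index is calculated, but the general idea (scoring each customer record on how many of the tracked attributes are present and usable, then averaging across the customer base) can be sketched as follows. The field names, and the use of simple completeness as a proxy for quality, are assumptions for illustration only.

```python
# Hypothetical sketch of a per-customer "Information Quality Index":
# score each record by how many tracked attributes are populated,
# then express the result as a percentage.

CHECKED_FIELDS = [  # illustrative only; the bank's real variable list is not public
    "address", "phone", "email", "occupation",
    "socioeconomic_segment", "last_banking_activity",
]

def quality_index(customer: dict) -> float:
    """Return the share of checked fields that are populated, as a percentage."""
    filled = sum(1 for f in CHECKED_FIELDS if customer.get(f) not in (None, ""))
    return 100.0 * filled / len(CHECKED_FIELDS)

def portfolio_index(customers: list[dict]) -> float:
    """Average the per-customer scores to get a bank-wide figure (e.g. 75 percent)."""
    return sum(quality_index(c) for c in customers) / len(customers)

# Example: one of six fields is missing, so the score is about 83.3
customer = {"address": "Calle Mayor 1", "phone": "+34 600 000 000",
            "email": "", "occupation": "engineer",
            "socioeconomic_segment": "B", "last_banking_activity": "2014-10-01"}
print(round(quality_index(customer), 1))
```

A real index would also weigh accuracy and freshness of each value, not just whether a value is present.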

Current data management systems limit attainment of its business goals

Unfortunately, the millions of records needed by the Quality of Information Office are spread across different tables in the organization’s central computing system and must be combined into one information file per customer to be useful to business users. The problem was that the bank had depended on third parties to manually pull and clean up this data. Given the mandates above, this approach proved too slow to execute in a timely fashion, which in turn impacted the quality of the bank’s business capabilities for risk and customer service. The old approach did not create the index and other analyses “with the frequency that we wanted and examining the variables of interest to us,” explains Federico Solana, an analyst at the Banco Popular Quality of Information Office.

Creating the Quality Index was simply too time-consuming and costly. But not improving data delivery performance had a direct impact on decision making.

Automation proves key to better business processes

To speed up delivery of its Quality Index, Banco Popular determined it needed to automate the creation of great data: data that is trustworthy and timely. According to Tom Davenport, “you can’t be analytical without data and you can’t be really good at analytics without really good data” (Analytics at Work, Harvard Business Press, 2010, p. 23). Banco Popular felt that automating the tasks of analyzing and comparing variables would increase the value of data at lower cost and ensure a faster return on data.

In addition to fixing the Quality Index, Banco Popular needed to improve its business capabilities around risk and customer service automation. This effort aimed to improve the analysis of mortgages while reducing the cost of data, accelerating the return on data, and boosting business and IT productivity.

Everything, however, needed to start with the Quality Index. After the Quality Index was created for individuals, Banco Popular created a Quality of Information Index for Legal Entities and plans to extend the return on data by creating indexes for Products and Activities. For the index related to legal entities, the bank included variables aimed at preventing the consumption of capital as well as other variables used to calculate the probability of underpayments and to feed Basel models. Variables are classified as essential, required, or desirable. This evaluation of data quality allows for the subsequent definition of new policies and initiatives for transactions, the network of branches, and internal processes, among other areas. In addition, the bank is working on in-depth analysis of quality variables to improve its critical business processes, including mortgages.

Some Parting Remarks

In the end, Banco Popular has shown the way forward for analytics. In banking, the measures of performance are often known; what is problematic is ensuring the consistency of decision making across branches and locations. By working first on data quality, Banco Popular ensured that its measures are built on consistent data, and it can now focus its attention on improving underlying business effectiveness and efficiency.

Related links

Related Blogs

Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer

 

Posted in CIO, Data Governance

10 Insights From The Road To Data Governance


I routinely have the pleasure of working with Terri Mikol, Director of Data Governance, UPMC. Terri has been spearheading data governance for three years at UPMC. As a result, she has a wealth of insights to offer on this hot topic. Enjoy her top 10 lessons learned from UPMC’s data governance journey:

1. You already have data stewards.

Commonly, health systems think they can’t staff data governance the way UPMC has because of a lack of funding. In reality, people are already doing data governance everywhere across your organization! You don’t have to secure new headcount; you locate these people within the business, formalize data governance as part of their jobs, and provide them tools to improve and manage their efforts.

2. Multiple types of data stewards ensure all governance needs are being met.

Three types of data stewards were identified and tasked across the enterprise:

I. Data Steward. Create and maintain data/business definitions. Assist with defining data and mappings along with rule definition and data integrity improvement.

II. Application Steward. One steward is named per application sourcing enterprise analytics. Populate and maintain inventory, assist with data definition and prioritize data integrity issues.

III. Analytics Steward. Named for each team providing analytics. Populate and maintain inventory, reduce duplication and define rules and self-service guidelines.

3. Establish IT as an enabler.

IT, instead of taking action on data governance or being the data governor, has become an enabler of data governance by investing in and administering tools that support metadata definition and master data management.

4. Form a governance council.

UPMC formed a governance council of 29 executives. Yes, that’s a big number, but UPMC is a big organization. The council is clinically led: it is co-chaired by two CMIOs and includes Marketing, Strategic Planning, Finance, Human Resources, the Health Plan, and Research. The council signs off on and prioritizes policies. Decision-making authority has to come from somewhere.

5. Avoid slowing progress with process.

In these still-early days, only 15 minutes of monthly council meetings are spent on policy and guidelines; discussion and direction take priority. For example, a recent agenda item was “Length of Stay.” The council agreed a single owner would coordinate across Finance, Quality and Care Management to define and document an enterprise definition for “Length of Stay.”

6. Use examples.

Struggling to get buy-in from the business about the importance of data governance? An example everyone can relate to is "Test Patient." For years, in her business intelligence role, Terri worked with "Test Patient." Investigation revealed that these fake patients end up in places they should not. There was no standard for creating or removing test patients, which meant that test patients and their costs, outcomes, etc., were included in analysis and reporting that drove decisions inside and outside UPMC. The governance program created a policy for testing in production should the need arise.

7. Make governance personal through marketing.

Terri holds monthly round tables with business and clinical constituents. These have been a game changer: Once a month, for two hours, ten business invitees meet and talk about the program. Each attendee shares a data challenge, and Terri educates them on the program and illustrates how the program will address each challenge.

8. Deliver self-service.

Providing self-service empowers your users to gain access to, and control over, the data they need to improve their processes. The only way to deliver self-service business intelligence is to make metadata, master data, and data quality transparent and accessible across the enterprise.

9. IT can’t do it alone.

Initially, IT was resistant to giving up control, but now the team understands that it doesn’t have the knowledge or the time to effectively do data governance alone.

10. Don’t quit!

Governance can be complicated, and it may seem like little progress is being made. Terri keeps spirits high by reminding folks that the only failure is quitting.

Getting started? Assess the data governance maturity of your organization here: http://governyourdata.com/

Posted in Data First, Data Governance, Data Integration, Data Security

Raised Expectations and New Discoveries with Great Customer Data


We have a landing! On November 12, the Rosetta probe arrived at its destination, a comet 300 million miles away from Earth.

Fulfilling its duty after a 10-year journey, Rosetta dropped its lander, Philae, to gather data from the comet below.

Everything about the comet so far is both challenging and fascinating, from its advanced age (4.6 billion years) to its hard-to-pronounce name, Churyumov-Gerasimenko.

The size of Gerasimenko? Roughly that of lower Manhattan. The shape wasn’t the potato-like image some anticipated of a typical comet; instead, it had a form compared to that of a rubber duck, making the landing trickier than expected.

To add one more challenging feature, the comet was flying at 38,000 mph. The feat of landing the probe onto the comet has been compared to hitting a speeding bullet with another speeding bullet.

All of this would have been impossible if the ESA didn’t have serious data behind the lofty goal it set forth.

As this was happening, there was a quieter landing on the same day: Salesforce and LinkedIn paired up to publish research on marketing strategies, surveying more than 900 senior-level B2B and B2C marketers through their social networks about their roles, marketing trends, and the challenges they face.

This one finding stood out to me: “Only 17% of respondents said their company had fully integrated their customer data across all areas of the organization. However, 97% of those ‘fully integrated’ marketing leaders said they were at least somewhat effective at creating a cohesive customer journey across all touchpoints and channels.”

While not as many companies are integrating customer data the way they should, those that do feel the strong impact of the benefits. It’s like knowing the difference between interacting with a potato-shaped company (say, a B2C) and a rubber-duck-shaped company (a B2B).

Efficient customer data could help you learn how to land each one properly. While the methods for dealing with both might be similar, they’re not identical, and taking the wrong approach could mean a failed landing. One of the conclusions from the survey showed there is a “direct link between how well a marketer integrated customer data and the self-reported successes of that brand’s strategy.”

When interviewed by MSNBC about the comet landing, Bill Nye, also known as "the Science Guy," had many positive things to say about the historic event. One question he answered was why programs like the ESA exist; basically, why do we go to space?

Nye had two replies: “It raises the expectations of your society,” and “You’re going to make discoveries.”

Marketers armed with insights from powerful customer data can have their own "landing on a comet" moment. Properly integrated customer data means you’ll be making new discoveries about your own clientele while simultaneously raising the expectations of your business.

The world couldn’t progress forward without quality data, whether in the realm of retail or planetary science. We put a strong emphasis on validating and cleansing your customer data at the point of entry or the point of collection.

Check out a quick video demo here of three data quality solutions: Email Verification, Address Verification, and Phone Validation.
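
The three solutions named above are Informatica services; the sketch below is not their logic or API, just a hedged illustration of what validating at the point of entry means: a form handler applying basic checks before a record is accepted. The patterns are deliberately simplistic (real verification also confirms that a mailbox, address, or phone number actually exists).

```python
import re

# Rough point-of-entry checks -- illustrative stand-ins for the kind of
# verification services mentioned above, not their actual logic or APIs.

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(value: str) -> bool:
    """Basic syntactic check; a real service also verifies the mailbox exists."""
    return bool(EMAIL_RE.match(value.strip()))

def validate_phone(value: str) -> bool:
    """Accept 10 to 15 digits after stripping common punctuation."""
    digits = re.sub(r"[^\d]", "", value)
    return 10 <= len(digits) <= 15

def validate_address(fields: dict) -> bool:
    """Require the minimum parts needed before a real verification lookup."""
    return all(fields.get(k) for k in ("street", "city", "postal_code", "country"))

record = {"email": "jane@example.com", "phone": "(555) 123-4567",
          "address": {"street": "1 Main St", "city": "Springfield",
                      "postal_code": "01101", "country": "US"}}
ok = (validate_email(record["email"])
      and validate_phone(record["phone"])
      and validate_address(record["address"]))
print("accept" if ok else "send back to the form for correction")
```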

 

 

 

Posted in Data Quality, Data Services

How Email Marketers Can Keep Up With Changes to Their Industry


Email has come a long way since its beginning. In 1971, two years after the U.S. launched a rocket to the moon, programmer Raymond Tomlinson sent the first email: a message that read "QWERTYUIOP".

In 1991, when the World Wide Web was created, email then had the opportunity to evolve into the mainstream form of communication it is today.

The statistics for modern day email are staggering. Worldwide, there are 2.2 billion email users as of 2012, according to MarketingProfs.

With all these messages flying about, you know there’s going to be some email overload. Google’s Gmail alone has 425 million users worldwide. Email service providers (ESPs) know people have too much email to deal with, and there’s a lot of noise out there. More than 75% of the world’s email is spam.

Gmail is one of the applications that recently responded to this problem, and all email marketers need to take note.

On October 22, Google announced Inbox.

Google’s Inbox takes several steps to bring structure to the abundant world of email with these features:

  • Categorizes and bundles emails.
  • Highlights important content within the body of the email.
  • Allows users to customize messages by adding their own reminders.

This latest update to Gmail is just one small way that the landscape of email marketing and audience preferences is changing all the time.

As we integrate more technology into our daily lives, it only makes sense that we use digital messages as a means of communication more often. What will this mean for email in the future? How will marketers adjust to the new challenges email marketing will present at larger volumes, with audiences wanting more segmentation and personalization?


One easy way to stay on top of these and other changes to the e-mail landscape is talking to your peers and experts in the industry. Luckily, an opportunity is coming up — and it’s free.

Make sure you check out the All About eMail Virtual Conference & Expo on November 13. It’s a virtual event, which means you can attend without leaving your desk!

It’s a one-day event with the busy email marketer in mind. Register for free and be sure to join us for our presentation, “Maximizing Email Campaign Performance Through Data Quality.”

Other strategic sessions include email marketing innovations presented by Forrester Research, mobile email, ROI, content tips, email sending frequency, and much more. (See the agenda here.)

This conference is just one indication that email marketing is still relevant, but only if email marketers adjust to changing audience preferences. With humble beginnings in 1971, email has come a long way. Email marketing has the best ROI in the business, and it will continue to have value long into the future.

Posted in Customer Acquisition & Retention, Data Quality, Retail

Analytics Stories: A Financial Services Case Study

As I indicated in my last case study regarding competing on analytics, Thomas H. Davenport believes “business processes are among the last remaining points of differentiation.” For this reason, Davenport contends that businesses that create a sustainable right to win use analytics to “wring every last drop of value from their processes”. For financial services, the mission-critical areas needing process improvement center around improving the consistency of decision making and making the management of regulatory compliance more efficient and effective.

Why does Fannie Mae need to compete on analytics?

Fannie Mae is in the business of enabling people to buy, refinance, or rent homes. As part of this, Fannie Mae says it is all about keeping people in their homes and getting people into new homes. Foundational to this mission is the accurate collection and reporting of data for decision making and risk management. According to Tracy Stephan at Fannie Mae, their “business needs to have the data to make decisions in a more real time basis. Today, this is all about getting the right data to the right people at the right time”.

Fannie Mae claims that when the mortgage crisis hit, a lot of the big banks stopped lending, and this meant that Fannie Mae, among others, needed to pick up the slack. Their action here, however, caused the Federal Government to require them to report monthly and quarterly against goals that the Federal Government set for them. “This meant that there was not room for error in how data gets reported”. In the end, Fannie Mae says three business imperatives drove its need to improve its reporting and its business processes:

  1. To ensure that go-forward business decisions were made consistently using the most accurate business data available
  2. To avoid penalties by adhering to Dodd-Frank and other regulatory requirements established for it after the 2008 Global Financial Crisis
  3. To comply with reporting to the Federal Reserve and Wall Street regarding overall business risk as a function of data quality and accuracy, creditworthiness of loans, and risk levels of investment positions.

Delivering on these imperatives required Fannie Mae to change how it managed data

Given these business imperatives, IT leadership quickly realized it needed to enable the business to use data to truly drive better business processes from end to end of the organization. However, this meant enabling Fannie Mae’s business operations teams to more effectively and efficiently manage data. This led Fannie Mae to determine that it needed a single source of truth, whether for mortgage applications or for passing information securely to investors, which in turn required the ability to share the same data across every Fannie Mae repository.

But there was a problem. Fannie Mae needed clean and correct data collected and integrated from more than 100 data sources, and it determined that its current data processes could not scale to do so. It also determined that those processes would not allow it to meet its compliance reporting requirements. At the same time, Fannie Mae needed to deliver more proactive management of compliance. This required knowing how critical business data enters and flows through each of its systems, including how data is changed by multiple internal processing and reporting applications. Fannie Mae leadership also felt this was critical to ensure traceability to the individual user.

The solution

Per its discussions with business customers, Fannie Mae’s IT leadership determined that it needed to get real-time, trustworthy data to improve its business operations, business processes, and decision making. As noted, these requirements could not be met with its historical approaches to integrating and managing data.

Fannie Mae determined that it needed to create a platform that was highly available, scalable, and capable of largely automating its data quality management. At the same time, the platform needed to provide the ability to create a set of business glossaries with clear data lineage. In effect, Fannie Mae needed a single source of truth across all of its business systems. According to Tracy Stephan, IT Director, Fannie Mae, “Data quality is the key to the success of Fannie Mae’s mission of getting the right people into the right homes. Now all our systems look at the same data – that one source of truth – which gives us great comfort.” To learn more specifics about how Fannie Mae improved its business processes and demonstrated that it is truly “data driven”, please click on this video of their IT leadership.

Related links
Solution Brief: The Intelligent Data Platform
Related Blogs
Thomas Davenport Book “Competing On Analytics”
Competing on Analytics
The Business Case for Better Data Connectivity
The CFO Viewpoint upon Data
What an enlightened healthcare CEO should tell their CIO?

Twitter: @MylesSuer

Posted in CIO, Financial Services

Analytics Stories: A Healthcare Case Study

As I indicated in Competing on Analytics, if you ask CIOs today about the importance of data to their enterprises, they will likely tell you about their business’ need to “compete on analytics”, to deliver better business insights, and to drive faster business decision making. These have a high place on the business and CIO agendas, according to Thomas H. Davenport, because “at a time when firms in many industries offer similar products and use comparable technologies, business processes are among the last remaining points of differentiation.” For this reason, Davenport claims timely analytics enables companies to “wring every last drop of value from their processes”.

So is anyone showing the way on how to compete on analytics?

UMass Memorial Health Care is a great example of an enterprise that is using analytics to “wring every last drop of value from their processes”. However, before UMass could compete on data, it needed to create data that could be trusted by its leadership team.

Competing on analytics requires trustworthy data

At UMass, they found that they could not accurately measure the size of their patient care population. This is a critical metric for growing market share. Think about how hard it would be to operate any business without an accurate count of how many customers are being served. Lacking this information hindered UMass’ ability to make strategic market decisions and drive key business and clinical imperatives.

A key need at UMass was to determine a number of critical success factors for its business. This obviously included the size of the patient population, but it also included the composition of the patient population and the number of unique patients served by primary care physicians across each of its business locations. Without this knowledge, UMass found itself struggling to make effective decisions regarding its strategic direction, its clinical policies, and even its financial management. And all of these factors really matter in an era of healthcare reform.

Things proved particularly complex at UMass since it operates as what is called a “complex integrated delivery network”. This means that portions of its business effectively operate under different business models, which creates a data challenge in healthcare. Unlike other diversified enterprises, UMass needs an operating model ("the necessary level of business process integration and standardization for delivering its services to customers"[1]) that can support the different elements of its business yet be unified for integrative analysis. This matters because in UMass’ case there is a single denominator: the patient. And to be clear, while each of UMass’ organizations could depend on its own data to meet its needs, UMass lacked an integrative view of patients.

Departmental Data may be good for a department but not for the Enterprise

UMass had adequate data for each organization, such as for delivering patient care or billing within a specific department or hospital, but that data was inadequate for system-wide measures. Aggregation and analytics, which needed to combine data across systems and organizations, were stymied by data inconsistencies, incompletely populated fields, and other data quality problems between systems. These issues made it impossible to provide the analytics UMass’ senior managers needed. For example, UMass’ aggregated data contained duplicate patients: people who had been treated at different sites and had different medical record numbers, but who were in fact the same patients.
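
The post doesn’t describe how UMass resolved these duplicates, and real patient matching relies on much richer probabilistic rules. Still, a naive sketch makes the problem concrete: records that carry different medical record numbers but agree on a normalized name and date of birth are probably the same person. All identifiers below are made up.

```python
from collections import defaultdict

# Naive illustration of the duplicate-patient problem described above:
# records from different sites carry different medical record numbers (MRNs)
# but describe the same person. Real matching is probabilistic and far richer.

records = [
    {"mrn": "A-1001", "site": "Hospital A", "name": "Maria  Lopez", "dob": "1968-03-14"},
    {"mrn": "B-7743", "site": "Clinic B",   "name": "maria lopez",  "dob": "1968-03-14"},
    {"mrn": "A-2050", "site": "Hospital A", "name": "John Smith",   "dob": "1975-11-02"},
]

def match_key(rec: dict) -> tuple:
    """Normalize the name and pair it with date of birth as a crude match key."""
    name = " ".join(rec["name"].lower().split())
    return (name, rec["dob"])

groups = defaultdict(list)
for rec in records:
    groups[match_key(rec)].append(rec["mrn"])

for key, mrns in groups.items():
    if len(mrns) > 1:
        print(f"probable same patient {key}: MRNs {mrns}")
# -> probable same patient ('maria lopez', '1968-03-14'): MRNs ['A-1001', 'B-7743']
```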

A key need in creating the ability to compete on analytics was to measure and report on the number of primary care patients being treated across the entire healthcare system. UMass leadership saw this as a key planning and strategy metric because primary care patients today are the focus of investments in wellness and prevention programs, as well as a key source of specialty visits and inpatients. According to George Brenckle, Senior Vice President and CIO, UMass “had an urgent need for improved clinical and business intelligence across all our operations; we needed an integrated view of patient information, encounters, providers, and UMass Memorial locations to support improved decision making, advance the quality of patient care, and increase patient loyalty. To put the problem into perspective, we have more than 100 applications—some critical, some not so critical—and our ultimate ambition was to integrate all of these areas of business, leverage analytics, and drive clinical and operational excellence.”

The UMass Solution

UMass solved the above issues by creating an integrated view of patient information, encounters, providers, and UMass Memorial locations. This allowed UMass to compute the number of patients cared for by primary care physicians. To make this work, the solution merged data from the core hospital information applications and resolved the quality issues that had prevented UMass from deriving the primary care patient count. Armed with this, data integration helped UMass Memorial improve its clinical outcomes, grow its patient population, increase process efficiency, and ultimately maximize its return on data. As well as gaining a reliable measure of its primary care patient population, UMass was now able to determine accurate counts of unique patients served by its hospitals (3.2 million), active patients (those treated within the last three years, approximately 1.7 million), and unique providers (approximately 24,000).
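
Once encounters are resolved to a single enterprise patient identifier, the counts cited above reduce to straightforward aggregations. The sketch below assumes a hypothetical encounter table; the column names are illustrative, not UMass’ actual schema.

```python
import pandas as pd

# Hypothetical sketch of the counts cited above, assuming encounters have
# already been resolved to a single enterprise patient_id.

encounters = pd.DataFrame({
    "patient_id":  ["P1", "P1", "P2", "P3"],
    "provider_id": ["D1", "D2", "D1", "D3"],
    "visit_date":  pd.to_datetime(["2014-06-01", "2012-01-15", "2014-09-20", "2010-05-05"]),
})

as_of = pd.Timestamp("2014-11-01")
unique_patients  = encounters["patient_id"].nunique()
active_patients  = encounters.loc[encounters["visit_date"] >= as_of - pd.DateOffset(years=3),
                                  "patient_id"].nunique()   # treated in the last three years
unique_providers = encounters["provider_id"].nunique()

print(unique_patients, active_patients, unique_providers)  # 3, 2, 3
```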

According to Brenckle, data integration transformed their analytical capabilities and decision making. “We know who our primary care patients are and how many there are of them, whether the volume of patients is rising or decreasing, how many we are treating in an ambulatory or acute care setting, and what happens to those patients as they move through the healthcare system. We are able to examine which providers they saw and at which location. This data is vital to improving clinical outcomes, growing the patient population, and increasing efficiency.”

Related links

Solution Brief: The Intelligent Data Platform
Details on the UMASS Solution

Related Blogs

Thomas Davenport Book “Competing On Analytics”
Competing on Analytics
The Business Case for Better Data Connectivity
CIO explains the importance of Big Data to Healthcare
The CFO Viewpoint upon Data
What an enlightened healthcare CEO should tell their CIO?
Twitter: @MylesSuer

 


 

[1] Enterprise Architecture as Business Strategy, Jeanne Ross, Harvard Business School Press, Page 8
Posted in CIO, Data Governance, Data Quality

More Evidence That Data Integration Is Clearly Strategic


A recent study from Epicor Software Corporation surveyed more than 300 IT and business decision-makers.  The study results highlighted the biggest challenges and opportunities facing Australian businesses. The independent research report “From Business Processes to Product Distribution” was based upon a survey of Australian organizations with more than 20 employees.

Key findings from the report include:

  • 65% of organizations cite data processing and integration as hampering distribution capability, with nearly half claiming their existing software and ERP is not suitable for distribution.
  • Nearly two-thirds of enterprises have some form of distribution process, involving products or services.
  • More than 80% of organizations have at least some problem with product or service distribution.
  • More than 50% of CIOs in organizations with distribution processes believe better distribution would increase revenue and optimize business processes, with a further 38% citing reduced operating costs.

The core finding: "With better data integration comes better automation and decision making."

This report is one of many I’ve seen over the years that come to the same conclusion. Most of those involved with the operations of the business don’t have access to the key data points they need; thus they can’t automate tactical decisions, and they can’t “mine” the data to understand the true state of the business.

The more businesses deal with building and moving products, the more data integration becomes an imperative. As stated in this survey, as well as in others, a large majority cite “data processing and integration as hampering distribution capabilities.”

Of course, these issues go well beyond Australia. Most enterprises I’ve dealt with have some gap between the need to share key business data to support business processes and decision support, and what currently exists in terms of data integration capabilities.

The focus here is on the multiple kinds of value that data integration can bring. These include:

  • The ability to track everything as it moves from manufacturing, to inventory, to distribution, and beyond. You can then bind this tracking data to core business processes, such as automatically reordering parts to make more products and refill inventory (see the sketch after this list).
  • The ability to see into the past, and to see into the future. Emerging approaches to predictive analytics allow businesses to finally see into the future, and also to see what went truly right and truly wrong in the past.
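
As a hedged, generic illustration of the first point above (not any specific product’s behavior), an automatic reorder decision only becomes possible once inventory, open purchase orders, and demand history from separate systems are integrated into one view. The thresholds and field names are hypothetical.

```python
# Generic illustration of binding integrated data to a business process:
# an automatic reorder decision that needs inventory, open purchase orders,
# and demand figures pulled together from separate systems.

def should_reorder(on_hand: int, on_order: int, daily_demand: float,
                   lead_time_days: int, safety_stock: int) -> bool:
    """Reorder when projected stock at the end of the lead time dips below safety stock."""
    projected = on_hand + on_order - daily_demand * lead_time_days
    return projected < safety_stock

# Each argument typically lives in a different system (ERP, warehouse management,
# sales history), which is why the decision cannot be automated until the data
# is integrated.
part = {"on_hand": 120, "on_order": 0, "daily_demand": 15.0,
        "lead_time_days": 10, "safety_stock": 40}
if should_reorder(**part):
    print("raise purchase order for part")
```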

While data integration technology has been around for decades, most businesses that both manufacture and distribute products have not taken full advantage of it. The reasons range from perceptions about affordability to the skills required to maintain the data integration flows. However, the truth is that you really can’t afford to ignore data integration technology any longer. It’s time to create and deploy a data integration strategy, using the right technology.

This survey is just an instance of a pattern.  Data integration was considered optional in the past.  With today’s emerging notions around the strategic use of data, clearly, it’s no longer an option.

Posted in Data First, Data Integration, Data Integration Platform, Data Quality

At Valspar, Data Management is Key to Controlling Purchasing Costs


“Raw materials costs are the company’s single largest expense category,” said Steve Jenkins, Global IT Director at Valspar, at MDM Day in London. “Data management technology can help us improve business process efficiency, manage sourcing risk and reduce RFQ cycle times.”

Valspar is a $4 billion global manufacturing company that produces a portfolio of leading paint and coating brands. At the end of 2013, the 200-year-old company celebrated record sales and earnings. They also completed two acquisitions. Valspar now has 10,000 employees operating in 25 countries.

As is the case for many global companies, growth creates complexity. “Valspar has multiple business units with varying purchasing practices. We source raw materials from 1,000s of vendors around the globe,” shared Steve.

“We want to achieve economies of scale in purchasing to control spending,” Steve said as he shared Valspar’s improvement objectives. “We want to build stronger relationships with our preferred vendors. Also, we want to develop internal process efficiencies to realize additional savings.”

Poorly managed vendor and raw materials data was impacting Valspar’s buying power


The Valspar team, which sharply focuses on productivity, had an “Aha” moment. “We realized our buying power was limited by the age and quality of available vendor data and raw materials data,” revealed Steve.

The core vendor and raw materials data that should have been the same across multiple systems wasn’t; data was often missing or wrong. This made it difficult to calculate the total spend on raw materials, and hard to calculate the total cost of expedited freight of raw materials. So employees used a manual, time-consuming and error-prone process to consolidate vendor and raw materials data for reporting.

These data issues were getting in the way of achieving their improvement objectives. Valspar needed a data management solution.

Valspar needed a single trusted source of vendor and raw materials data


The team chose Informatica MDM, master data management (MDM) technology, as their enterprise hub for vendors and raw materials. It will manage this data centrally on an ongoing basis, giving Valspar a single trusted source of vendor and raw materials data.

Informatica PowerCenter will access data from multiple source systems. Informatica Data Quality will profile the data before it goes into the hub. Then, after Informatica MDM does its magic, PowerCenter will deliver clean, consistent, connected and enriched data to target systems.
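
The paragraph above names Informatica products; the sketch below is not their API, just a generic illustration of the “profile the data before it goes into the hub” step: computing per-column null rates and distinct counts so that quality issues surface before records are mastered.

```python
import pandas as pd

# Generic data-profiling sketch -- not the Informatica Data Quality product,
# just an illustration of profiling source data before it enters the hub:
# per-column null rates and distinct counts flag fields that need cleansing.

vendors = pd.DataFrame({
    "vendor_id":   ["V001", "V002", "V003", "V003"],
    "vendor_name": ["Acme Chemicals", "acme chemicals", None, "Baltic Resins"],
    "country":     ["US", "US", "DE", None],
})

profile = pd.DataFrame({
    "null_rate":       vendors.isna().mean(),
    "distinct_values": vendors.nunique(dropna=True),
})
print(profile)
# Duplicate vendor_id values or high null rates would be fixed (or routed to
# data stewards) before the records are mastered in the hub.
```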

Better vendor and raw materials data management results in cost savings


Valspar expects to gain the following business benefits:

  • Streamline the RFQ process to accelerate raw materials cost savings
  • Reduce the total number of raw materials SKUs and vendors
  • Increase productivity of staff focused on pulling and maintaining data
  • Leverage consistent global data visibility to:
    • increase leverage during contract negotiations
    • improve acquisition due diligence reviews
    • facilitate process standardization and reporting

 

Valspar’s vision is to transform data and information into trusted organizational assets

“Mastering vendor and raw materials data is Phase 1 of our vision to transform data and information into trusted organizational assets,” shared Steve. In Phase 2 the Valspar team will master customer data so they have immediate access to the total purchases of key global customers. In Phase 3, Valspar’s team will turn their attention to product or finished goods data.

Steve ended his presentation with some advice. “First, include your business counterparts in the process as early as possible. They need to own and drive the business case as well as the approval process. Also, master only the vendor and raw materials attributes required to realize the business benefit.”


Want more? Download the Total Supplier Information Management eBook. It covers:

  • Why your fragmented supplier data is holding you back
  • The cost of supplier data chaos
  • The warning signs you need to be looking for
  • How you can achieve Total Supplier Information Management

 

Posted in Business/IT Collaboration, Data Integration, Data Quality, Manufacturing, Master Data Management, Operational Efficiency, PowerCenter, Vertical

8 Information Management Challenges for UDI Compliance

“My team spends far too much time pulling together medical device data that’s scattered across different systems and reconciling it in spreadsheets to create compliance reports.” This quotation from a regulatory affairs leader at a medical device manufacturer highlights the impact of poorly managed medical device data on compliance reporting, such as the reports needed for the FDA’s Unique Device Identification (UDI) regulation. In fact, an overreliance on manual, time-consuming processes brings an increased risk of human error in UDI compliance reports.

Is your compliance team manually reconciling data for UDI compliance reports?

If you are an information management leader working for a medical device manufacturer, and your compliance team needs quick and easy access to medical device data for UDI compliance reporting, I have five questions for you:

1) How many Class III and Class II devices do you have?
2) How many systems or reporting data stores contain data about these medical devices?
3) How much time do employees spend manually fixing data errors before the data can be used for reporting?
4) How do you plan to manage medical device data so the compliance team can quickly and easily produce accurate reports for UDI Compliance?
5) How do you plan to help the compliance team manage the multi-step submission process?


For some helpful advice from data management experts, watch this on-demand webinar “3 Enterprise Information Management (EIM) Best Practices for UDI Compliance.”

The deadline to submit the first UDI compliance report to the FDA for Class III devices is September 24, 2014. But the medical device data needed to produce the report is typically scattered among different internal systems, such as Enterprise Resource Planning (ERP) systems (e.g., SAP and JD Edwards), Product Lifecycle Management (PLM), and Manufacturing Execution Systems (MES), as well as external third-party device identifiers.

The traditional approach to dealing with poorly managed data is for the compliance team to burn the midnight oil bringing together and then manually reconciling all the medical device data in a spreadsheet. And they have to do this each and every time a compliance report is due. The good news is that your compliance team doesn’t have to.

Many medical device manufacturers are leveraging their existing data governance programs, supported by a combination of data integration, data quality and master data management (MDM) technology, to eliminate the need for manual data reconciliation. They are centralizing their medical device data management so they have a single source of trusted medical device data for UDI compliance reporting as well as other compliance and revenue-generating initiatives.
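
What that centralization replaces can be sketched loosely: instead of reconciling spreadsheets by hand, device attributes scattered across ERP, PLM, and MES extracts are joined on a shared device identifier and checked for completeness before feeding a compliance report. The identifiers, fields, and extracts below are entirely hypothetical and stand in for, rather than represent, MDM tooling.

```python
import pandas as pd

# Hypothetical sketch of consolidating device attributes scattered across
# source-system extracts (ERP, PLM, MES) into one record per device identifier,
# in place of manual spreadsheet reconciliation. Fields are illustrative.

erp = pd.DataFrame({"device_id": ["DI-100", "DI-101"],
                    "gmdn_term": ["Infusion pump", "Catheter"]})
plm = pd.DataFrame({"device_id": ["DI-100", "DI-101"],
                    "model_number": ["IP-9", "CT-3"]})
mes = pd.DataFrame({"device_id": ["DI-100", "DI-101"],
                    "lot_controlled": [True, False]})

consolidated = erp.merge(plm, on="device_id").merge(mes, on="device_id")

# Simple completeness check before the records feed a compliance report.
missing = consolidated.isna().any(axis=1)
print(consolidated[~missing])
```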


During this on-demand webinar, Kelle O’Neal, Managing Partner at First San Francisco Partners, covers the eight information management challenges for UDI compliance as well as best practices for medical device data management.

Bryan Balding, MDM Solution Specialist at Informatica, shows you how to apply these best practices with the Informatica UDI Compliance Solution.

You’ll learn how to automate the process of capturing, managing and sharing medical device data to make it quicker and easier to create the reports needed for UDI compliance on an ongoing basis.

 

 


Also, we just published a joint whitepaper with First San Francisco Partners, Information Management FAQ for UDI: 20 Questions & Answers about Complying with the FDA Requirement for Unique Device Identification (UDI). Get answers to questions such as:

  • What is needed to support an EIM strategy for UDI compliance?
  • What role does data governance play in UDI compliance?
  • What are the components of a successful data governance program?
  • Why should I centralize my business-critical medical device data?
  • What does the architecture of a UDI compliance solution look like?

I invite you to download the UDI compliance FAQ now and share your feedback in the comments section below.

Posted in Data Governance, Data Integration, Data Quality, Enterprise Data Management, Life Sciences, Manufacturing, Master Data Management, Vertical