Category Archives: Data Quality

The Total Data Quality Movement is Long Overdue!

Total Quality Management, as it relates to products and services, has its roots in the 1920s.  The 1960s provided a huge boost with the rise of gurus such as Deming, Juran and Crosby.  Whilst each had their own contribution, common principles for TQM that emerged in this era remain in practice today:

  1. Management (C-level management) is ultimately responsible for quality
  2. Poor quality has a cost
  3. The earlier in the process you address quality, the lower the cost of correcting it
  4. Quality should be designed into the system

So for 70 years industry in general has understood the cost of poor quality, and how to avoid these costs.  So why is it that in 2014 I was party to a conversation that included the statement:

“I only came to the conference to see if you (Informatica) have solved the data quality problem.”

Ironically, the TQM movement was only possible because of the analysis of data, yet data is the one aspect that is widely ignored during TQM implementations.  So much for ‘Total’ Quality Management.

This person is not alone in their thoughts.  Many are waiting for the IT knight in shining armour, the latest and greatest data quality tools secured on their majestic steed, to ride in and save the day.  Data quality dragon slain, cold drinks all round, job done.  This will not happen.  Put data quality in the context of total quality management principles to see why: a single department cannot deliver data quality alone, regardless of the strength of their armoury.

I am not sure anyone would demand a guarantee of a high quality product from their machinery manufacturers.  Communications providers cannot deliver high quality customer service organisations through technology alone.  These suppliers will have an influence on final product quality, but everyone understands equipment cannot deliver in isolation.  Good quality raw materials, staff who genuinely take pride in their work and the correct incentives are key to producing high quality products and services.

So why is there an expectation that data quality can be solved by tools alone?

At a minimum senior management support is required to push other departments to change their behaviour and/or values.  So why aren’t senior management convinced that data quality is a problem worth their attention the way product & service quality is?

The fact that poor data quality has a high cost is reasonably well known via anecdotes.  However, cost has not been well quantified, and hence fails to grab the attention of senior management.  A 2005 paper by Richard Marsh[i] states:  “Research and reports by industry experts, including Gartner Group, PriceWaterhouseCoopers and The Data Warehousing Institute clearly identify a crisis in data quality management and a reluctance among senior decision makers to do enough about it.”  Little has changed since 2005.

However, we are living in a world where data generation, processing and consumption are increasing exponentially.  With all the hype and investment in data, we face the grim prospect of fully embracing an age of data-driven-everything founded on a very poor quality raw material.  Data quality is expected to be applied after generation, during the analytic phase.  How much will that cost us?  In order to function effectively, our new data-driven world must have high quality data running through every system and activity in an organization.

The Total Data Quality Movement is long overdue.

Only when every person in every organization understands the value of the data, do we have a chance of collectively solving the problem of poor data quality.  Data quality must be considered from data generation, through transactional processing and analysis right until the point of archiving.

Informatica DQ supports IT departments in automating data correction where possible, and highlighting poor data for further attention where automation is not possible.  MDM plays an important role in sustaining high quality data.  Informatica tools empower the business to share the responsibility for total data quality.

We are ready for Total Data Quality, but continue to await the Total Data Quality Movement to get off the ground.

(If you do not have time to wait for TDQM to gain traction, we can help you measure the cost of poor quality data in your organization to win corporate buy-in now.)


[i] http://www.palgrave-journals.com/dbm/journal/v12/n2/pdf/3240247a.pdf


In a Data First World, Knowledge Really Is Power!


I have two quick questions for you. First, can you name the top three factors that will increase your sales or boost your profit? And second, are you sure about that?

That second question is a killer because most people — no matter if they’re in marketing, sales or manufacturing — rely on incomplete, inaccurate or just plain wrong information. Regardless of industry, we’ve been fixated on historic transactions because that’s what our systems are designed to provide us.

“Moneyball: The Art of Winning an Unfair Game” gives a great example of what I mean. The book (not the movie) describes Billy Beane hiring MBAs to map out the factors that would win a baseball game. They discovered something completely unexpected: That getting more batters on base would tire out pitchers. It didn’t matter if batters had multi-base hits, and it didn’t even matter if they walked. What mattered was forcing pitchers to throw ball after ball as they faced an unrelenting string of batters. Beane stopped looking at RBIs, ERAs and even home runs, and started hiring batters who consistently reached first base. To me, the book illustrates that the most useful knowledge isn’t always what we’ve been programmed to depend on or what is delivered to us via one app or another.

For years, people across industries have turned to ERP, CRM and web analytics systems to forecast sales and acquire new customers. By their nature, such systems are transactional, forcing us to rely on history as the best predictor of the future. Sure it might be helpful for retailers to identify last year’s biggest customers, but that doesn’t tell them whose blogs, posts or Tweets influenced additional sales. Isn’t it time for all businesses, regardless of industry, to adopt a different point of view — one that we at Informatica call “Data-First”? Instead of relying solely on transactions, a data-first POV shines a light on interactions. It’s like having a high knowledge IQ about relationships and connections that matter.

A data-first POV changes everything. With it, companies can unleash the killer app, the killer sales organization and the killer marketing campaign. Imagine, for example, if a sales person meeting a new customer knew that person’s concerns, interests and business connections ahead of time? Couldn’t that knowledge — gleaned from Tweets, blogs, LinkedIn connections, online posts and transactional data — provide a window into the problems the prospect wants to solve?

That’s the premise of two startups I know about, and it illustrates how a data-first POV can fuel innovation for developers and their customers. Today, we’re awash in data-fueled things that are somehow attached to the Internet. Our cars, phones, thermostats and even our wristbands are generating and gleaning data in new and exciting ways. That’s knowledge begging to be put to good use. The winners will be the ones who figure out that knowledge truly is power, and wield that power to their advantage.


The King of Benchmarks Rules the Realm of Averages

A mid-sized insurer recently approached our team for help. They wanted to understand how they fell short in making their case to their executives. Specifically, they proposed that fixing their customer data was key to supporting the executive team’s highly aggressive 3-year growth plan. (This plan was 3x today’s revenue).  Given this core organizational mission – aside from being a warm and fuzzy place to work supporting its local community – the slam dunk solution to help here is simple.  Just reducing the data migration effort around the next acquisition or avoiding the ritual annual, one-off data clean-up project already pays for any tool set enhancing data acquisitions, integration and hygiene.  Will it get you to 3x today’s revenue?  It probably won’t.  What will help are the following:

Making the Math Work (courtesy of Scott Adams)

Hard cost avoidance via software maintenance or consulting elimination is the easy part of the exercise. That is why CFOs love it and focus so much on it.  It is easy to grasp and immediate (aka next quarter).

Soft cost reductions, like staff redundancies, are a bit harder.  Despite them being viable, in my experience very few decision makers want to work on a business case to lay off staff.  My team has had one so far. They look at these savings as freed up capacity, which can be re-deployed more productively.  Productivity is also a bit harder to quantify, as you typically have to understand how data travels and gets worked on between departments.

However, revenue effects are even harder to quantify and more esoteric to many people, as they include projections.  They are often considered “soft” benefits, although they outweigh the other areas by 2-3 times in terms of impact.  Ultimately, every organization runs its strategy based on projections (see the insurer in my first paragraph).

The hardest to quantify is risk. Not only is it based on projections – often from a third party (Moody’s, TransUnion, etc.) – but few people understand it. More often, clients don’t even accept you investigating this area if you don’t have an advanced degree in insurance math. Nevertheless, risk can generate extra “soft” cost avoidance (beefing up a reserve account balance creates opportunity cost) but also revenue (realizing a risk premium previously ignored).  Often risk profiles change due to relationships, which can be linked to new “horizontal” information (transactional attributes) or “vertical” (hierarchical) information from the parent-child relationships of an entity and the parent’s or children’s transactions.

Given the above, my initial advice to the insurer would be to look at the heartache of their last acquisition, use a benchmark for IT productivity from improved data management capabilities (typically 20-26% – Yankee Group) and there you go.  This is just the IT side so consider increasing the upper range by 1.4x (Harvard Business School) as every attribute change (last mobile view date) requires additional meetings on a manager, director and VP level.  These people’s time gets increasingly more expensive.  You could also use Aberdeen’s benchmark of 13hrs per average master data attribute fix instead.
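
To make that arithmetic concrete, here is a minimal sketch of the acquisition-scenario estimate. The 20-26% IT productivity range (Yankee Group) and the 1.4x business-side multiplier (Harvard Business School) are the benchmarks cited above; the baseline migration cost is a hypothetical input you would replace with your own number.

```python
# A minimal sketch of the acquisition-scenario savings estimate described above.
# Benchmark figures come from the text; the baseline cost is a hypothetical input.

last_migration_cost = 2_000_000              # hypothetical: cost of the last acquisition's data work
it_productivity_low, it_productivity_high = 0.20, 0.26   # Yankee Group IT productivity benchmark
business_multiplier = 1.4                    # HBS multiplier for business-side (manager/director/VP) effort

low_estimate = last_migration_cost * it_productivity_low
high_estimate = last_migration_cost * it_productivity_high * business_multiplier

print(f"Estimated savings on the next acquisition: ${low_estimate:,.0f} - ${high_estimate:,.0f}")
```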

You can also look at productivity areas, which are typically overly measured.  Let’s assume a call center rep spends 20% of the average call time of 12 minutes (depending on the call type – account or bill inquiry, dispute, etc.) understanding

  • Who the customer is
  • What he bought online and in-store
  • If he tried to resolve his issue on the website or store
  • How he uses equipment
  • What he cares about
  • If he prefers call backs, SMS or email confirmations
  • His response rate to offers
  • His/her value to the company

If he spends that 20% of every call stringing together insights from five applications and twelve screens – the same information in every application he touches – instead of seeing it in one frame within seconds, you have just freed up 20% of his hourly compensation.
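
As a rough sketch rather than a definitive model, that productivity math can be run like this. The 12-minute average call and the 20% share come from the example above; the call volume, hourly compensation and workdays per year are hypothetical assumptions.

```python
# A minimal sketch of the call-center productivity estimate described above.

avg_call_minutes = 12           # average call length (from the text)
share_spent_gathering = 0.20    # share of each call spent stringing together insights (from the text)
calls_per_rep_per_day = 40      # hypothetical assumption
hourly_compensation = 22.0      # hypothetical fully loaded compensation, $/hour
workdays_per_year = 230         # hypothetical assumption

minutes_freed_per_day = avg_call_minutes * share_spent_gathering * calls_per_rep_per_day
hours_freed_per_year = minutes_freed_per_day / 60 * workdays_per_year
annual_saving_per_rep = hours_freed_per_year * hourly_compensation

print(f"Hours freed per rep per year: {hours_freed_per_year:,.0f}")
print(f"Annual productivity value per rep: ${annual_saving_per_rep:,.0f}")
```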

Then look at the software, hardware, maintenance and ongoing management of the likely customer record sources (pick the worst and best quality one based on your current understanding), which will end up in a centrally governed instance.  Per DAMA, every duplicate record will cost you between $0.45 (party) and $0.85 (product) per transaction (edit touch).  At the very least each record will be touched once a year (likely 3-5 times), so multiply your duplicated record count by that and you have your savings from just de-duplication.  You can also use Aberdeen’s benchmark of 71 serious errors per 1,000 records, meaning the chance of transactional failure and required effort (% of one or more FTE’s daily workday) to fix is high.  If this does not work for you, run a data profile with one of the many tools out there.
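
Here is a back-of-the-envelope version of that de-duplication calculation. The per-touch cost range and the one-to-five touches per year are the DAMA figures cited above; the duplicate record count is a hypothetical placeholder you would replace with the output of a data profile.

```python
# A rough sketch of the de-duplication savings math described above.

duplicate_records = 250_000                              # hypothetical: duplicates found by profiling
cost_per_touch_low, cost_per_touch_high = 0.45, 0.85     # DAMA cost per transaction (edit touch)
touches_per_year_low, touches_per_year_high = 1, 5       # touches per record per year (from the text)

low_estimate = duplicate_records * cost_per_touch_low * touches_per_year_low
high_estimate = duplicate_records * cost_per_touch_high * touches_per_year_high

print(f"Annual de-duplication savings: ${low_estimate:,.0f} - ${high_estimate:,.0f}")
```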

If the sign says it – do it!

If standardization of records (zip codes, billing codes, currency, etc.) is the problem, ask your business partner how many customer contacts (calls, mailings, emails, orders, invoices or account statements) fail outright and/or require validation because of these attributes.  Once again, if you apply the productivity gains mentioned earlier, there are your savings.  If you look at the number of orders whose payment or revenue recognition gets delayed by a week or a month, and at the average order amount, you can quantify how much profit (multiply by operating margin) you would be able to pull into the current financial year from the next one.
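
A quick sketch of that delayed-order calculation, with all inputs as hypothetical placeholders to be replaced by your own order data and operating margin:

```python
# A back-of-the-envelope version of the delayed-order profit pull-forward described above.

orders_delayed_per_year = 1_200    # hypothetical: orders delayed by bad address/billing data
average_order_amount = 850.0       # hypothetical average order value
operating_margin = 0.12            # hypothetical operating margin

profit_pulled_forward = orders_delayed_per_year * average_order_amount * operating_margin
print(f"Profit recognized earlier: ${profit_pulled_forward:,.0f}")
```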

The same is true for speeding up the introduction of a new product, or a change to one, so that it generates profits earlier.  Note that the time value of funds realized earlier is too small to matter in most instances, especially in the current interest rate environment.

If emails bounce back or snail mail gets returned (no such address, no such name at this address, no such domain, no such user at this domain), (e)mail verification tools can help reduce the bounces. If every mail piece (forget email due to its minuscule cost) costs $1.25 – and this will vary by type of mailing (catalog, promotional post card, statement letter) – incorrect or incomplete records are wasted cost.  If you can, use the fully loaded print cost, including 3rd party data prep and returns handling.  You will never capture all cost inputs, but take a conservative stab.

If it was an offer, reduced bounces should also improve your response rate (also true for email now). Prospect mail response rates are typically around 1.2% (Direct Marketing Association), whereas phone response rates are around 8.2%.  If you know that your current response rate is half that (for argument’s sake) and you send out 100,000 emails of which 1.3% (Silverpop) have customer data issues, then fixing 81-93% of them (our experience) will drop the bounce rate to under 0.3%, meaning more emails will arrive and be relevant. This, in turn, multiplied by a standard conversion rate of 3% (MarketingSherpa; industry and channel specific) and the average order (your data), multiplied by operating margin, gets you a benefit value for revenue.
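
Put together, the mail-waste and campaign-revenue math from the last two paragraphs might look like the following sketch. The $1.25 per piece, 1.3% data-issue rate (Silverpop), 81-93% fix rate and 3% conversion rate (MarketingSherpa) come from the text; the send volumes, average order value and operating margin are hypothetical assumptions.

```python
# A simplified sketch of the mail-waste and campaign-revenue estimates described above.

mail_pieces_sent = 50_000          # hypothetical physical mailing volume
cost_per_piece = 1.25              # fully loaded cost per mail piece (from the text)
bad_record_rate = 0.013            # share of records with customer data issues (Silverpop)
wasted_mail_cost = mail_pieces_sent * bad_record_rate * cost_per_piece

emails_sent = 100_000              # campaign size used in the example above
fix_rate_low, fix_rate_high = 0.81, 0.93   # share of bad records fixable (from the text)
conversion_rate = 0.03             # standard conversion rate (MarketingSherpa)
average_order = 120.0              # hypothetical average order value
operating_margin = 0.15            # hypothetical operating margin

def incremental_profit(fix_rate):
    extra_delivered = emails_sent * bad_record_rate * fix_rate
    return extra_delivered * conversion_rate * average_order * operating_margin

print(f"Wasted print/mail cost avoided: ${wasted_mail_cost:,.0f}")
print(f"Incremental campaign profit: ${incremental_profit(fix_rate_low):,.0f} - "
      f"${incremental_profit(fix_rate_high):,.0f}")
```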

If product data and inventory carrying cost or supplier spend are your issue, find out how many supplier shipments you receive every month, the average cost of a part (or cost range), apply the Aberdeen master data failure rate (71 in 1,000) to use cases around lack of or incorrect supersession or alternate part data, to assess the value of a single shipment’s overspend.  You can also just use the ending inventory amount from the 10-k report and apply 3-10% improvement (Aberdeen) in a top-down approach. Alternatively, apply 3.2-4.9% to your annual supplier spend (KPMG).
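
A top-down sketch of those product and supplier benchmarks, assuming placeholder values for ending inventory and annual supplier spend. The 3-10% inventory improvement (Aberdeen) and 3.2-4.9% of supplier spend (KPMG) are the figures cited above.

```python
# A quick top-down sketch of the product/supplier-data opportunity described above.

ending_inventory = 40_000_000        # hypothetical ending inventory from the 10-K
annual_supplier_spend = 120_000_000  # hypothetical annual supplier spend

inventory_benefit = (ending_inventory * 0.03, ending_inventory * 0.10)          # Aberdeen 3-10%
spend_benefit = (annual_supplier_spend * 0.032, annual_supplier_spend * 0.049)  # KPMG 3.2-4.9%

print(f"Inventory carrying-cost opportunity: ${inventory_benefit[0]:,.0f} - ${inventory_benefit[1]:,.0f}")
print(f"Supplier spend opportunity: ${spend_benefit[0]:,.0f} - ${spend_benefit[1]:,.0f}")
```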

You could also investigate the expediting or return cost of shipments in a period due to incorrectly aggregated customer forecasts, wrong or incomplete product information or wrong shipment instructions in a product or location profile. Apply Aberdeen’s 5% improvement rate and there you go.

Consider that a North American utility told us that just fixing their 200 Tier1 suppliers’ product information achieved an increase in discounts from $14 to $120 million. They also found that fixing one basic out of sixty attributes in one part category saves them over $200,000 annually.

So what ROI percentages would you find tolerable or justifiable for, say, an EDW project, a CRM project, a new claims system, etc.? What would the annual savings or new revenue be that you were comfortable with?  What was the craziest improvement you have seen come to fruition, which nobody expected?

Next time, I will add some more “use cases” to the list and look at some philosophical implications of averages.


Driving Third Wave Businesses: Ensuring Your Business Has The Right To Win

As adjunct university faculty, I get to talk to students about how business strategy increasingly depends upon understanding how to leverage information. To make the discussion more concrete, I share with students the work of Alvin Toffler. In The Third Wave, Toffler asserts that we live in a world where competition will increasingly turn on the currency and usability of information.

In a recent interview, Toffler said that “given the acceleration of change; companies, individuals, and governments base many of their daily decisions on obsoledge—knowledge whose shelf life has expired.” He continues by stating that “companies everywhere are trying to put a price on certain forms of intellectual property. But if…knowledge is at the core of the money economy, then we need to understand knowledge much better than we do now. And tiny insights can yield huge outputs.”

Driving better information management in the information age


To me, this drives to three salient conclusions for information age businesses:

  1. Information needs to drive further down organizations because top decision makers do not have the background to respond at the pace of change.
  2. Information needs to be available faster, which means that we need to reduce the processing time for structured and unstructured information sources.
  3. Information needs to be available when the organization is ready for it. For multinational enterprises this means “Always On” 24/7 across multiple time zones on any device.

Effective managers today are effective managers of people and information


Effective managers today are effective managers of information. Because processing may take too much time, Toffler’s remarks suggest to me we need to consider human information—the ideas and communications we share every day—within the mix of getting access to the right information when it is needed and where it is needed. Now more than ever is the time for enterprises to ensure their decision makers have the timely information to make better business decisions when they are relevant. This means that unstructured data, a non-trivial majority of business information, needs to be made available to business users and related to existing structured sources of data.

Derick Abell says that “for (management) control to be effective, data must be timely and provided at interval that allows effective intervention”. Today this is a problem for most information businesses. As I see it, information optimization is the basis of powering the enterprise through “Third Wave” business competition. Organizations that have the “right to win” will have as a core capability better-than-class access to current information for decision makers.

Putting in place a winning information management strategy

If you talk to CIOs today, they will tell you that they are currently facing 4 major information age challenges.

  • Mobility—Enabling their users to view data anytime, anyplace, and on any device
  • Information Trust—Making data dependable enough for business decisions as well as governing data across all business systems.
  • Competing on Analytics—Getting information to business users fast enough to avoid Toffler’s Obsoledge.
  • New and Big Data Sources—Connecting existing data to new value added sources of data.

Lots of things, however, get in the way of delivering on the promises of the Information Age. Our current data architecture is siloed, fragile, and built upon layer after layer of spaghetti code integrations. Think about what is involved just to cobble together data on a company’s supply chain. A morass of structured data systems have vendor and transaction records locked up in application databases and data warehouses all over the extended enterprise. So it is no surprise that enterprises struggle to put together current, relevant data to run their businesses upon. Functions like finance depend largely upon manual extracts being massaged and integrated in spreadsheets because of concern over the quality of data being provided by financial systems. Some information age!

How do we connect to new sources of data?

At the same time, many are trying today to extend the information architecture to add social media data, mobile location data, and even machine data. Much of this data is not put together in the same way as data in an application database or data warehouse. However, being able to relate this data to existing data sources can yield significant benefits. Think about the potential benefit of being able to relate social interactions and mobile location data to sales data or to relate machine data to compliance data.

A big problem is many of these new data types potentially have even more data quality gaps than historical structured data systems. Often the signal to noise for this data can be very low for this reason. But this data can be invaluable to business decision making. For this reason, this data needs to be cleaned up and related to older data sources. Finally, it needs to be provided to business users in whatever manner they want to consume it. 

How then do we fix the Information Age?


Enabling the kind of Information Age that Toffler imagined requires two things: enterprises must fix their data management and enable the information intelligence needed to drive real business competitive advantage. Fixing data management involves delivering good data that business users can safely make decisions from. It also involves ensuring that data, once created, is protected. CFOs that we have talked to say Target was a watershed event for them—something that they expect will receive more and more auditing attention.

We need at the same time to build the connection between old data sources and new data sources. And connecting data must not take as long as it has in the past. Delivery needs to happen faster so business problems can be recognized and solved more quickly.  Users need to get access to data when and where they need it.

With data management fixed, data intelligence needs to provide business users the ability to make sense out of things they find in the data. Business users need as well to be able to search and find data. They also need self-service so they can combine existing and new unstructured data sources to test data interrelationship hypotheses. This means the ability to assemble data and put it together from different sources at different times. Simply put, this is about data orchestration without any preconceived process. And lastly, business users need the intelligence to automatically sense and respond to changes as new data is collected.

Tiny insights can yield huge outputs


Obviously, there is a cost to solving our information age issues, but it is important to remember what Toffler says. “Tiny insights can yield huge outputs”. In other words, the payoff is huge for shaking off the shackles of our early information age business architecture. And those that do this will increasingly have the “right to win” against their competitors as they use information to wring every last drop of value from their business processes.

Related links
Solution Brief: The Intelligent Data Platform


3 Ways to Sell Data Integration Internally


So, you need to grab some budget for a data integration project, but no one understands what data integration is or what business problems it solves, and it’s difficult to explain without a whiteboard and a lot of time.  I’ve been there.

I’ve “sold” data integration as a concept for the last 20 years.  Let me tell you, it’s challenging to define the benefits to those who don’t work with this technology every day.  That said, most of the complaints I hear about enterprise IT are around the lack of data integration, and thus the inefficiencies that go along with that lack, such as re-keying data, data quality issues, lack of automation across systems, and so forth.

Considering that most of you will sell data integration to your peers and leadership, I’ve come up with 3 proven ways to sell data integration internally.

First, focus on the business problems.  Use real world examples from your own business.  It’s not tough to find any number of cases where the data was just not there to make core operational decisions that could have avoided some huge mistakes that proved costly to the company.  Or, more likely, there are things like ineffective inventory management with no way to understand when orders need to be placed.  Or, there’s the go-to standard: no single definition of what a “customer” or a “sale” is amongst the systems that support the business.  That one is like back pain; everyone has it at some point.

Second, define the business case in practical terms with examples.  Once you define the business problems that exist due to lack of a sound data integration strategy and technologies, it’s time to put money behind those numbers.  Those in IT have a tendency to either way overstate, or way understate the amount of money that’s being wasted and thus could be saved by using data integration approaches and technology.  So, provide practical numbers that you can back-up with existing data.

Finally, focus on a phased approach to implementing your data integration solution.  The “Big Bang Theory” is a great way to explain the beginning of the universe, but it’s not the way you want to define the rollout of your data integration technology.  Define a workable plan that moves from one small grouping of systems and databases to another, over time, and with a reasonable amount of resources and technology.  You do this to remove risk from the effort, as well as to manage costs and ensure that you can dial lessons learned back into the effort.  I would rather roll out data integration within an enterprise using small teams and more problem domains than attempt to do everything within a few years.

The reality is that data integration is no longer optional for enterprises these days.  It’s required for so many reasons, from data sharing, information visibility, compliance, security, automation…the list goes on and on.  IT needs to take point on this effort.  Selling data integration internally is the first and most important step.  Go get ‘em.


Scary Times For Data Security


These are scary times we live in when it comes to data security. And the times are even scarier for today’s retailers, government agencies, financial institutions, and healthcare organizations. The internet has become a battlefield. Criminals are looking to steal trade secrets and personal data for financial gain. Terrorists seek to steal data for political gain. Both are after your Personally Identifiable Information, like your name, account numbers, social security number, date of birth, IDs and passwords.

How are they accomplishing this?  A new generation of hackers has learned to reverse engineer popular software programs (e.g. Windows, Outlook, Java, etc.) in order to find so-called “holes”. Once those holes are exploited, the hackers develop “bugs” that infiltrate computer systems, search for sensitive data and return it to the bad guys. These bugs are then sold on the black market to the highest bidder. When successful, these hackers can wreak havoc across the globe.

I recently read a Time Magazine article titled “World War Zero: How Hackers Fight to Steal Your Secrets.” The article discussed a new generation of software companies made up of former hackers. These firms help other software companies by identifying potential security holes, before they can be used in malicious exploits.

This constant battle between good (data and software security firms) and bad (smart, young programmers looking to make a quick/big buck) is happening every day. Unfortunately, average consumers (you and I) are the innocent victims of this crazy and costly war. As a consumer in today’s digital and data-centric age, I worry when I see these headlines of ongoing data breaches, from the Targets of the world to my local bank down the street. I wonder not “if” but “when” I will become the next victim.  According to the Ponemon Institute, the average cost of a data breach to a company was $3.5 million in US dollars, 15 percent more than what it cost last year.

As a 20-year software industry veteran, I’ve worked with many firms across the global financial services industry. As a result, my concerns about data security exceed those of the average consumer. Here are the reasons for this:

  1. Everything is Digital: I remember the days when ATM machines were introduced, eliminating the need to wait in long teller lines. Nowadays, most of what we do with our financial institutions is digital and online, whether on our mobile devices or desktop browsers. As such, every interaction and transaction creates sensitive data that gets dispersed across tens, hundreds, sometimes thousands of databases and systems in these firms.
  2. The Big Data Phenomenon: I’m not talking about sexy next generation analytic applications that promise to provide the best answer to run your business. What I am talking about is the volume of data that is being generated and collected from the countless number of computer systems (on-premise and in the cloud) that run today’s global financial services industry.
  3. Increased use of Off-Shore and On-Shore Development: Firms outsource technology projects to off-shore development partners to offset their operational and technology costs, and with each new technology initiative, more sensitive data ends up being shared with those partners.

Now here is the hard part.  Given these trends and heightened threats, do the companies I do business with know where the data resides that they need to protect?  How do they actually protect sensitive data when using it to support new IT projects both in-house or by off-shore development partners?   You’d be amazed what the truth is. 

According to the recent Ponemon Institute study “State of Data Centric Security” that surveyed 1,587 Global IT and IT security practitioners in 16 countries:

  • Only 16 percent of the respondents believe they know where all sensitive structured data is located and a very small percentage (7 percent) know where unstructured data resides.
  • Fifty-seven percent of respondents say not knowing where the organization’s sensitive or confidential data is located keeps them up at night.
  • Only 19 percent say their organizations use centralized access control management and entitlements and 14 percent use file system and access audits.

Even worse, there is a gap between how seriously respondents view the threat of not knowing where sensitive and confidential information resides and how high a priority addressing it is in their organizations. Seventy-nine percent of respondents agree it is a significant security risk facing their organizations, but a much smaller percentage (51 percent) believes that securing and/or protecting data is a high priority in their organizations.

I don’t know about you, but this is alarming and worrisome to me.  I think I am ready to reach out to my banker and my local retailer, let them know about my concerns, and make sure they communicate those concerns to the top of their organizations. In today’s globally and socially connected world, news travels fast, and given how hard it is to build trustful customer relationships, one would think every business from the local mall to Wall St. should be asking if they are doing what they need to do to identify and protect their number one digital asset – their data.


The Information Difference Pegs Informatica as a Top MDM Vendor

This blog post feels a little bit like bragging… and OK, I guess it is pretty self-congratulatory to announce that this year, Informatica was again chosen as a leader in MDM and PIM by The Information Difference. As you may know, The Information Difference is an independent research firm that specializes in the MDM industry and each year surveys, analyzes and ranks MDM and PIM providers and customers around the world. This year, like last year, The Information Difference named Informatica tops in the space.

Why do I feel especially chuffed about this?  Because of our customers.



How Much is Poorly Managed Supplier Information Costing Your Business?

“Inaccurate, inconsistent and disconnected supplier information prohibits us from doing accurate supplier spend analysis, leveraging discounts, comparing and choosing the best prices, and enforcing corporate standards.”

This is a quotation from a manufacturing company executive. It illustrates the negative impact that poorly managed supplier information can have on a company’s ability to cut costs and achieve revenue targets.

Many supply chain and procurement teams at large companies struggle to see the total relationship they have with suppliers across product lines, business units and regions. Why? Supplier information is scattered across dozens or hundreds of Enterprise Resource Planning (ERP) and Accounts Payable (AP) applications. Too much valuable time is spent manually reconciling inaccurate, inconsistent and disconnected supplier information in an effort to see the big picture. All this manual effort results in back office administrative costs that are higher than they should be.

Do these quotations from supply chain leaders and their teams sound familiar?

  • “We have 500,000 suppliers. 15-20% of our supplier records are duplicates. 5% are inaccurate.”
  • “I get 100 e-mails a day questioning which supplier to use.”
  • “To consolidate vendor reporting for a single supplier between divisions is really just a guess.”
  • “Every year 1099 tax mailings get returned to us because of invalid addresses, and we pay a lot of Schedule B fines to the IRS.”
  • “Two years ago we spent a significant amount of time and money cleansing supplier data. Now we are back where we started.”

Join us for a Webinar to find out how to supercharge your supply chain applications with clean, consistent and connected supplier information

Please join me and Naveen Sharma, Director of the Master Data Management (MDM) Practice at Cognizant for a Webinar, Supercharge Your Supply Chain Applications with Better Supplier Information, on Tuesday, July 29th at 11 am PT.

During the Webinar, we’ll explain how better managing supplier information can help you achieve the following goals:

  1. Accelerate supplier onboarding
  2. Mitigate the risk of supply disruption
  3. Better manage supplier performance
  4. Streamline billing and payment processes
  5. Improve supplier relationship management and collaboration
  6. Make it easier to evaluate non-compliance with Service Level Agreements (SLAs)
  7. Decrease costs by negotiating favorable payment terms and SLAs

I hope you can join us for this upcoming Webinar!

 

 


How Much is Disconnected Well Data Costing Your Business?

“Not only do we underestimate the cost for projects up to 150%, but we overestimate the revenue it will generate.” This quotation from an Energy & Petroleum (E&P) company executive illustrates the negative impact of inaccurate, inconsistent and disconnected well data and asset data on revenue potential. 

“Operational Excellence” is a common goal of many E&P company executives pursuing higher growth targets. But, inaccurate, inconsistent and disconnected well data and asset data may be holding them back. It obscures the complete picture of the well information lifecycle, making it difficult to maximize production efficiency, reduce Non-Productive Time (NPT), streamline the oilfield supply chain, calculate well by-well profitability,  and mitigate risk.

Well data expert Stephanie Wilkin shares details about the award-winning collaboration between Noah Consulting and Devon Energy.

To explain how E&P companies can better manage well data and asset data, we hosted a webinar, “Attention E&P Executives: Streamlining the Well Information Lifecycle.” Our well data experts, Stephanie Wilkin, Senior Principal Consultant at Noah Consulting, and Stephan Zoder, Director of Value Engineering at Informatica, shared some advice. E&P companies should reevaluate “throwing more bodies at a data cleanup project twice a year.” This approach does not support the pursuit of operational excellence.

In this interview, Stephanie shares details about the award-winning collaboration between Noah Consulting and Devon Energy to create a single trusted source of well data, which is standardized and mastered.

Q. Congratulations on winning the 2014 Innovation Award, Stephanie!
A. Thanks Jakki. It was really exciting working with Devon Energy. Together we put the technology and processes in place to manage and master well data in a central location and share it with downstream systems on an ongoing basis. We were proud to win the 2014 Innovation Award for Best Enterprise Data Platform.

Q. What was the business need for mastering well data?
A. As E&P companies grow so do their needs for business-critical well data. All departments need clean, consistent and connected well data to fuel their applications. We implemented a master data management (MDM) solution for well data with the goals of improving information management, business productivity, organizational efficiency, and reporting.

Q. How long did it take to implement the MDM solution for well data?
A. The Devon Energy project kicked off in May of 2012. Within five months we built the complete solution from gathering business requirements to development and testing.

Q. What were the steps in implementing the MDM solution?
A: The first and most important step was securing buy-in on a common definition for master well data or Unique Well Identifier (UWI). The key was to create a definition that would meet the needs of various business functions. Then we built the well master, which would be consistent across various systems, such as G&G, Drilling, Production, Finance, etc. We used the Professional Petroleum Data Management Association (PPDM) data model and created more than 70 unique attributes for the well, including Lahee Class, Fluid Direction, Trajectory, Role and Business Interest.

As part of the original go-live, we had three source systems of well data and two target systems connected to the MDM solution. Over the course of the next year, we added three additional source systems and four additional target systems. We did a cross-system analysis to make sure every department has the right wells and the right data about those wells. Now the company uses MDM as the single trusted source of well data, which is standardized and mastered, to do analysis and build reports.

Q. What’s been the traditional approach for managing well data?
A. Typically when a new well is created, employees spend time entering well data into their own systems. For example, one person enters well data into the G&G application. Another person enters the same well data into the Drilling application. A third person enters the same well data into the Finance application. According to statistics, it takes about 30 minutes to enter wells into a particular financial application.

So imagine if you need to add 500 new wells to your systems. This is common after a merger or acquisition. That translates to roughly 250 hours or 6.25 weeks of employee time saved on the well create process! By automating across systems, you not only save time, you eliminate redundant data entry and possible errors in the process.
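
As a quick sanity check on that arithmetic (a minimal sketch; the 40-hour work week is my assumption):

```python
# Verifying the well-create time savings cited above.

wells_added = 500                 # wells added after a merger or acquisition (from the text)
minutes_per_well_entry = 30       # per-application entry time statistic cited above

hours_saved = wells_added * minutes_per_well_entry / 60
weeks_saved = hours_saved / 40    # assumption: 40-hour work week

print(f"{hours_saved:.0f} hours, or {weeks_saved:.2f} weeks, of manual entry avoided")
```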

Q. That sounds like a painfully slow and error-prone process.
A. It is! But that’s only half the problem. Without a single trusted source of well data, how do you get a complete picture of your wells? When you compare the well data in the G&G system to the well data in the Drilling or Finance systems, it’s typically inconsistent and difficult to reconcile. This leads to the question, “Which one of these systems has the best version of the truth?” Employees spend too much time manually reconciling well data for reporting and decision-making.

Q. So there is a lot to be gained by better managing well data.
A. That’s right. The CFO typically loves the ROI on a master well data project. It’s a huge opportunity to save time and money, boost productivity and get more accurate reporting.

Q: What were some of the business requirements for the MDM solution?
A: We couldn’t build a solution that was narrowly focused on meeting the company’s needs today. We had to keep the future in mind. Our goal was to build a framework that was scalable and supportable as the company’s business environment changed. This allows the company to add additional data domains or attributes to the well data model at any time.

The Noah Consulting MDM Trust Framework was used to build a single trusted source of well data.

Q: Why did you choose Informatica MDM?
A: The decision to use Informatica MDM for the MDM Trust Framework came down to the following capabilities:

  • Match and Merge: With Informatica, we get a lot of flexibility. Some systems carry the API or well government ID, but some don’t. We can match and merge records differently based on the system.
  • X-References: We keep a cross-reference between all the systems. We can go back to the master well data and find out where that data came from and when. We can see where changes have occurred because Informatica MDM tracks the history and lineage.
  • Scalability: This was a key requirement. While we went live after only 5 months, we’ve been continually building out the well master based on the requirements of the target systems.
  • Flexibility: Down the road, if we want to add an additional facet or classification to the well master, the framework allows for that.
  • Simple Integration: Instead of building point-to-point integrations, we use the hub model.

In addition to Informatica MDM, our Noah Consulting MDM Trust Framework includes Informatica PowerCenter for data integration, Informatica Data Quality for data cleansing and Informatica Data Virtualization.

Q: Can you give some examples of the business value gained by mastering well data?
A: One person said to me, “I’m so overwhelmed! We’ve never had one place to look at this well data before.” With MDM centrally managing master well data and fueling key business applications, many upstream processes can be optimized to achieve their full potential value.

People spend less time entering well data on the front end and reconciling well data on the back end. Well data is entered once and it’s automatically shared across all systems that need it. People can trust that it’s consistent across systems. Also, because the data across systems is now tied together, it provides business value they were unable to realize before, such as predictive analytics. 

Q. What’s next?
A. There’s a lot of insight that can be gained by understanding the relationships between the well, and the people, equipment and facilities associated with it. Next, we’re planning to add the operational hierarchy. For example, we’ll be able to identify which production engineer, reservoir engineer and foreman are working on a particular well.

We’ve also started gathering business requirements for equipment and facilities to be tied to each well. There’s a lot more business value on the horizon as the company streamlines their well information lifecycle and the valuable relationships around the well.

If you missed the webinar, you can watch the replay now: Attention E&P Executives: Streamlining the Well Information Lifecycle.


The Ones Not Screwing Up with Compliance Will Win

A few weeks ago, a regional US bank asked me to perform some compliance and use case analysis around fixing their data management situation.  This bank prides itself on customer service and SMB focus, while using large-bank product offerings.  However, they were about a decade behind the rest of most banks in modernizing their IT infrastructure to stay operationally on top of things.

Bank Efficiency Ratio per AUM (Assets under Management), bankregdata.com

This included technologies like ESB, BPM, CRM, etc.  They also were a sub-optimal user of EDW and analytics capabilities. Having said all this, there was a commitment to change things up, which is always a needed first step in any recovery program.

THE STAKEHOLDERS

As I conducted my interviews across various departments (list below) it became very apparent that they were not suffering from data poverty (see prior post) but from lack of accessibility and use of data.

  • Compliance
  • Vendor Management & Risk
  • Commercial and Consumer Depository products
  • Credit Risk
  • HR & Compensation
  • Retail
  • Private Banking
  • Finance
  • Customer Solutions

FRESH BREEZE

This lack of use occurred across the board.  The natural reaction was to throw more bodies and more Band-Aid marts at the problem.  Users also started to operate under the assumption that it will never get better.  They just resigned themselves to mediocrity.  When some new players came into the organization from various systemically critical banks, they shook things up.

Here is a list of use cases they want to tackle:

  • The proposition of real-time offers based on customer events, as simple as offering investment banking products when there is an unusually high inflow of cash into a deposit account.
  • The use of all mortgage application information to understand debt/equity ratio to make relevant offers.
  • The capture of true product and customer profitability across all lines of commercial and consumer products including trust, treasury management, deposits, private banking, loans, etc.
  • The agile evaluation, creation, testing and deployment of new terms on existing products and products under development, by shortening the product development life cycle.
  • The reduction of wealth management advisors’ time to research clients and prospects.
  • The reduction of unclaimed use tax, insurance premiums and leases being paid on consumables, real estate and requisitions due to the incorrect status and location of the equipment.  This originated from assets no longer owned, scrapped or moved to a different department, etc.
  • The more efficient reconciliation between transactional systems and finance, which often uses multiple party IDs per contract change in accounts receivable, while the operating division uses one based on a contract and its addendums.  An example would be vendor payment consolidation, to create a true supplier-spend; and thus, taking advantage of volume discounts.
  • The proactive creation of central compliance footprint (AML, 314, Suspicious Activity, CTR, etc.) allowing for quicker turnaround and fewer audit instances from MRAs (matter requiring attention).

MONEY TO BE MADE – PEOPLE TO SEE

Adding these up came to about $31 to $49 million annually in cost savings, new revenue or increased productivity for this bank with $24 billion total assets.

So now that we know there is money to be made by fixing the data of this organization, how can we realistically roll this out in an organization with many competing IT needs?

The best way to go about this is to attach any kind of data management project to a larger, business-oriented project, like CRM or EDW.  Rather than wait for these to go live without good seed data, why not feed them with better data as a key work stream within their respective project plans?

To summarize my findings I want to quote three people I interviewed.  A lady who recently had to struggle through an OCC audit told me she believes that the banks which can remain compliant at the lowest cost will ultimately win the end game.  Here she meant particularly tier 2 and 3 size organizations.  A gentleman from commercial banking left this statement with me: “Knowing what I know now, I would not bank with us.”  The lady from earlier also said, “We engage in spreadsheet Kung Fu” to bring data together.

Given all this, what would you suggest?  Have you worked with an organization like this? Did you encounter any similar or different use cases in financial services institutions?
