Category Archives: Business/IT Collaboration

At Valspar Data Management is Key to Controlling Purchasing Costs

Steve Jenkins, Global IT Director at Valspar

Steve Jenkins is working to improve information management maturity at Valspar

“Raw materials costs are the company’s single largest expense category,” said Steve Jenkins, Global IT Director at Valspar, at MDM Day in London. “Data management technology can help us improve business process efficiency, manage sourcing risk and reduce RFQ cycle times.”

Valspar is a $4 billion global manufacturing company that produces a portfolio of leading paint and coating brands. At the end of 2013, the 200-year-old company celebrated record sales and earnings. They also completed two acquisitions. Valspar now has 10,000 employees operating in 25 countries.

As is the case for many global companies, growth creates complexity. “Valspar has multiple business units with varying purchasing practices. We source raw materials from thousands of vendors around the globe,” shared Steve.

“We want to achieve economies of scale in purchasing to control spending,” Steve said as he shared Valspar’s improvement objectives. “We want to build stronger relationships with our preferred vendors. Also, we want to develop internal process efficiencies to realize additional savings.”

Poorly managed vendor and raw materials data was impacting Valspar’s buying power

Data management at Valspar

“We realized our buying power was limited by the age and quality of available vendor and raw materials data.”

The Valspar team, which focuses sharply on productivity, had an “aha” moment. “We realized our buying power was limited by the age and quality of available vendor data and raw materials data,” revealed Steve.

The core vendor data and raw materials data that should have been the same across multiple systems wasn’t. Data was often missing or wrong. This made it difficult to calculate the total spend on raw materials. It was also hard to calculate the total cost of expedited freight of raw materials. So, employees used a manual, time-consuming and error-prone process to consolidate vendor data and raw materials data for reporting.

These data issues were getting in the way of achieving their improvement objectives. Valspar needed a data management solution.

Valspar needed a single trusted source of vendor and raw materials data

Informatica MDM supports vendor and raw materials data management at Valspar

The team chose Informatica MDM as their enterprise hub for vendors and raw materials

The team chose Informatica MDM, a master data management (MDM) technology, as their enterprise hub for vendors and raw materials. It will manage this data centrally on an ongoing basis. With Informatica MDM, Valspar will have a single trusted source of vendor and raw materials data.

Informatica PowerCenter will access data from multiple source systems. Informatica Data Quality will profile the data before it goes into the hub. Then, after Informatica MDM does its magic, PowerCenter will deliver clean, consistent, connected and enriched data to the target systems.
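
Conceptually, the flow is: extract from the sources, profile data quality, match and merge duplicates into golden records, and publish those records back out. The sketch below is a tool-agnostic illustration of that hub pattern in plain Python – it is not Informatica’s API, and the vendor fields and the simplistic matching rule are purely hypothetical.

```python
# Tool-agnostic sketch of the hub pattern described above: profile incoming vendor
# records, then match/merge them into golden records for delivery to target systems.
# The field names and the simplistic matching rule are hypothetical.
from collections import defaultdict

source_records = [
    {"source": "ERP-NA", "vendor_name": "acme chemicals inc", "duns": "123456789"},
    {"source": "ERP-EU", "vendor_name": "ACME Chemicals, Inc.", "duns": "123456789"},
    {"source": "Procurement", "vendor_name": "Acme Chem", "duns": None},
]

def profile(records):
    """Data quality profiling: completeness per attribute before records enter the hub."""
    filled = defaultdict(int)
    for record in records:
        for field, value in record.items():
            if value:
                filled[field] += 1
    return {field: count / len(records) for field, count in filled.items()}

def master(records):
    """Naive match/merge: group on DUNS number, keep the longest name as the survivor."""
    golden = {}
    for record in records:
        key = record["duns"] or record["vendor_name"].lower()
        survivor = golden.get(key, {"vendor_name": "", "duns": record["duns"]})
        if len(record["vendor_name"]) > len(survivor["vendor_name"]):
            survivor = {"vendor_name": record["vendor_name"].title(), "duns": record["duns"]}
        golden[key] = survivor
    return list(golden.values())

print(profile(source_records))  # completeness report from the profiling step
print(master(source_records))   # golden records to be delivered to target systems
```

In the actual Valspar architecture those responsibilities sit with Informatica Data Quality, Informatica MDM and PowerCenter respectively; the sketch is only meant to show how profiling, matching and delivery fit together.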

Better vendor and raw materials data management results in cost savings


Valspar will gain benefits by fueling applications with clean, consistent, connected and enriched data

Valspar expects to gain the following business benefits:

  • Streamline the RFQ process to accelerate raw materials cost savings
  • Reduce the total number of raw materials SKUs and vendors
  • Increase productivity of staff focused on pulling and maintaining data
  • Leverage consistent global data visibility to:
    • increase leverage during contract negotiations
    • improve acquisition due diligence reviews
    • facilitate process standardization and reporting

 

Valspar’s vision is to transform data and information into trusted organizational assets

“Mastering vendor and raw materials data is Phase 1 of our vision to transform data and information into trusted organizational assets,” shared Steve. In Phase 2 the Valspar team will master customer data so they have immediate access to the total purchases of key global customers. In Phase 3, Valspar’s team will turn their attention to product or finished goods data.

Steve ended his presentation with some advice. “First, include your business counterparts in the process as early as possible. They need to own and drive the business case as well as the approval process. Also, master only the vendor and raw materials attributes required to realize the business benefit.”

Total Supplier Information Management eBook


Want more? Download the Total Supplier Information Management eBook. It covers:

  • Why your fragmented supplier data is holding you back
  • The cost of supplier data chaos
  • The warning signs you need to be looking for
  • How you can achieve Total Supplier Information Management

 


CFO Move to Chief Profitability Officer

30% or more of each company’s businesses are unprofitable

According to Jonathan Byrnes at the MIT Sloan School, “the most important issue facing most managers …is making more money from their existing businesses without costly new initiatives”. In Byrnes’ cross-industry research, he found that 30% or more of each company’s businesses are unprofitable. Byrnes claims these business losses are offset by what are “islands of high profitability”. The root cause of this issue is asserted to be the inability of current financial and management control systems to surface profitability problems and opportunities. Why is this the case? Byrnes believes that management budgetary guidance by its very nature assumes the continuation of the status quo. For this reason, when management asks for a revenue increase, the response is to increase revenues across profitable and unprofitable businesses alike. Given this, “the areas of embedded unprofitability remain embedded and largely invisible”. To be completely fair, it should also be recognized that it takes significant labor to put together an accurate and complete picture of direct and indirect costs.

The CFO needs to become the point person on profitability issues


Byrnes believes, nevertheless, that CFOs need to become the corporate point person for surfacing profitability issues. They, in fact, should act as the leader of a new and important role, the chief profitability officer. This may seem like an odd suggestion since virtually every CFO if asked would view profitability as a core element of their job. But Byrnes believes that CFOs need to move beyond broad, departmental performance measures and build profitability management processes into their companies’ core management activities. This task requires the CFO to determine two things.

  1. Which product lines, customers, segments, and channels are unprofitable so investments can be reduced or even eliminated?
  2. Which product lines, customers, segments, and channels are the most profitable so management can determine whether to expand investments and supporting operations?

Why didn’t portfolio management solve this problem?

Now, as a strategy MBA, I find Byrnes’ suggestion leaves me wondering why the analysis proposed by strategy consultants like the Boston Consulting Group didn’t solve this problem a long time ago. After all, portfolio analysis has at its core the notion that relative market share and growth rate determine profitability and which businesses a firm should build share, hold share, harvest share, or divest share, i.e. reduce, eliminate, or expand investment. The truth is that getting at these figures, especially profitability, is a time-consuming effort.

KPMG finds 91% of CFOs are held back by financial and performance systems


As financial and business systems have become more complex, it has become harder and harder to holistically analyze customer and product profitability because the relevant data is spread over a myriad of systems, technologies, and locations. For this reason, 91% of CFO respondents in a recent KPMG survey said that they want to improve the quality of their financial and performance insight from the data they produce. An amazing 51% of these CFOs also admitted that the “collection, storage, and retrieval of financial and performance data at their company is primarily a manual and/or spreadsheet-based exercise”. Think about it: a majority of these CFO teams’ time is spent collecting financial data rather than actively managing corporate profitability.

How do we fix things?

What is needed is a solution that allows financial teams to proactively produce trustworthy financial data from each and every financial system and then reliably combine and aggregate the data coming from multiple financial systems. Having accomplished this, the solution needs to allow financial organizations to slice and dice net profitability for product lines and customers.
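
To make the slice-and-dice step concrete, here is a minimal sketch, assuming the profitability records have already been consolidated from the source systems into a single table; the systems, customers and figures are invented for illustration.

```python
import pandas as pd

# Illustrative, consolidated profitability records pulled from multiple financial systems
records = pd.DataFrame([
    {"system": "ERP-NA", "customer": "Acme", "product_line": "Coatings",
     "revenue": 120_000, "direct_cost": 70_000, "allocated_cost": 30_000},
    {"system": "ERP-EU", "customer": "Acme", "product_line": "Coatings",
     "revenue": 80_000, "direct_cost": 55_000, "allocated_cost": 20_000},
    {"system": "ERP-NA", "customer": "Globex", "product_line": "Resins",
     "revenue": 60_000, "direct_cost": 52_000, "allocated_cost": 15_000},
])

records["net_profit"] = records["revenue"] - records["direct_cost"] - records["allocated_cost"]

# Slice and dice: net profitability by customer and by product line
by_customer = records.groupby("customer")[["revenue", "net_profit"]].sum()
by_product = records.groupby("product_line")[["revenue", "net_profit"]].sum()

# Flag candidates for reduced or eliminated investment
unprofitable = by_customer[by_customer["net_profit"] < 0]
print(by_customer, by_product, unprofitable, sep="\n\n")
```

The point is simply that once the data is consolidated and trustworthy, the profitability questions above become a grouping and sorting exercise rather than a manual reconciliation project.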

This approach would not only allow financial organizations to cut their financial operational costs but more importantly drive better business profitability by surfacing profitability gaps. At the same time, it would enable financial organizations to assist business units in making more informed customer and product line investment decisions. If a product line or business is narrowly profitable and lacks a broader strategic context or ability to increase profitability by growing market share, it is a candidate for investment reduction or elimination.

Strategic CFOs need to start asking questions of their business counterparts, starting with the justification for their investment strategy. Key to doing this is consolidating reliable profitability data across customers, products, channel partners, and suppliers. This would eliminate the time spent searching for and manually reconciling data in different formats across multiple systems. It should deliver ready analysis across locations, applications, channels, and departments.

Some parting thoughts

Strategic CFOs tell us they are trying to seize the opportunity “to be a business person versus a bean-counting, historically oriented CPA”. I believe a key element of this is seizing the opportunity to become the firm’s chief profitability officer. To do this well, CFOs need dependable data that can be sliced and diced by business dimensions. Armed with this information, CFOs can determine the most and least profitable businesses, product lines, and customers. As well, they can come to the business table with the perspective to help guide their company’s success.

Related links
Solution Brief: The Intelligent Data Platform
Related Blogs
CFOs Discuss Their Technology Priorities
The CFO Viewpoint upon Data
How CFOs can change the conversation with their CIO?
New type of CFO represents a potent CIO ally
Competing on Analytics
The Business Case for Better Data Connectivity

Twitter: @MylesSuer


CSI: “Enter Location Here”

Last time I talked about how benchmark data can be used in IT and business use cases to illustrate the financial value of data management technologies.  This time, let’s look at additional use cases, and at how to philosophically interpret the findings.

ROI interpretation

We have all philosophies covered

So here are some additional areas of investigation for justifying a data quality based data management initiative:

  • Compliance or other audit data and report preparation and rebuttal (FTE cost, as above)
  • Excess insurance premiums on incorrect asset or party information
  • Excess tax payments due to incorrect asset configuration or location
  • Excess travel or idle time between jobs due to incorrect location information
  • Excess equipment downtime (not revenue generating) or MTTR due to incorrect asset profile or misaligned reference data not triggering timely repairs
  • Incorrect equipment location or ownership data splitting service costs or revenues the wrong way
  • Party relationship data not tied together creating duplicate contacts or less relevant offers and lower response rates
  • Lower than industry average cross-sell conversion ratio due to inability to match and link departmental customer records and underlying transactions and expose them to all POS channels
  • Lower than industry average customer retention rate due to lack of full client transactional profile across channels or product lines to improve service experience or apply discounts
  • Low annual supplier discounts due to incorrect or missing alternate product data or aggregated channel purchase data

I could go on forever, but allow me to touch on a sensitive topic – fines. Fines, or performance penalties by private or government entities, only make sense to bake into your analysis if they happen repeatedly at fairly predictable intervals and are “relatively” small per incident.  They should be treated like M&A activity. Nobody will buy into cost savings in the gazillions if a transaction only happens once every ten years. That’s like building a business case for a lottery win or a life insurance payout with a sample size of one family.  Sure, if it happens you just made the case, but will it happen…soon?

Use benchmarks and ranges wisely but don’t over-think the exercise either.  It will become paralysis by analysis.  If you want to make it super-scientific, hire an expensive consulting firm for a 3 month $250,000 to $500,000 engagement and have every staffer spend a few days with them away from their day job to make you feel 10% better about the numbers.  Was that worth half a million dollars just in 3rd party cost?  You be the judge.

In the end, you are trying to find out and position whether a technology will fix a $50,000, $5 million or $50 million problem.  You are also trying to gauge where the key areas of improvement are in terms of value and correlate the associated cost (higher value normally equals higher cost due to higher complexity) and risk.  After all, who wants to stand before a budget committee, prophesy massive savings in one area and then fail because it would have been smarter to start with a simpler, quicker win to build upon?

The secret sauce to avoiding this consulting expense and risk is natural curiosity, a willingness to do the legwork of finding industry benchmark data, knowing what goes into those benchmarks (process versus data improvement capabilities) to avoid inappropriate extrapolation, and using sensitivity analysis to hedge your bets.  Moreover, trust an (internal?) expert to indicate wider implications and trade-offs.  Most importantly, you have to be a communicator willing to talk to many folks on the business side and have criminal-interrogation qualities, not unlike in your run-of-the-mill crime show.  Some folks just don’t want to talk, often because they have ulterior motives (protecting their legacy investment or process) or are hiding skeletons in the closet (recent bad performance).  In this case, find more amenable people to quiz, or pry the information out of these tough nuts if you can.
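
As a minimal sketch of what that sensitivity analysis can look like: run the same savings formula across a benchmark range rather than a single point estimate and report low, expected and high outcomes. All of the inputs below (head count, loaded cost, time spent on data work, and the gain percentages) are illustrative assumptions you would replace with your own findings.

```python
# Sensitivity analysis sketch: run the same savings formula across a range of
# assumptions instead of betting the business case on a single point estimate.
# All inputs are illustrative assumptions, not benchmarks from this article.
fte_count = 12                 # analysts who consolidate and fix data today
loaded_cost_per_fte = 95_000   # fully loaded annual cost per analyst, in dollars
time_spent_on_data = 0.30      # share of their day spent pulling/fixing data

productivity_gain = {"low": 0.10, "expected": 0.20, "high": 0.26}

for scenario, gain in productivity_gain.items():
    freed_capacity = fte_count * loaded_cost_per_fte * time_spent_on_data * gain
    print(f"{scenario:>8}: ${freed_capacity:,.0f} of analyst capacity freed per year")
```

Presenting the low case alongside the expected one is exactly the hedge that keeps a budget committee from dismissing the whole exercise when one assumption turns out to be optimistic.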

CSI: "Enter Location Here"

CSI: “Enter Location Here”

Lastly, if you find ROI numbers that appear astronomical at first, remember that leverage is a key factor.  If a technical capability touches one application (credit risk scoring engine), one process (quotation), one type of transaction (talent management self-service), or a limited set of people (procurement), the ROI will be lower than for a technology touching multiple of each of the aforementioned.  If your business model drives thousands of high-value (thousands of dollars) transactions versus ten twenty-million-dollar ones or twenty million one-dollar ones, your ROI will be higher.  After all, consider this: retail e-mail marketing campaigns average an ROI of 578% (softwareprojects.com), and that is with really bad data.   Imagine what improved data can do just on that front.

I found massive differences between what improved asset data can deliver in a petrochemical or utility company versus product data in a fashion retailer or customer (loyalty) data in a hospitality chain.   The assertion of cum hoc ergo propter hoc (correlation implies causation) is a key assumption in how technology delivers financial value.  As long as the business folks agree, or can fence in the relationship, you are on the right path.

What’s your best and worst job to justify someone giving you money to invest?  Share that story.


The CFO Viewpoint upon Data

According to the Financial Executives Institute, CFOs say their second highest priority this year is to harness business intelligence and big data. Their highest priority is to improve cash flow and working capital efficiency and effectiveness. This means the CFOs’ two highest priorities are centered on data. At roughly the same time, KPMG found in their survey of CFOs that 91% want to improve the quality of the financial and performance insight obtained from the data that they produce. Even more amazing, 51% of CFOs admitted that “collecting, storing, and retrieving financial and performance data at their company is primarily accomplished through a manual and/or spreadsheet-based exercise”. From our interviews of CFOs, we believe this number is much higher.

Digitization is increasing, but automation is limited in the finance department

Your question at this point, if you are not a CFO, should be: how can this be the case? After all, strategy consultants like Booz and Company actively measure the degree of digitization and automation taking place in businesses by industry, and these numbers have shown a strong upward bias year after year. How can the finance organization be digitized for data collection but still largely manual in its processes for putting together the figures that management and the market need?
 
 
CFOs do not trust their data

In our interviews of CFOs, one CFO answered this question bluntly by saying “If the systems suck, then you cannot trust the numbers when you get them.” And this reality truly limits CFOs in how they respond to their top priorities. Things like P&L management, expense management, compliance, and regulatory reporting are all impacted by the CFO’s data problem. Instead of doing a better job on these issues, CFOs and their teams remain largely focused on “getting the numbers right”. Even worse, answering business questions like how much revenue a customer provides or how profitable that customer is involves manual pulls of data from more than one system today. And yes, similar data issues exist in financial services organizations that close the books nightly.

CFOs openly share their data problem

The CFOs that I have talked to admit without hesitation that data is a big issue for them. These CFOs say that they worry about data from the source and about the ability to do meaningful financial or managerial analysis. They say they need to rely on data in order to report, but just as important, they need it to help drive synergies across businesses. This matters because CFOs say they want to move from being just “bean counters” to being participants in the strategy of their enterprises.

To succeed, CFOs say that they need timely, accurate data. However, they are the first to discuss how disparate systems get in their way. CFOs believe that making their lives easier starts with the systems that support them. What they believe is needed is real integration and consolidation of data. One CFO put what is needed this way: “we need the integration of the right systems to provide the right information so we can manage and make decisions at the right time”. CFOs clearly want to know that the accounting systems are working and reliable. At the same time, CFOs want, for example, a holistic view of the customer. When asked why this isn’t a marketing activity, they say it is a business issue that CFOs need to help manage. “We want to understand the customer across business units.  It is a finance objective because finance is responsible for business metrics and there are gaps in business metrics around the customer. How many cross-sell opportunities is the business as a whole pursuing?”

Chief Profitability Officers?                                               

Jonathan Byrnes at the MIT Sloan School confirms this viewpoint is becoming a larger trend when he suggests that CFOs need to take on the function of “Chief Profitability Officers”. With this hat on, CFOs, in his view, need to determine which product lines, customers, segments, and channels are the most and the least profitable. Once again, this requires that CFOs tackle their data problem to have relevant, holistic information.

CIOs remain responsible for data delivery

CFOs believe that CIOs remain responsible for how data is delivered. CFOs say that they need to lead in creating validated data and reports. Clearly, if data delivery remains a manual process, then the CFO will be severely limited in their ability to adequately support their new and strategic charter. Yet, when asked if they see data as a competitive advantage, CFOs say that “every CFO would view data done well as a competitive advantage”. Some CFOs even suggest that data is the last competitive advantage. This fits really well with the view of Davenport in “Competing on Analytics”. The question is how soon CIOs and CFOs will work together to get the finance organization out of its mess of manually massaging and consolidating financial and business data.

Related links

Solution Brief: The Intelligent Data Platform

Related Blogs

How CFOs can change the conversation with their CIO?

New type of CFO represents a potent CIO ally

Competing on Analytics

The Business Case for Better Data Connectivity

Is Big Data Destined To Become Small And Vertical?

What is big data and why should your business care?

Twitter: @MylesSuer

 

 


Informatica’s Inclusion on the “R&D All-Stars: CNBC RQ 50” Was No Accident

Earlier this month, CNBC.com published its first-ever R&D All-Stars: CNBC RQ 50, ranking the top 50 public companies by return on research and development investment. Informatica came in the top ten and was the first pure software play, listed ahead of great software companies like Google, Amazon, and Salesforce. CNBC.com references a companion article by David Spiegel – Boring stocks that generate R&D heat – and profits. The article made an excellent point: a measure that links R&D spending to corporate revenue growth and market value is a better gauge of the productivity of that spending.

Unlike other R&D lists or rankings, the RQ50 was less concerned with pure dollars than with what companies actually did with those dollars. The RQ50 measures the increase in revenue relative to the increase in R&D expenditures. Its methodology was provided by Professor Anne Marie Knott of Washington University in St. Louis, who tracks and studies corporate R&D investment and has found that the companies that regularly turn R&D into income typically place innovation at the forefront of the corporate mission and have a structure and culture that support it.
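
As a rough illustration of the idea behind such a metric, the elasticity of revenue with respect to R&D spending can be estimated from a log-log fit of a firm’s history. The sketch below uses invented figures and a deliberately simple regression; it is meant to convey the “return on R&D” intuition, not to reproduce Professor Knott’s or CNBC’s actual RQ methodology.

```python
import numpy as np

# Hypothetical firm history: annual R&D spend and revenue, in millions of dollars.
rd_spend = np.array([45, 52, 61, 70, 82, 95, 110, 128], dtype=float)
revenue  = np.array([320, 360, 410, 470, 545, 630, 735, 860], dtype=float)

# Elasticity of revenue with respect to R&D: the slope of log(revenue) on log(R&D).
slope, intercept = np.polyfit(np.log(rd_spend), np.log(revenue), 1)

print(f"Estimated revenue elasticity of R&D spend: {slope:.2f}")
# A higher elasticity means each incremental R&D dollar is associated with
# proportionally more revenue growth - the "return on R&D" intuition behind the RQ50.
```

A firm with a higher estimated elasticity gets proportionally more revenue growth out of each incremental R&D dollar, which is the sense in which the RQ50 rewards what companies do with their R&D spend rather than how much they spend.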

Informatica is on the list because its revenue gains between 2006 and 2013 correlate directly with its increased R&D investment over the same period. While the list specifically cites the 2013 figures, the result is due to a systematic and long-term strategic initiative to place innovation at the core of our business plan.

Informatica has innovated broadly across its product spectrum. I can personally speak to one area where it has invested smartly and made significant gains – Informatica Cloud. Informatica decided to make its initial investment in the cloud in 2006 and was early in the market with regard to cloud integration. In fact, back in 2006, very few of today’s well-known SaaS companies were even publicly traded. The most popular SaaS app today, Salesforce.com, had revenues of just $309 million in FY2006 compared with over $4 billion in FY2014. Amazon EC2, one of the core services of Amazon Web Services (AWS), had only been announced that year. Apart from EC2, Amazon only had six other services in 2006. By 2014, that number had ballooned to over 30.

In his article about the RQ50, Spiegel talks about how the companies on the list aren’t just listening to what customers want or need now. They’re also challenging themselves to come up with the things the market can use two or ten years into the future. In 2006, Informatica took the same approach with its initial investment in cloud integration.

For us, it started with an observation and then a commitment to the belief that we were at an inflection point with the cloud, and on the cusp of what was going to become a true megatrend that represented a huge opportunity for the integration industry. Informatica assembled a small, agile group made up of strong leaders with varying skills and experience pulled from different areas—sales, engineering, and product management — throughout the company. It also meant throwing away the traditional measures of success and identifying new and more appropriate metrics to benchmark our progress. And finally, it included partnering with like-minded companies like Salesforce and NetSuite initially, and later on with Amazon, and taking our core strength – on-premise data integration technology – and pivoting it into a new direction.

The result was the first iteration of the Informatica Cloud. It leveraged the fruit of our R&D investment – the Vibe Virtual Data Machine – to provide SaaS administrators and line of business IT with the ability to perform lightweight cloud integrations between their on-premise and cloud applications without the involvement of an integration developer. Subsequent work and innovation have continued along the same path, adding tools like drag-and-drop design interfaces and mapping wizards, with the end goal of giving line-of-business (LOB) IT, cloud application administrators and citizen integrators a single platform to perform all the integration patterns they require, on their timeline. Informatica Cloud has consistently delivered 2-3 releases every year, and is now already on Release 20. From originally starting out with Data Replication for Salesforce, the Cloud team added bigger and better functionality such as developing connectivity for over 100 applications and data protocols, opening up our integration services through REST APIs, going beyond integration by incorporating cloud master data management and cloud test data management capabilities, and most recently announcing optimized batch and real-time cloud integration under a single unified platform.

And it goes on to this day, with investments in new innovations and directions, like Informatica Project Springbok. With Project Springbok, we’re duplicating what we did with Informatica Cloud, but this time for citizen integrators. We’re using our vast experience working with customers and building cutting-edge technology IP over the last 20 years to enable citizen integrators to harmonize data faster for better insights (and hopefully, fewer late nights writing spreadsheet formulas). What we do after Project Springbok is anyone’s guess, but wherever that leads, it will be sure to put us on lists like the RQ 50 for some time to come.


What’s In A Name?

Sometimes, the choice of a name has unexpected consequences. Often these consequences aren’t fair. But they exist, nonetheless. For an example of this, consider the well-known National Bureau of Economic Research study that compared the hiring prospects of candidates with identical resumes but different names. During the study, titled “A Field Experiment on Labor Market Discrimination,” employers were found to be more likely to reply to candidates with popular, traditionally Caucasian names than to candidates with either unique, eclectic names or traditionally African-American names. Though these biases are clearly unfair to the candidates, they do illustrate a key point: One’s choice when naming something can come with perceptions that influence outcomes.

For an example from the IT world, consider my recent engagement at a regional retail bank. In this engagement, half of the meeting time was consumed by IT and business leaders debating how to label their Master Data Management (MDM) Initiative.  Consider these excerpts:

  • Should we even call it MDM? Answer: No. Why? Because nobody on the business side will understand what that means. Also, as we just implemented a Data Warehouse/Mart last year and we are in the middle of our new CRM roll-out, everybody in business and retail banking will assume their data is already mastered in both of these.  On a side note, telcos understand MDM as Mobile Device Management.
  • Should we call it “Enterprise Data Master”? Answer: No. Why? Because unless you roll out all data domains and all functionality (standardization, matching, governance, hierarchy management, etc.) to the whole enterprise, you cannot.  And doing so is a bad idea, as it is with any IT project.  Boiling the ocean and going live with a big bang is high cost and high risk, and given shifting organizational strategies and leadership, quick successes are needed to sustain the momentum.
  • Should we call it “Data Warehouse – Release 2”? Answer: No. Why? Because it is neither a data warehouse, nor a version 2 of one.  It is a backbone component required to manage a key organizational ingredient – data – in a way that makes it useful to many use cases, processes, applications and people, not just analytics, although analytics is often the starting block.  Data warehouses were neither conceived nor designed to facilitate data quality (they assume it is already there), nor are they designed for real-time interactions.  Did anybody ask if ETL is “Pneumatic Tubes – Version 2”?
  • Should we call it “CRM Plus”? Answer: No. Why? Because it was never intended or designed to handle the transactional volume and attribute breadth of high-volume use cases, which are driven by complex business processes. Also, if it were a CRM system, it would have a more intricate UI capability beyond comparatively simple data governance workflows and UIs.

Consider this: any data quality solution, like MDM, makes any existing workflow or application better at what it does best: managing customer interactions, creating orders, generating correct invoices, etc.  To quote a colleague, “we are the BASF of software”.  Few people understand what a chemical looks like or does, but it makes a plastic container sturdy, transparent, flexible and light.

I also explained hierarchy management in a similar way. Consider it the LinkedIn network of your company, to which you can attach every interaction and transaction.  I see one view, people in my network see a different one, and LinkedIn probably has the most comprehensive view, but ultimately we are all looking at the same core data and structures.

So let’s call the “use” of your MDM “Mr. Clean”, aka Meister Proper, because it keeps everything clean.

While naming is definitely a critical point to consider given the expectations, fears and reservations that come with MDM and the underlying change management, it was hilarious to see how important it suddenly was.  However, it was puzzling to me (maybe a naïve perspective) why mostly recent IT hires had to categorize everything into new, unique functional boxes, while business and legacy IT people wanted to re-purpose existing boxes.  I guess recent IT hires used their approach to showcase that they were familiar with new technologies and techniques, which was likely a reason for their employment.  Business leaders, often with the exception of highly accomplished and well-regarded ones, as well as legacy IT leaders, needed to reassure everyone of continuity and that there was no threat of disruption or change.  Moreover, they also needed to justify the value proposition of their prior software investments.

Aside from company financial performance and regulatory screw-ups, legions of careers will be decided by whether, how, and how successfully this initiative plays out.

Naming a new car model for a 100,000-unit production run or a shampoo for worldwide sales could not face much more scrutiny.  Software vendors give their future releases internal names of cities like Atlanta or famous people like Socrates instead of descriptive terms like “Gamification User Interface Release” or “Unstructured Content Miner”. This may be a good avenue for banks and retailers to explore.  It would avoid the expectation pitfalls associated with names like “Customer Success Data Mart”, “Enterprise Data Factory”, “Data Aggregator” or “Central Property Repository”.  In reality, there will be many applications that can claim bits and pieces of the same data, data volume or functionality.  Who will make the call on which one gets renamed or replaced, and who will explain to the various consumers what happened to it and why?

You can surely name any customer-facing app something more descriptive like “Payment Central” or “Customer Success Point”, but the reason you can do this is that the user will only have one or maybe two points of interface with the organization. Internal data consumers will interact with many more repositories.  Similarly, I guess this is the reason why I call my kids by their first names while strangers label them by their full names, “Junior”, “Butter Fingers” or “The Fast Runner”.

I would love to hear some other good reasons why naming conventions should be more scrutinized.  Maybe you have some guidance on what should and should not be done and the reasons for it?


The King of Benchmarks Rules the Realm of Averages

A mid-sized insurer recently approached our team for help. They wanted to understand how they fell short in making their case to their executives. Specifically, they proposed that fixing their customer data was key to supporting the executive team’s highly aggressive 3-year growth plan (a plan for 3x today’s revenue).  Given this core organizational mission – aside from being a warm and fuzzy place to work supporting its local community – the slam dunk solution to help here is simple.  Just reducing the data migration effort around the next acquisition or avoiding the ritual annual, one-off data clean-up project already pays for any tool set enhancing data acquisition, integration and hygiene.  Will it get you to 3x today’s revenue?  It probably won’t.  What will help are the following:


Making the Math Work (courtesy of Scott Adams)

Hard cost avoidance via software maintenance or consulting elimination is the easy part of the exercise. That is why CFOs love it and focus so much on it.  It is easy to grasp and immediate (aka next quarter).

Soft cost reductions, like staff redundancies, are a bit harder.  Despite being viable, in my experience very few decision makers want to work on a business case to lay off staff.  My team has had one so far. They look at these savings as freed-up capacity, which can be re-deployed more productively.   Productivity is also a bit harder to quantify, as you typically have to understand how data travels and gets worked on between departments.

However, revenue effects are even harder and more esoteric to many people, as they include projections.  They are often considered “soft” benefits, although they outweigh the other areas by 2-3 times in terms of impact.  Ultimately, every organization runs its strategy based on projections (see the insurer in my first paragraph).

The hardest to quantify is risk. Not only is it based on projections – often from a third party (Moody’s, TransUnion, etc.) – but few people understand it. More often, clients don’t even accept you investigating this area if you don’t have an advanced degree in insurance math. Nevertheless, risk can generate extra “soft” cost avoidance (beefing up a reserve account balance creates opportunity cost) but also revenue (realizing a risk premium previously ignored).  Often risk profiles change due to relationships, which can be links to new “horizontal” information (transactional attributes) or “vertical” (hierarchical) information from the parent-child relationships of an entity and the parent’s or children’s transactions.

Given the above, my initial advice to the insurer would be to look at the heartache of their last acquisition, use a benchmark for IT productivity from improved data management capabilities (typically 20-26% – Yankee Group) and there you go.  This is just the IT side, so consider increasing the upper range by 1.4x (Harvard Business School), as every attribute change (e.g., last mobile view date) requires additional meetings at the manager, director and VP level.  These people’s time gets increasingly expensive.  You could also use Aberdeen’s benchmark of 13 hours per average master data attribute fix instead.

You can also look at productivity areas, which are typically overly measured.  Let’s assume a call center rep spends 20% of the average call time of 12 minutes (depending on the call type – account or bill inquiry, dispute, etc.) understanding

  • Who the customer is
  • What he bought online and in-store
  • If he tried to resolve his issue on the website or store
  • How he uses equipment
  • What he cares about
  • If he prefers call backs, SMS or email confirmations
  • His response rate to offers
  • His/her value to the company

If he spends that 20% of every call stringing together insights from five applications and twelve screens, instead of seeing the same information in one frame in seconds in every application he touches, you have just freed up 20% of his hourly compensation.

Then look at the software, hardware, maintenance and ongoing management of the likely customer record sources (pick the worst and best quality ones based on your current understanding), which will end up in a centrally governed instance.  Per DAMA, every duplicate record will cost you between $0.45 (party) and $0.85 (product) per transaction (edit touch).  At the very least each record will be touched once a year (likely 3-5 times), so multiply your duplicate record count by that and you have your savings from de-duplication alone.  You can also use Aberdeen’s benchmark of 71 serious errors per 1,000 records, meaning the chance of transactional failure is high, as is the required effort (a percentage of one or more FTEs’ daily workday) to fix it.  If this does not work for you, run a data profile with one of the many tools out there.
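
Here is a back-of-the-envelope sketch of that de-duplication math. The duplicate counts are invented placeholders you would replace with your own data profile results; the per-touch costs and touch frequency follow the DAMA figures just cited.

```python
# Back-of-the-envelope de-duplication savings, using the benchmarks cited above.
# The duplicate counts are invented; adjust them to your own data profile results.
duplicate_party_records = 180_000      # duplicate customer/vendor ("party") records, assumed
duplicate_product_records = 40_000     # duplicate product records, assumed

cost_per_touch_party = 0.45            # DAMA: cost per edit touch of a duplicate party record
cost_per_touch_product = 0.85          # DAMA: cost per edit touch of a duplicate product record
touches_per_year = 3                   # touched once at minimum, likely 3-5 times per year

annual_savings = (
    duplicate_party_records * cost_per_touch_party
    + duplicate_product_records * cost_per_touch_product
) * touches_per_year

print(f"Estimated annual de-duplication savings: ${annual_savings:,.0f}")
```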


If the sign says it – do it!

If standardization of records (zip codes, billing codes, currency, etc.) is the problem, ask your business partner how many customer contacts (calls, mailings, emails, orders, invoices or account statements) fail outright and/or require validation because of these attributes.  Once again, if you apply the productivity gains mentioned earlier, there are your savings.  If you look at the number of orders whose payment or revenue recognition gets delayed by a week or a month, and the average order amount, you have just quantified how much profit (multiply by operating margin) you would be able to pull into the current financial year from the next one.

The same is true for speeding up the introduction of a new product, or a change to one, so that it generates profits earlier.  Note that the time value of funds realized earlier is too small to matter in most instances, especially in the current interest rate environment.

If emails bounce back or snail mail gets returned (no such address, no such name at this address, no such domain, no such user at this domain), (e)mail verification tools can help reduce the bounces. If every mail piece (forget email due to its minuscule cost) costs $1.25 – and this will vary by type of mailing (catalog, promotional postcard, statement letter) – incorrect or incomplete records are wasted cost.  If you can, use the fully loaded print cost, including 3rd-party data prep and returns handling.  You will never capture all cost inputs, but take a conservative stab.

If it was an offer, reduced bounces should also improve your response rate (also true for email now). Prospect mail response rates are typically around 1.2% (Direct Marketing Association), whereas phone response rates are around 8.2%.  If you know that your current response rate is half that (for argument’s sake) and you send out 100,000 emails of which 1.3% (Silverpop) have customer data issues, then fixing 81-93% of them (our experience) will drop the bounce rate to under 0.3%, meaning more emails will arrive and be relevant. This, in turn, multiplied by a standard conversion rate of 3% (MarketingSherpa; industry and channel specific) and the average order value (your data), multiplied by operating margin, gets you a benefit value for revenue.
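
A rough sketch of that chain of multiplications is below. The send volume and rates follow the benchmarks cited above; the average order value and operating margin are placeholders you would pull from your own numbers, and the funnel is deliberately simplified (it counts only the emails recovered by fixing data issues).

```python
# Rough sketch of the campaign math above. Order value and margin are assumed;
# the rates follow the benchmarks cited in the text.
emails_sent = 100_000
bad_data_rate = 0.013          # Silverpop: share of emails with customer data issues
fix_rate = 0.87                # 81-93% of data issues fixed (midpoint assumed)
conversion_rate = 0.03         # MarketingSherpa standard conversion rate
average_order = 250            # assumed average order value, in dollars
operating_margin = 0.12        # assumed operating margin

# Additional emails that now arrive because their data issues were fixed
recovered_emails = emails_sent * bad_data_rate * fix_rate

# Incremental revenue and margin from those recovered emails
incremental_revenue = recovered_emails * conversion_rate * average_order
incremental_margin = incremental_revenue * operating_margin

print(f"Recovered emails: {recovered_emails:,.0f}")
print(f"Incremental revenue: ${incremental_revenue:,.0f}")
print(f"Incremental margin benefit: ${incremental_margin:,.0f}")
```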

If product data and inventory carrying cost or supplier spend are your issue, find out how many supplier shipments you receive every month and the average cost of a part (or a cost range), then apply the Aberdeen master data failure rate (71 in 1,000) to use cases around missing or incorrect supersession or alternate part data to assess the value of a single shipment’s overspend.  You can also just use the ending inventory amount from the 10-K report and apply a 3-10% improvement (Aberdeen) in a top-down approach. Alternatively, apply 3.2-4.9% to your annual supplier spend (KPMG).
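
For the top-down variant, the arithmetic is just a range applied to two headline figures. In the sketch below, the ending inventory and annual supplier spend are placeholder values standing in for what you would read off the 10-K and procurement reports; the improvement ranges are the Aberdeen and KPMG figures cited above.

```python
# Top-down estimate using the ranges cited above. Inventory and spend figures are
# placeholders; pull the real numbers from your own 10-K and procurement reports.
ending_inventory = 450_000_000         # assumed ending inventory from the 10-K, in dollars
annual_supplier_spend = 1_200_000_000  # assumed annual supplier spend, in dollars

inventory_improvement = (0.03, 0.10)   # Aberdeen: 3-10% inventory improvement
spend_improvement = (0.032, 0.049)     # KPMG: 3.2-4.9% of annual supplier spend

inventory_savings = tuple(ending_inventory * rate for rate in inventory_improvement)
spend_savings = tuple(annual_supplier_spend * rate for rate in spend_improvement)

print(f"Inventory improvement range: ${inventory_savings[0]:,.0f} - ${inventory_savings[1]:,.0f}")
print(f"Supplier spend savings range: ${spend_savings[0]:,.0f} - ${spend_savings[1]:,.0f}")
```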

You could also investigate the expediting or return cost of shipments in a period due to incorrectly aggregated customer forecasts, wrong or incomplete product information or wrong shipment instructions in a product or location profile. Apply Aberdeen’s 5% improvement rate and there you go.

Consider that a North American utility told us that just fixing the product information of their 200 Tier 1 suppliers increased their discounts from $14 million to $120 million. They also found that fixing one basic attribute out of sixty in one part category saves them over $200,000 annually.

So what ROI percentages would you find tolerable or justifiable for, say, an EDW project, a CRM project, a new claims system, etc.? What annual savings or new revenue would you be comfortable with?  What was the craziest improvement you have seen come to fruition that nobody expected?

Next time, I will add some more “use cases” to the list and look at some philosophical implications of averages.


How Is The CIO Role Starting To Change?

When you talk to CIOs today, you strongly get the feeling that the CIO role is about to change. One CIO said to me that the CIO is in the midst of “a sea state change”. Recently, I got to talk with half a dozen CIOs about what is most important to their role and how they see the role as a whole changing over the next few years. Their answers were thought provoking and worthy of broader discussion.

CIOs need to be skilled at business alignment

CIOs say it is becoming less and less common for the CIO to come up through the technical ranks. One CIO said it used to be common for one to become a CIO after being a CTO, but this has changed. More and more business people are becoming CIOs. The CIO role today is “more about understanding the business than understanding technology. It is more about business alignment than technology alignment”. This need for better business alignment led one CIO to say that consulting is a great starting point for a future IT leader. Consulting provides a future IT leader with the following: 1) vertical expertise; 2) technical expertise; and 3) systems integration expertise. Another CIO suggested that the CIO role is sometimes being used these days as a rotational position for a future business leader. “It provides these leaders with technical skills that they will need in their career.” Regardless, it is increasingly clear that business expertise is much more important than technical expertise.

How will the CIO role change?

CIOs, in general, believe that their role will change in the next five years. One CIO insisted that CIOs are going to continue to be incredibly important to their enterprises. However, he said that CIOs have the opportunity to create analytics that guide the business in finding value. For CIOs to do this, they need to connect the dots between transactional systems, BI, and the planning systems. They need to convert data into action. This means they need to enable the business to be proactive and cut the time it takes for the business to execute. CIOs need, in his view, to enable their enterprises to generate value that is differentiated from competitors.

Another CIO sees CIOs becoming the orchestrators rather than the builders of business services. This CIO said that “building stuff is now really table stakes”. Cloud and loosely oriented partnerships are bringing vendor management to the forefront. Agreeing with this point of view, a third CIO says that she sees CIOs moving from an IT role into a business role. She went on to say that “CIOs need to understand the business better and be able to partner better with the business. They need to understand the role for IT better and this includes understanding their firm’s business models better”.

A final CIO suggests something even more radical.  He believes that the CIO role will disappear altogether or morph into something new. This CIO claims CIOs have the opportunity to become the chief digital officer or the COO. After all, the CIO is about implementing business processes.

For more technical CIOs, this CIO sees them reverting to CTOs, but he worries that, with the increasing importance of cloud and the corresponding decline of hardware and platform issues, this type of role is going to become less and less relevant. This same CIO says, in passing, that CIOs screwed up a golden opportunity 10 years ago. At that time, CIOs one by one clawed their way to the table and separated themselves from the CFO. However, once they were at the table, they did not change their game. They continued to talk bits and bytes versus business issues. And one by one, they are being returned to the CFO to manage.

Parting Thoughts

So change is inevitable. CIOs need to change their game or be changed by external forces. So let’s start the debate right now. How do you see the CIO role changing? Express your opinion. Let’s see where you and the above CIOs agree and, more importantly, where you differ.


Don’t Fire the CIO, Transform the Business


Transform the Business

The headline on the Venture Beat website this weekend was Why you should fire your CIO. The point of the article was that the rest of the executive suite in most organizations is ignorant about IT issues and has abdicated responsibility to the CIO, or they build their own information solutions without sufficient competence in information management. The article suggests that firing the CIO is one way to pass accountability for information management to the business leaders since there will be no place for them to hide. They simply won’t be able to deflect the decisions to the CIO. (more…)
