Tag Archives: ROI
Last time I talked about how benchmark data can be used in IT and business use cases to illustrate the financial value of data management technologies. This time, let’s look at additional use cases, and at how to philosophically interpret the findings.
So here are some additional areas of investigation for justifying a data quality-based data management initiative:
- Data and report preparation and rebuttal for compliance or other audits (FTE cost as above)
- Excess insurance premiums on incorrect asset or party information
- Excess tax payments due to incorrect asset configuration or location
- Excess travel or idle time between jobs due to incorrect location information
- Excess equipment downtime (not revenue generating) or MTTR due to incorrect asset profile or misaligned reference data not triggering timely repairs
- Incorrect equipment location or ownership data splitting service costs or revenues among the wrong parties
- Party relationship data not tied together creating duplicate contacts or less relevant offers and lower response rates
- Lower than industry average cross-sell conversion ratio due to inability to match and link departmental customer records and underlying transactions and expose them to all POS channels
- Lower than industry average customer retention rate due to lack of full client transactional profile across channels or product lines to improve service experience or apply discounts
- Low annual supplier discounts due to incorrect or missing alternate product data or aggregated channel purchase data
I could go on forever, but allow me to touch on a sensitive topic – fines. Fines, or performance penalties by private or government entities, only make sense to bake into your analysis if they happen repeatedly in fairly predictable intervals and are “relatively” small per incidence. They should be treated like M&A activity. Nobody will buy into cost savings in the gazillions if a transaction only happens once every ten years. That’s like building a business case for a lottery win or a life insurance payout with a sample size of one family. Sure, if it happens you just made the case – but will it happen…soon?
Use benchmarks and ranges wisely, but don’t over-think the exercise either; that way lies paralysis by analysis. If you want to make it super-scientific, hire an expensive consulting firm for a three-month, $250,000 to $500,000 engagement and have every staffer spend a few days with them, away from their day job, to make you feel 10% better about the numbers. Was that worth half a million dollars just in 3rd party cost? You be the judge.
In the end, you are trying to find out and position whether a technology will fix a $50,000, $5 million or $50 million problem. You are also trying to gauge where the key areas of improvement are in terms of value, and to correlate the associated cost (higher value normally equals higher cost due to higher complexity) and risk. After all, who wants to stand before a budget committee, prophesy massive savings in one area and then fail because it would have been smarter to start with a simpler, quicker win to build upon?
The secret sauce to avoiding this consulting expense and risk is natural curiosity, a willingness to do the legwork of finding industry benchmark data, knowing what goes into those benchmarks (process versus data improvement capabilities) to avoid inappropriate extrapolation, and using sensitivity analysis to hedge your bets. Moreover, trust an (internal?) expert to point out wider implications and trade-offs. Most importantly, you have to be a communicator willing to talk to many folks on the business side, with the interrogation skills of a detective in your run-of-the-mill crime show. Some folks just don’t want to talk, often because they have ulterior motives (protecting a legacy investment or process) or are hiding skeletons in the closet (recent bad performance). In that case, find more amenable people to quiz, or pry the information out of these tough nuts if you can.
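The sensitivity analysis part does not need a consulting firm; a back-of-the-envelope sketch will do. Here is a minimal Python example of hedging a benefit estimate across scenarios. All the inputs (records fixed, value per record, adoption rate) are hypothetical placeholders, not benchmarks from this post:

```python
# Illustrative sensitivity analysis for a data quality business case.
# Every input figure below is a hypothetical placeholder you would
# replace with your own numbers and industry benchmarks.

def annual_benefit(records_fixed, value_per_record, adoption_rate):
    """Benefit = records fixed x value per record x realistic adoption."""
    return records_fixed * value_per_record * adoption_rate

scenarios = {
    "conservative": dict(records_fixed=50_000, value_per_record=0.45, adoption_rate=0.5),
    "expected":     dict(records_fixed=75_000, value_per_record=0.65, adoption_rate=0.7),
    "optimistic":   dict(records_fixed=100_000, value_per_record=0.85, adoption_rate=0.9),
}

for name, params in scenarios.items():
    print(f"{name}: ${annual_benefit(**params):,.0f}")
```

Presenting a conservative-to-optimistic range like this is usually more credible in front of a budget committee than a single point estimate.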
Lastly, if you find ROI numbers that appear astronomical at first, remember that leverage is a key factor. If a technical capability touches one application (credit risk scoring engine), one process (quotation), one type of transaction (talent management self-service), or a limited set of people (procurement), the ROI will be lower than for a technology touching several of each. If your business model drives thousands of high-value (thousands of dollars) transactions, versus ten twenty-million-dollar ones or twenty million one-dollar ones, your ROI will be higher. After all, consider this: retail e-mail marketing campaigns average an ROI of 578% (softwareprojects.com), and that with really bad data. Imagine what improved data can do on that front alone.
I found massive differences between what improved asset data can deliver in a petrochemical or utility company versus product data in a fashion retailer or customer (loyalty) data in a hospitality chain. The assertion of cum hoc ergo propter hoc (correlation implies causation) is a key assumption in how technology delivers financial value. As long as the business folks agree, or can fence in the relationship, you are on the right path.
What’s your best and worst job to justify someone giving you money to invest? Share that story.
A mid-sized insurer recently approached our team for help. They wanted to understand where they fell short in making their case to their executives. Specifically, they had proposed that fixing their customer data was key to supporting the executive team’s highly aggressive 3-year growth plan (3x today’s revenue). Given this core organizational mission – aside from being a warm and fuzzy place to work supporting its local community – the slam-dunk solution here is simple. Just reducing the data migration effort around the next acquisition, or avoiding the ritual annual one-off data clean-up project, already pays for any tool set enhancing data acquisition, integration and hygiene. Will it get you to 3x today’s revenue? Probably not. What will help are the following:
Hard cost avoidance via software maintenance or consulting elimination is the easy part of the exercise. That is why CFOs love it and focus so much on it. It is easy to grasp and immediate (aka next quarter).
Soft cost reductions, like staff redundancies, are a bit harder. Despite their being viable, in my experience very few decision makers want to work on a business case to lay off staff. My team has had one so far. Instead, decision makers look at these savings as freed-up capacity, which can be re-deployed more productively. Productivity is also a bit harder to quantify, as you typically have to understand how data travels and gets worked on between departments.
Revenue effects, however, are even harder to quantify and more esoteric to many people, as they include projections. They are often considered “soft” benefits, although they outweigh the other areas by 2-3 times in terms of impact. Ultimately, every organization runs its strategy based on projections (see the insurer in my first paragraph).
The hardest area to quantify is risk. Not only is it based on projections – often from a third party (Moody’s, TransUnion, etc.) – but few people understand it. More often than not, clients won’t even accept you investigating this area unless you have an advanced degree in insurance math. Nevertheless, risk can generate extra “soft” cost avoidance (beefing up a reserve account balance creates opportunity cost) but also revenue (realizing a risk premium previously ignored). Often risk profiles change due to relationships, which can be links to new “horizontal” information (transactional attributes) or vertical (hierarchical) information from an entity’s parent-child relationships and the parent’s or children’s transactions.
Given the above, my initial advice to the insurer would be to look at the heartache of their last acquisition and apply a benchmark for IT productivity gains from improved data management capabilities (typically 20-26% – Yankee Group), and there you go. This is just the IT side, so consider increasing the upper range by 1.4x (Harvard Business School), as every attribute change (last mobile view date) requires additional meetings at the manager, director and VP level, and these people’s time gets increasingly expensive. You could also use Aberdeen’s benchmark of 13 hours per average master data attribute fix instead.
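To make that concrete, here is a sketch of the top-down math using the 20-26% Yankee Group range and the 1.4x business-side uplift cited above. The annual IT data-management labor cost is a hypothetical input you would source from your own budget:

```python
# Top-down savings estimate using the 20-26% IT productivity benchmark
# (Yankee Group) and the 1.4x business-side uplift (Harvard Business
# School) mentioned in the text. The labor cost figure is hypothetical.

it_data_mgmt_labor_cost = 2_000_000  # hypothetical annual figure

low, high = 0.20, 0.26
business_multiplier = 1.4  # uplift for manager/director/VP meeting time

it_savings_low = it_data_mgmt_labor_cost * low
it_savings_high = it_data_mgmt_labor_cost * high * business_multiplier

print(f"IT-only savings range: ${it_savings_low:,.0f} - ${it_data_mgmt_labor_cost * high:,.0f}")
print(f"Upper range with business-side uplift: ${it_savings_high:,.0f}")
```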
You can also look at productivity areas, which are typically overly measured. Let’s assume a call center rep spends 20% of the average call time of 12 minutes (depending on the call type – account or bill inquiry, dispute, etc.) understanding:
- Who the customer is
- What he bought online and in-store
- If he tried to resolve his issue on the website or store
- How he uses equipment
- What he cares about
- If he prefers call backs, SMS or email confirmations
- His response rate to offers
- His/her value to the company
If he spends that 20% of every call stringing together insights from five applications and twelve screens, instead of seeing the same information in one frame, in seconds, in every application he touches, you just freed up 20% of his hourly compensation.
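The freed-capacity arithmetic looks like this. It uses the 20% of a 12-minute call from the text; headcount, call volume and hourly compensation are hypothetical inputs you would pull from your own call center:

```python
# Freed call-center capacity from consolidating customer data into one
# frame. Only the 20% lookup share and 12-minute call come from the
# text; all other inputs are hypothetical.

reps = 100                  # hypothetical headcount
calls_per_rep_per_day = 40  # hypothetical call volume
avg_call_minutes = 12
lookup_share = 0.20         # share of each call spent stitching data together
hourly_comp = 25.0          # hypothetical fully loaded hourly compensation
work_days = 230

freed_hours = reps * calls_per_rep_per_day * work_days * avg_call_minutes * lookup_share / 60
annual_value = freed_hours * hourly_comp
print(f"Freed capacity: {freed_hours:,.0f} hours/year, worth ${annual_value:,.0f}")
```

As noted earlier, most organizations treat this as re-deployable capacity rather than a layoff case.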
Then look at the software, hardware, maintenance and ongoing management cost of the likely customer record sources (pick the worst- and best-quality ones based on your current understanding), which will end up in a centrally governed instance. Per DAMA, every duplicate record will cost you between $0.45 (party) and $0.85 (product) per transaction (edit touch). At the very least each record will be touched once a year (likely 3-5 times), so multiply your duplicate record count by that and you have your savings from de-duplication alone. You can also use Aberdeen’s benchmark of 71 serious errors per 1,000 records, meaning the chance of transactional failure, and the effort required to fix it (a percentage of one or more FTEs’ daily workday), is high. If this does not work for you, run a data profile with one of the many tools out there.
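The de-duplication savings then fall out of a one-liner. The per-touch cost range is the DAMA figure quoted above; the duplicate count is a hypothetical input you would get from a profiling run:

```python
# De-duplication savings using the DAMA per-touch cost range cited in
# the text ($0.45 party to $0.85 product per transaction). The duplicate
# record count is a hypothetical input from a data profiling exercise.

duplicate_records = 200_000  # hypothetical, from profiling
cost_per_touch_low, cost_per_touch_high = 0.45, 0.85
touches_per_year = 1         # conservative floor; 3-5 is more typical

savings_low = duplicate_records * cost_per_touch_low * touches_per_year
savings_high = duplicate_records * cost_per_touch_high * touches_per_year
print(f"Annual de-duplication savings: ${savings_low:,.0f} - ${savings_high:,.0f}")
```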
If standardization of records (zip codes, billing codes, currency, etc.) is the problem, ask your business partner how many customer contacts (calls, mailings, emails, orders, invoices or account statements) fail outright and/or require validation because of these attributes. Once again, apply the productivity gains mentioned earlier and there are your savings. If you look at the number of orders whose payment or revenue recognition gets delayed by a week or a month, and the average order amount, you can quantify how much profit (multiply by operating margin) you would be able to pull into the current financial year from the next one.
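That pull-forward calculation is simple multiplication. All three inputs below are hypothetical figures you would source from order management and finance:

```python
# Profit pulled into the current financial year by fixing the
# standardization failures that delay payment or revenue recognition.
# All inputs are hypothetical placeholders.

delayed_orders_per_year = 1_200  # hypothetical
avg_order_amount = 5_000.0       # hypothetical
operating_margin = 0.12          # hypothetical

profit_pulled_forward = delayed_orders_per_year * avg_order_amount * operating_margin
print(f"Profit accelerated into the current year: ${profit_pulled_forward:,.0f}")
```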
The same is true for speeding up the introduction of a new product, or of a change to one, generating profits earlier. Note that the time value of funds realized earlier is too small to matter in most instances, especially in the current interest environment.
If emails bounce back or snail mail gets returned (no such address, no such name at this address, no such domain, no such user at this domain), (e)mail verification tools can help reduce the bounces. If every mail piece costs $1.25 (forget email due to its minuscule cost) – and this will vary by type of mailing (catalog, promotion postcard, statement letter) – incorrect or incomplete records are wasted cost. If you can, use the fully loaded print cost including 3rd party data prep and returns handling. You will never capture all cost inputs, but take a conservative stab.
If it was an offer, reduced bounces should also improve your response rate (now also true for email). Prospect mail response rates are typically around 1.2% (Direct Marketing Association), whereas phone response rates are around 8.2%. If you know that your current response rate is half that (for argument’s sake) and you send out 100,000 emails of which 1.3% (Silverpop) have customer data issues, then fixing 81-93% of them (our experience) will drop the bounce rate to under 0.3%, meaning more emails will arrive and be relevant. This, multiplied by a standard conversion rate of 3% (MarketingSherpa; industry and channel specific) and your average order value (your data), multiplied by operating margin, gets you a benefit value for revenue.
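Chaining those percentages together looks like this. The send volume, issue rate, fix rate and conversion rate follow the walkthrough above; average order value and operating margin are hypothetical stand-ins for your own data:

```python
# Revenue benefit from fixing email data issues, chaining the rates from
# the text: 100,000 sends, 1.3% with data issues (Silverpop), 81-93%
# fixable (our experience), 3% conversion (MarketingSherpa). Average
# order value and operating margin are hypothetical inputs.

emails_sent = 100_000
issue_rate = 0.013
fix_rate_low, fix_rate_high = 0.81, 0.93
conversion_rate = 0.03
avg_order = 80.0          # hypothetical
operating_margin = 0.10   # hypothetical

def benefit(fix_rate):
    recovered = emails_sent * issue_rate * fix_rate  # emails that now arrive
    return recovered * conversion_rate * avg_order * operating_margin

print(f"Benefit range: ${benefit(fix_rate_low):,.2f} - ${benefit(fix_rate_high):,.2f}")
```

The result per campaign is modest, which is exactly why leverage matters: repeat it across every campaign in a year and the number starts to move.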
If product data and inventory carrying cost or supplier spend are your issue, find out how many supplier shipments you receive every month and the average cost of a part (or cost range), then apply the Aberdeen master data failure rate (71 in 1,000) to use cases around missing or incorrect supersession or alternate part data to assess the value of a single shipment’s overspend. You can also just take the ending inventory amount from the 10-K report and apply a 3-10% improvement (Aberdeen) in a top-down approach. Alternatively, apply 3.2-4.9% to your annual supplier spend (KPMG).
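The top-down version is trivial to model. The percentage ranges are the Aberdeen and KPMG figures cited above; both dollar inputs are hypothetical numbers you would pull from the 10-K and procurement systems:

```python
# Top-down inventory and supplier-spend improvement estimate using the
# Aberdeen (3-10% of ending inventory) and KPMG (3.2-4.9% of annual
# supplier spend) ranges from the text. Dollar inputs are hypothetical.

ending_inventory = 40_000_000        # hypothetical, from the 10-K
annual_supplier_spend = 120_000_000  # hypothetical, from procurement

inv_low, inv_high = ending_inventory * 0.03, ending_inventory * 0.10
spend_low, spend_high = annual_supplier_spend * 0.032, annual_supplier_spend * 0.049

print(f"Inventory improvement: ${inv_low:,.0f} - ${inv_high:,.0f}")
print(f"Supplier spend improvement: ${spend_low:,.0f} - ${spend_high:,.0f}")
```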
You could also investigate the expediting or return cost of shipments in a period due to incorrectly aggregated customer forecasts, wrong or incomplete product information or wrong shipment instructions in a product or location profile. Apply Aberdeen’s 5% improvement rate and there you go.
Consider that a North American utility told us that just fixing their 200 Tier 1 suppliers’ product information increased their discounts from $14 million to $120 million. They also found that fixing one basic attribute out of sixty in one part category saves them over $200,000 annually.
So what ROI percentages would you find tolerable or justifiable for, say an EDW project, a CRM project, a new claims system, etc.? What would the annual savings or new revenue be that you were comfortable with? What was the craziest improvement you have seen coming to fruition, which nobody expected?
Next time, I will add some more “use cases” to the list and look at some philosophical implications of averages.
Recently, I presented a Business Value Assessment to a client. The findings were based on a revenue-generating state government agency. Everyone at the presentation was stunned to find out how much money was left on the table by not basing their activities on transactions that could be cleanly tied to the participating citizenry and a variety of channel partners. Over $38 million in annual benefits had been left on the table, including partially recovered lost revenue, cost avoidance and cost reduction. A higher data impact on this revenue-driven business model could have prevented this.
Given the total revenue volume, this may seem small. However, after factoring in the little technology effort required to “collect and connect” data from existing transactions, it is actually extremely high.
The real challenge for this organization will be the required policy transformation to turn the organization from “data-starved” to “data-intensive”. This would eliminate strategic decisions around new products, locations and customers relying on surveys that face sampling errors, biases, etc. Additionally, surveys are often delayed, making them practically ineffective in this real-time world we live in today.
Despite no applicable legal restrictions, the leadership’s main concern was that gathering more data would erode the public’s trust and positive image of the organization.
To be clear; by “more” data being collected by this type of government agency I mean literally 10% of what any commercial retail entity has gathered on all of us for decades. This is not the next NSA revelation as any conspiracy theorist may fear.
While I respect their culturally driven self-censorship despite no legal barricades, it raises their stakeholders’ (the state’s citizenry) concern over its performance. To be clear, there would be no additional revenue for the state’s programs without more citizen data. You may believe that they already know everything about you, including your income, property value, tax information, etc. However, inter-departmental sharing of criminally-non-relevant information is legally constrained.
Another interesting finding from this evaluation was that they had no sense of the conversion rate from email and social media campaigns. Impressions and click-throughs, as well as hard/soft bounces, were considered more important than tracking who actually generated revenue.
This is a very market-driven organization compared to other agencies. It actually does try to measure itself like a commercial enterprise and attempts to change in order to generate additional revenue for state programs benefiting the citizenry. I can only imagine what non-revenue-generating agencies (local, state or federal) do in this respect. Is revenue-oriented thinking something the DoD, DoJ or Social Security should subscribe to?
Think tanks and political pundits are now looking at the trade-off between bringing democracy to every backyard on our globe and its long-term budget ramifications. The DoD is looking to reduce the active component to its lowest level in decades given the U.S. federal debt level.
A recent article in HBR explains that cost cutting has never sustained an organization’s growth over a longer period of time, but new revenue sources did. Is your company or government agency only looking at cost and personnel productivity?
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations and benchmarks. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.
Healthcare organizations are currently engaged in major transformative initiatives. The American Recovery and Reinvestment Act of 2009 (ARRA) provided the healthcare industry incentives for the adoption and modernization of point-of-care computing solutions including electronic medical and health records (EMRs/EHRs). Funds have been allocated, and these projects are well on their way. In fact, the majority of hospitals in the US are engaged in implementing EPIC, a software platform that is essentially the ERP for healthcare.
These Cadillac systems are being deployed from scratch, with very little data being ported from the old systems into the new. The result is a glut of legacy applications running in aging hospital data centers, consuming every last penny of HIS budgets. Because the data still resides on those systems, hospital staff continues to use them, making the systems difficult to shut down or retire.
Most of these legacy systems do not run on modern technology platforms – they run on the likes of HP TurboIMAGE, InterSystems Caché (MUMPS) and embedded proprietary databases. Finding people who know how to manage and maintain these systems is costly and risky – risky in that data residing in those applications may be subject to data retention requirements (patient records, etc.) yet become inaccessible.
A different challenge for CFOs of these hospitals is the ROI on these EPIC implementations. Because these projects are multi-phased, multi-year, boards of directors are asking about the value realized from these investments. Many are coming up short because they are maintaining both applications in parallel. Relief will come when systems can be retired – but getting hospital staff and regulators to approve a retirement project requires evidence that they can still access data while adhering to compliance needs.
Many providers have overcome these hurdles by successfully implementing an application retirement strategy based on the Informatica Data Archive platform. Several of the largest children’s hospitals in the US are either already saving, or expect to save, $2 million or more annually from retiring legacy applications. The savings come from:
- Eliminating software maintenance and license costs
- Eliminating hardware dependencies and costs
- Reducing storage requirements by 95% (archived data is stored in a highly compressed, accessible format)
- Improving IT efficiency by eliminating specialized processes or skills associated with legacy systems
- Freeing IT resources – teams can spend more of their time on innovation and new projects
Informatica Application Retirement Solutions for Healthcare give hospitals the ability to completely retire legacy applications while maintaining hospital staff access to the archived data. And with built-in security and retention management, records managers and legal teams can satisfy compliance requirements. Contact your Informatica Healthcare team for more information on how you can get that EPIC ROI the board of directors is asking for.
During Informatica World in early June, we were excited to announce our new Potential at Work Community. You can read Jakki Geiger’s blog introducing the Community to learn more about the goals for this great resource. (more…)
Science fiction represents some of the most impactful stories I’ve read throughout my life. By impactful, I mean the ideas have stuck with me 30 years since I last read them. I recently recalled two of these stories and realized they represent two very different paths for Big Data. One path, quite literally, was towards enlightenment. Let’s just say the other path went in a different direction. The amazing thing is that both of these stories were written between 50-60 years ago. (more…)
Recently, the Informatica Marketplace reached a major milestone: we exceeded 1,000 Blocks (Apps). Looking back to three years ago when we started with 70 Blocks from a handful of partners, it’s an amazing achievement to have reached this volume of solutions in such a short time. For me, it speaks to the tremendous value that the Marketplace brings not only to our customers who download more than 10,000 Blocks per month, but also to our partners who have found in the Marketplace a viable route to market and a great awareness and monetization vehicle for their solutions.
There has been a lot of discussion around the explosion of data and what it means to companies trying to leverage this extremely valuable resource. Informatica has a huge part to play in helping customers solve those problems, not only through the technologies we provide directly, but through the tremendous ecosystem that we have built through our partners. The Marketplace has grown to more than 165 unique partner companies, and we’re adding more every day. Blocks such as BI & Analytics using Social Media Data from Deloitte, and Interstage XWand – XBRL Processor from Fujitsu represent offerings from large, established software companies, while Blocks such as Skybot Enterprise Job Scheduler and Undraleu Code Review Tool from Coeurdata are solutions contributed by earlier-stage companies that have experienced significant success and growth. It has been a pleasure helping these companies grow and reach new customers through the Marketplace.
One of the most exciting things about reaching the 1K Block milestone is not just the number of companies on the Marketplace, but the number of solutions that have been contributed by our developer community. Blocks such as Autotype Excel Macro, Execute Workflow, and iExportNormalizer are all solutions that Informatica developers have built because they help them in their daily activities, and through the Marketplace they have found a way to share these valuable assets with the community. In fact, over half of our solutions are free to use, which is a ringing endorsement of the power of the community and a great way to try out any number of useful solutions at no risk. By leveraging enabling technologies such as Informatica’s Cloud Platform as a Service, developers can create and share solutions more quickly and easily than ever before.
Overall, it has been an exciting ride as the Marketplace has rocketed to 1,000 Blocks in under three years, and I look forward to what the next three years have in store!
There are organizations truly reaping the rewards of Big Data, and then there are those who are just trying to catch up. What are the Big Data “leaders” doing that the “laggards” are missing? (more…)
Following up on the discussion I started on GovernYourData.com (thanks to all who provided great feedback), here’s my full proposal on this topic:
We all know about the “Garbage In/Garbage Out” reality that data quality and data governance practitioners have been fighting against for decades. If you don’t trust data when it’s initially captured, how can you trust it when it’s time to consume or analyze it? But I’m also looking at the tougher problem of data degradation. The data comes into your environment just fine, but any number of actions, events – or inactions – turns that “good” data “bad”.
So far I’ve been able to hypothesize eight root causes of data degradation. I’d really love your feedback on both the validity and completeness of these categories. I’ve used similar examples across a number of these to simplify. (more…)
When seeking to justify an investment in Product Information Management (PIM) and building the business case, companies can investigate which key performance indicators are impacted by PIM. The results of the international PIM study demonstrate that a return on investment (ROI) from an Enterprise Product Information Management (PIM) solution is possible within the framework of a multichannel commerce strategy.
1. Search engines
60% of web users use search engines to search for products. (Source: Searchengineland.com)
2. Time-to-market
Over 300 retailers and manufacturers from 17 countries participated in the extensive study by Heiler Software, which delivers more than 30 pages of measurable results. One such result: PIM leads to a 25% faster time-to-market thanks to search-engine-optimized products. (Source: www.pim-roi.com)
3. Shopping cart abandonment
90% of shopping cart abandonments occur because of poor product information. (Source: Internet World Business Magazine)
4. Product returns
40 is the critical number. 40% of buyers intend to return a product when they order it. 40% order more variations of a product. 40% of all product returns are due to poor product information. (Source: Magazine Wirtschaftswoche 7.1.2013 and Return Research – average German mail-order market)
5. Print impacts online
Printed catalogs lead to a 30% boost in online sales. (Source: ECC multichannel survey)
6. Cost savings in print catalog publishing
PIM enables a saving of USD 280,000 by automating manual print catalog production. (Source: LNC PIM survey 2007)
7. In-store sales and customer service
61% of retail managers believe that shoppers are better connected to product information than in-store associates. (Source: Motorola Holiday Shopping Study 2012)
8. Margins with niche products
80% of Heiler PIM customers say they sell at higher margins by pursuing a long tail strategy and increasing assortment size. (Source: www.pim-roi.com)
9. Social sharing
Social sharing generates value. On average across all social networks, a social share drives $3.23 in additional revenue for an event each time someone shares. (Source: Social commerce numbers, October 23, 2012)