Category Archives: Business/IT Collaboration
Every fall, Informatica sales leadership puts together its strategy for the following year. The revenue target is typically a function of the number of sellers, the addressable market size and key accounts in a given territory, average spend and conversion rate given prior years’ experience, and so on. This straightforward math has not changed in decades, but it assumes that the underlying data are 100% correct. This data includes:
- Number of accounts with a decision-making location in a territory
- Related IT spend and prioritization
- Organizational characteristics like legal ownership, industry code, credit score, annual report figures, etc.
- Key contacts, roles and sentiment
- Prior interaction (campaign response, etc.) and transaction (quotes, orders, payments, products, etc.) history with the firm
Every organization, whether it is a life insurer, a pharmaceutical manufacturer, a fashion retailer or a construction company, knows this math and plans on achieving somewhere above 85% of the resulting target. Office locations, support infrastructure spend, compensation and hiring plans are based on this and communicated accordingly.
So why is it that, when it is an open secret that the underlying data is far from perfect (accurate, current and useful) and corrupts outcomes, so few believe that fixing it has any revenue impact? After all, we are not projecting the climate for the next hundred years with a thousand-plus variables.
If corporate hierarchies are incorrect, your spend projections based on incorrect territory targets, credit terms and discount strategy will be off. If every client touch point lacks a complete picture of cross-departmental purchases and campaign responses, your customer acquisition cost will be too high because you will contact the wrong prospects with irrelevant offers. If billing, tax or product codes are incorrect, your billing will be off; this is a classic telecommunications example worth millions every month. If your equipment location and configuration data is wrong, maintenance schedules will be incorrect, and every hour of production interruption will cost an industrial manufacturer of wood pellets or oil millions.
Also, if industry leaders enjoy an upsell ratio of 17% and you experience 3%, data will have a lot to do with it (assuming you have no formal upsell policy because it would violate your independent middleman relationships).
The challenge is not whether data can create revenue improvements, but how much, given the other factors: people and process.
Every industry laggard can identify a few FTEs who spend 25% of their time putting one-off data repositories together for some compliance, M&A, customer or marketing analytics effort. Organic revenue growth from net-new or previously unrealized revenue should be the focus of any data management initiative. Don’t get me wrong; purposeful recruitment (people), comp plans and training (processes) are important as well. Few people doubt that people and process drive revenue growth. However, few believe the data being fed into these processes has an impact.
This is a head scratcher for me. An IT manager at a US upstream oil firm once told me that it would be ludicrous to think data has a revenue impact. He just fixed data because it was important, so his consumers would know where all the wells are and which ones made a good profit. Isn’t that assuming data drives production revenue? (Rhetorical question)
A CFO at a smaller retail bank said during a call that his account managers know their clients’ needs and history, and that there is nothing more good data can add in terms of value. This happened after twenty other folks at his bank, including his own team, had delivered more than ten use cases, three of which were based on revenue.
Hard cost (materials and FTE) reduction is easy to quantify, and cost avoidance is a leap of faith to a degree, but revenue is no less concrete. Otherwise, why not just throw the dice and see how revenue looks next year without a central customer database? Let every department have each account executive gather their own data, structure it the way they want, put it on paper and make hard copies for distribution to HQ. This is not about paper versus electronic but about the inability to reconcile data from many sources, a problem that is hard enough electronically and even harder on paper.
Have you ever heard of any organization moving back to the Fifties and competing today? That would be a fun exercise. Thoughts or suggestions? I would be glad to hear them.
A growing number of Data Scientists believe so.
If you recall the cholera outbreak in Haiti in 2010 after the tragic earthquake, a joint research team from the Karolinska Institute in Sweden and Columbia University in the US analyzed calling data from two million mobile phones on the Digicel Haiti network. This enabled the United Nations and other humanitarian agencies to understand population movements during the relief operations and the subsequent cholera outbreak. They could allocate resources more efficiently and identify areas at increased risk of new cholera outbreaks.
Mobile phones are widely owned even in the poorest countries in Africa, and they are a rich source of data in regions where other reliable sources are sorely lacking. Senegal’s Orange Telecom provided Flowminder, a Swedish non-profit organization, with anonymized voice and text data from 150,000 mobile phones. Using this data, Flowminder drew up detailed maps of typical population movements in the region.
Today, authorities use this information to evaluate the best places to set up treatment centers, check-posts, and issue travel advisories in an attempt to contain the spread of the disease.
The first drawback is that this data is historic. Authorities really need to be able to map movements in real time, especially since people’s movements tend to change during an epidemic.
The second drawback is that the scope of the data provided by Orange Telecom is limited to a small region of West Africa.
Here is my recommendation to the Centers for Disease Control and Prevention (CDC):
- Increase the area of data collection to the entire region of West Africa, which covers over 2.1 million cell-phone subscribers.
- Collect mobile phone mast activity data to pinpoint where calls to helplines are mostly coming from, draw population heat maps, and track population movement. A sharp increase in calls to a helpline is usually an early indicator of an outbreak.
- Overlay this data on census data to build up a richer picture.
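To make the helpline-signal idea concrete, here is a minimal sketch in Python. The tower names, call counts and threshold are entirely invented for illustration; the point is simply that a sharp increase in helpline calls from one mast can be flagged against its own recent baseline.

```python
import statistics

# Hypothetical mast-activity records: daily counts of calls routed from each
# cell tower to a health helpline over the past week.
helpline_calls = {
    "tower_north": [12, 14, 11, 13, 12, 15, 48],   # sharp spike on the last day
    "tower_south": [20, 22, 19, 21, 23, 20, 22],   # stable baseline
}

def spike_alerts(daily_counts, threshold=3.0):
    """Flag towers whose latest count exceeds baseline mean + threshold * stdev."""
    alerts = []
    for tower, counts in daily_counts.items():
        baseline, latest = counts[:-1], counts[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and latest > mean + threshold * stdev:
            alerts.append(tower)
    return alerts

print(spike_alerts(helpline_calls))  # ['tower_north']
```

A real system would of course work on streaming data and correct for population density, but even this crude check illustrates why real-time mast data beats historic extracts during an epidemic.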
The most positive impact we can have is to help emergency relief organizations and governments anticipate how a disease is likely to spread. Until now, they had to rely on anecdotal information, on-the-ground surveys, police, and hospital reports.
Are you in Sales Operations or Marketing Operations, or are you a Sales Representative, Sales Manager, or Marketing Professional? It’s no secret that if you are, you benefit greatly from the power of performing your own analysis at your own rapid pace. When you have a hunch, you can easily test it by visually analyzing data in Tableau without involving IT. When you face tight timeframes in which to gain business insight from data, being able to do it yourself, in the time you have available and without technical roadblocks, makes all the difference.
Self-service Business Intelligence is powerful! However, we all know it can be even more powerful. When you put together an analysis, you spend about 80% of your time preparing data and just 20% of your time analyzing it to test your hunch or gain your business insight. You don’t need to accept this anymore. We want you to know that there is a better way!
We want to allow you to Flip Your Division of Labor: spend more than 80% of your time analyzing data to test your hunch or gain your business insight, and less than 20% of your time putting together data for your Tableau analysis! That’s right. You like it. No, you love it. No, you are ready to run laps around your chair in sheer joy!! And you should feel this way. You can now spend more time on the higher-value activity of gaining business insight from the data, and even find copious time to spend with your family. How’s that?
Project Springbok is a visionary new product designed by Informatica with the goal of making data access and data quality obstacles a thing of the past. Springbok is meant for the Tableau user: a data person who would rather spend their time visually exploring information and finding insight than struggling with complex calculations or waiting for IT. Project Springbok allows you to put together your data, rapidly, for subsequent analysis in Tableau. It even tells you things about your data that you may not have known, through Intelligent Suggestions that it presents to the user.
Let’s take a quick tour:
- Project Springbok tells you that you have a date column and that you likely want to obtain the Year and Quarter for your analysis (Fig 1). If you so wish, with a single click, voila, you have your corresponding years and even quarters. And it all happens in mere seconds, a far cry from the 45 minutes it would have taken a fluent user of Excel to do using VLOOKUPs.
VALUE TO A MARKETING CAMPAIGN PROFESSIONAL: Rapidly validate and accurately complete your segmentation list, before you analyze your segments in Tableau. Base your segments on trusted data that did not take you days to validate and enrich.
- Then Project Springbok will tell you that you have two datasets that could be joined on a common key in each dataset, email for example, and ask whether you would like to move forward and join them (Fig 2). If you agree with Project Springbok’s suggestion, voila, the datasets are joined in a few seconds. Again, a far cry from the 45 minutes it would have taken a fluent user of Excel to do using VLOOKUPs.
VALUE TO A SALES REPRESENTATIVE OR SALES MANAGER: You can now access your Salesforce.com data (Fig 3) and effortlessly combine it with ERP data to understand your true quota attainment. Never miss quota again due to a revenue split, be it territory or otherwise. Best of all, keep your attainment dataset refreshed and know exactly which data point changed when your true attainment changes.
- Then, if you want, Project Springbok will tell you that you have emails in the dataset, which you may or may not have known, and, more importantly, ask whether you wish to determine which emails can actually be mailed to. If you proceed, Springbok will not only check each email for correct structure (Fig 4) but will very soon also determine whether the email is active and one you can expect a response from. How long would that have taken you to do?
VALUE TO A TELESALES REPRESENTATIVE OR MARKETING EMAIL CAMPAIGN SPECIALIST: Ever thought you had a great email list and then found out most emails bounced? Now you can confidently determine which addresses you will truly be able to deliver to, before you send the message. Email prospects who you know are actually at the company and be confident you have their correct email addresses. You can then easily push the dataset into Tableau to analyze trends in email list health.
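For readers curious what these suggestions amount to under the hood, here is a rough pure-Python sketch of the three operations the tour describes: deriving year and quarter from a date column, checking email structure, and joining two datasets on an email key. The field names and records are invented for illustration, and Springbok performs all of this through its own interface, not through code like this.

```python
import re
from datetime import datetime

# Invented campaign and CRM extracts sharing an email key.
campaign = [
    {"email": "ana@example.com", "responded": "2014-03-12"},
    {"email": "bob@example",     "responded": "2014-07-01"},  # malformed address
]
crm = {"ana@example.com": {"segment": "Enterprise"}}

# Minimal structural check; far weaker than a real deliverability test.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

prepared = []
for row in campaign:
    d = datetime.strptime(row["responded"], "%Y-%m-%d")
    # 1. Derive Year and Quarter from the date column.
    row["year"], row["quarter"] = d.year, (d.month - 1) // 3 + 1
    # 2. Check each email for correct structure.
    row["email_ok"] = bool(EMAIL_RE.match(row["email"]))
    # 3. Join to the second dataset on the common email key.
    row.update(crm.get(row["email"], {"segment": None}))
    prepared.append(row)

print(prepared[0]["year"], prepared[0]["quarter"], prepared[0]["segment"])
# prints: 2014 1 Enterprise
```

Writing and maintaining even this much code, per dataset, per analysis, is exactly the preparation time Springbok is designed to collapse into a few clicks.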
And, in case you were wondering, there is no training or installation required for Project Springbok. The 80% of your time you used to spend on data preparation shrinks considerably, and that is after using only a few of Springbok’s capabilities. One more thing: you can even export directly from Project Springbok into Tableau via the “Export to Tableau TDE” menu item (Fig 5). Project Springbok creates a Tableau TDE file; just double-click it to open Tableau and test your hunch or gain your business insight.
Here are some other things you should know, to convince you that you, too, can spend no more than 20% of your time putting together data for your subsequent Tableau analysis:
- Springbok Sign-Up is Free
- Springbok automatically finds problems with your data, and lets you fix them with a single click
- Springbok suggests useful ways for you to combine different datasets, and lets you combine them effortlessly
- Springbok suggests useful summarizations of your data, and lets you follow through on the summarizations with a single click
- Springbok allows you to access data from your cloud or on-premise systems with a few clicks, and then automatically keeps it refreshed. It will even tell you what data changed since the last time you saw it
- Springbok allows you to collaborate by sharing your prepared data with others
- Springbok easily exports your prepared data directly into Tableau for immediate analysis. You do not have to tell Tableau how to interpret the prepared data
- Springbok requires no training or installation
Go on. Shift your division of labor in the right direction, fast. Sign-Up for Springbok and stop wasting precious time on data preparation. http://bit.ly/TabBlogs
Are you going to be at Dreamforce this week in San Francisco? Interested in seeing Project Springbok working with Tableau in a live demonstration? Visit the Informatica or Tableau booths and see the power of these two solutions working hand in hand. Informatica is at Booth #N1216 and Booth #9 in the Analytics Zone. Tableau is at Booth N2112.
Every two years, the typical company doubles the amount of data it stores. However, this data is inherently “dumb.” Acquiring more of it only seems to compound its lack of intellect.
When revitalizing your business, I won’t ask to look at your data – not even a little bit. Instead, we look at the process of how you use the data. What I want to know is this:
How much of your day-to-day operations are driven by your data?
The Case for Smart Data
I recently learned that 7-Eleven Japan has pushed decision-making down to the store level – in fact, to the level of clerks. Store clerks decide what goes on the shelves in their individual 7-Eleven stores. These clerks push incredible inventory turns. Some 70% of the products on the shelves are new to stores each year. As a result, this chain has been the most profitable Japanese retailer for 30 years running.
Instead of just reading the data and making wild guesses about why something works and why something doesn’t, these clerks acquired the skill of looking at the quantitative and the qualitative and connecting the dots. Data told them what people were talking about, how it related to their product and how much weight it carried. You can achieve this as well. To do so, you must introduce a culture that emphasizes discipline around processes. A disciplined process culture uses:
- A template approach to data with common processes, reuse of components, and a single face presented to customers
- Employees who consistently follow standard procedures
If you cannot develop such company-wide consistency, you will not gain the benefits of ERP or CRM systems.
Make data available to the masses. Like at 7-Eleven Japan, don’t centralize the data decision-making process. Instead, push it out to the ranks. By putting these cultures and practices into play, businesses can use data to run smarter.
“Raw materials costs are the company’s single largest expense category,” said Steve Jenkins, Global IT Director at Valspar, at MDM Day in London. “Data management technology can help us improve business process efficiency, manage sourcing risk and reduce RFQ cycle times.”
Valspar is a $4 billion global manufacturing company that produces a portfolio of leading paint and coating brands. At the end of 2013, the 200-year-old company celebrated record sales and earnings. It also completed two acquisitions. Valspar now has 10,000 employees operating in 25 countries.
As is the case for many global companies, growth creates complexity. “Valspar has multiple business units with varying purchasing practices. We source raw materials from 1,000s of vendors around the globe,” shared Steve.
“We want to achieve economies of scale in purchasing to control spending,” Steve said as he shared Valspar’s improvement objectives. “We want to build stronger relationships with our preferred vendors. Also, we want to develop internal process efficiencies to realize additional savings.”
Poorly managed vendor and raw materials data was impacting Valspar’s buying power
The Valspar team, which focuses sharply on productivity, had an “aha” moment. “We realized our buying power was limited by the age and quality of available vendor data and raw materials data,” revealed Steve.
The core vendor data and raw materials data that should have been the same across multiple systems wasn’t. Data was often missing or wrong. This made it difficult to calculate the total spend on raw materials. It was also hard to calculate the total cost of expedited freight of raw materials. So, employees used a manual, time-consuming and error-prone process to consolidate vendor data and raw materials data for reporting.
These data issues were getting in the way of achieving their improvement objectives. Valspar needed a data management solution.
Valspar needed a single trusted source of vendor and raw materials data
The team chose Informatica MDM, master data management (MDM) technology. It will be their enterprise hub for vendors and raw materials. It will manage this data centrally on an ongoing basis. With Informatica MDM, Valspar will have a single trusted source of vendor and raw materials data.
Informatica PowerCenter will access data from multiple source systems. Informatica Data Quality will profile the data before it goes into the hub. Then, after Informatica MDM does its magic, PowerCenter will deliver clean, consistent, connected and enriched data to target systems.
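As a rough illustration of the kind of consistency problem such profiling surfaces, here is a small Python sketch, with hypothetical system names, vendor IDs and attributes, that flags vendor records whose core attributes disagree across two source systems. This is not how Informatica's products work internally; it merely shows why the manual consolidation Valspar was doing is error-prone.

```python
# Hypothetical extracts of the same vendor master from two systems. Core
# attributes that should agree often do not, which is what makes total
# raw-materials spend so hard to calculate by hand.
erp = {
    "V100": {"name": "Acme Chemicals", "currency": "USD"},
    "V200": {"name": "Baltic Resins",  "currency": "EUR"},
}
procurement = {
    "V100": {"name": "Acme Chemical Co.", "currency": "USD"},  # name drift
    "V200": {"name": "Baltic Resins",     "currency": "EUR"},
}

def find_conflicts(a, b):
    """Return {vendor_id: [attributes that disagree between the two systems]}."""
    conflicts = {}
    for vid in a.keys() & b.keys():
        diffs = [k for k in a[vid] if a[vid][k] != b[vid].get(k)]
        if diffs:
            conflicts[vid] = diffs
    return conflicts

print(find_conflicts(erp, procurement))  # {'V100': ['name']}
```

With thousands of vendors and more than two systems, the number of such conflicts grows quickly, which is the argument for a single trusted hub rather than spreadsheet reconciliation.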
Better vendor and raw materials data management results in cost savings
Valspar expects to gain the following business benefits:
- Streamline the RFQ process to accelerate raw materials cost savings
- Reduce the total number of raw materials SKUs and vendors
- Increase productivity of staff focused on pulling and maintaining data
- Leverage consistent global data visibility to:
- increase leverage during contract negotiations
- improve acquisition due diligence reviews
- facilitate process standardization and reporting
Valspar’s vision is to transform data and information into trusted organizational assets
“Mastering vendor and raw materials data is Phase 1 of our vision to transform data and information into trusted organizational assets,” shared Steve. In Phase 2 the Valspar team will master customer data so they have immediate access to the total purchases of key global customers. In Phase 3, Valspar’s team will turn their attention to product or finished goods data.
Steve ended his presentation with some advice. “First, include your business counterparts in the process as early as possible. They need to own and drive the business case as well as the approval process. Also, master only the vendor and raw materials attributes required to realize the business benefit.”
Want more? Download the Total Supplier Information Management eBook. It covers:
- Why your fragmented supplier data is holding you back
- The cost of supplier data chaos
- The warning signs you need to be looking for
- How you can achieve Total Supplier Information Management
30% or more of each company’s businesses are unprofitable
According to Jonathan Byrnes at the MIT Sloan School, “the most important issue facing most managers …is making more money from their existing businesses without costly new initiatives”. In Byrnes’ cross-industry research, he found that 30% or more of each company’s businesses are unprofitable. Byrnes claims these business losses are offset by “islands of high profitability”. The root cause of this issue is the inability of current financial and management control systems to surface profitability problems and opportunities. Why is this the case? Byrnes believes that management budgetary guidance by its very nature assumes the continuation of the status quo. For this reason, the response to management asking for a revenue increase is to increase revenues for profitable and unprofitable businesses alike. Given this, “the areas of embedded unprofitability remain embedded and largely invisible”. To be completely fair, it should be recognized that it takes significant labor to put together an accurate and complete picture of direct and indirect costs.
The CFO needs to become the point person on profitability issues
Byrnes believes, nevertheless, that CFOs need to become the corporate point person for surfacing profitability issues. They, in fact, should act as the leader of a new and important role, the chief profitability officer. This may seem like an odd suggestion since virtually every CFO if asked would view profitability as a core element of their job. But Byrnes believes that CFOs need to move beyond broad, departmental performance measures and build profitability management processes into their companies’ core management activities. This task requires the CFO to determine two things.
- Which product lines, customers, segments, and channels are unprofitable so investments can be reduced or even eliminated?
- Which product lines, customers, segments, and channels are the most profitable so management can determine whether to expand investments and supporting operations?
Why didn’t portfolio management solve this problem?
Now, as a strategy MBA, Byrnes’ suggestion leaves me wondering why the analysis proposed by strategy consultants like the Boston Consulting Group didn’t solve this problem a long time ago. After all, portfolio analysis has at its core the notion that relative market share and growth rate determine profitability, and thus in which businesses a firm should build, hold, harvest, or divest share, i.e. reduce, eliminate, or expand investment. The truth is that getting at these figures, especially profitability, is a time-consuming effort.
KPMG finds 91% of CFOs are held back by financial and performance systems
As financial and business systems have become more complex, it has become harder and harder to holistically analyze customer and product profitability because the relevant data is spread over a myriad of systems, technologies, and locations. For this reason, 91% of CFO respondents in a recent KPMG survey said that they want to improve the quality of their financial and performance insight from the data they produce. An amazing 51% of these CFOs also admitted that the “collection, storage, and retrieval of financial and performance data at their company is primarily a manual and/or spreadsheet-based exercise”. Think about it: the majority of these CFO teams’ time is spent collecting financial data rather than actively managing corporate profitability.
How do we fix things?
What is needed is a solution that allows financial teams to proactively produce trustworthy financial data from each and every financial system and then reliably combine and aggregate the data coming from multiple financial systems. Having accomplished this, the solution needs to allow financial organizations to slice and dice net profitability for product lines and customers.
This approach would not only allow financial organizations to cut their financial operational costs but more importantly drive better business profitability by surfacing profitability gaps. At the same time, it would enable financial organizations to assist business units in making more informed customer and product line investment decisions. If a product line or business is narrowly profitable and lacks a broader strategic context or ability to increase profitability by growing market share, it is a candidate for investment reduction or elimination.
Strategic CFOs need to start asking questions of their business counterparts, starting with the justification for their investment strategy. Key to doing this is consolidating reliable profitability data across customers, products, channel partners, and suppliers. This would eliminate the time spent searching for and manually reconciling data in different formats across multiple systems. It should deliver ready analysis across locations, applications, channels, and departments.
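Once profitability data is consolidated, the slicing and dicing itself is straightforward. The sketch below, in Python with purely invented figures, aggregates net profit along a chosen business dimension and surfaces the embedded unprofitability that departmental averages hide:

```python
from collections import defaultdict

# Invented transactions, assumed already consolidated from multiple financial
# systems: revenue and fully loaded cost per customer and product line.
transactions = [
    {"customer": "Acme",   "product_line": "Coatings", "revenue": 900.0, "cost": 700.0},
    {"customer": "Acme",   "product_line": "Resins",   "revenue": 400.0, "cost": 520.0},
    {"customer": "Zenith", "product_line": "Coatings", "revenue": 300.0, "cost": 150.0},
]

def profit_by(dimension, rows):
    """Aggregate net profit along one business dimension (customer, product line, ...)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row["revenue"] - row["cost"]
    return dict(totals)

by_line = profit_by("product_line", transactions)
unprofitable = {segment: p for segment, p in by_line.items() if p < 0}
print(by_line)       # {'Coatings': 350.0, 'Resins': -120.0}
print(unprofitable)  # {'Resins': -120.0}
```

Note that every customer here is profitable in total while one product line loses money, which is exactly the kind of gap Byrnes argues budgetary roll-ups keep invisible.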
Some parting thoughts
Strategic CFOs tell us they are trying to seize the opportunity “to be a business person versus a bean counting historically oriented CPA”. I believe a key element of this is seizing the opportunity to become the firm’s chief profitability officer. To do this well, CFOs need dependable data that can be sliced and diced by business dimensions. Armed with this information, CFOs can determine the most and least profitable businesses, product lines, and customers. As well, they can come to the business table with the perspective to help guide their company’s success.
- Solution Brief: The Intelligent Data Platform
- CFOs Discuss Their Technology Priorities
- The CFO Viewpoint upon Data
- How CFOs Can Change the Conversation with Their CIO
- New Type of CFO Represents a Potent CIO Ally
- Competing on Analytics
- The Business Case for Better Data Connectivity
Last time I talked about how benchmark data can be used in IT and business use cases to illustrate the financial value of data management technologies. This time, let’s look at additional use cases, and at how to philosophically interpret the findings.
So here are some additional areas of investigation for justifying a data quality based data management initiative:
- Compliance or other audit data and report preparation and rebuttal (FTE cost as above)
- Excess insurance premiums on incorrect asset or party information
- Excess tax payments due to incorrect asset configuration or location
- Excess travel or idle time between jobs due to incorrect location information
- Excess equipment downtime (not revenue generating) or MTTR due to incorrect asset profile or misaligned reference data not triggering timely repairs
- Incorrect equipment location or ownership data splitting service costs or revenues the wrong way
- Party relationship data not tied together creating duplicate contacts or less relevant offers and lower response rates
- Lower than industry average cross-sell conversion ratio due to inability to match and link departmental customer records and underlying transactions and expose them to all POS channels
- Lower than industry average customer retention rate due to lack of full client transactional profile across channels or product lines to improve service experience or apply discounts
- Low annual supplier discounts due to incorrect or missing alternate product data or aggregated channel purchase data
I could go on forever, but allow me to touch on a sensitive topic: fines. Fines, or performance penalties by private or government entities, only make sense to bake into your analysis if they happen repeatedly at fairly predictable intervals and are “relatively” small per incident. Otherwise they should be treated like M&A activity. Nobody will buy into cost savings in the gazillions if a transaction only happens once every ten years. That’s like building a business case on a lottery win or a life insurance payout with a sample size of one family. Sure, if it happens you just made the case, but will it happen…soon?
Use benchmarks and ranges wisely, but don’t over-think the exercise either; it becomes paralysis by analysis. If you want to make it super-scientific, hire an expensive consulting firm for a three-month, $250,000 to $500,000 engagement and have every staffer spend a few days with them away from their day job to make you feel 10% better about the numbers. Was that worth half a million dollars just in third-party cost? You be the judge.
In the end, you are trying to find out, and position, whether a technology will fix a $50,000, $5 million or $50 million problem. You are also trying to gauge where the key areas of improvement are in terms of value and correlate the associated cost (higher value normally equals higher cost due to higher complexity) and risk. After all, who wants to stand before a budget committee, prophesy massive savings in one area, and then fail because it would have been smarter to start with something simpler and a quicker win to build upon?
The secret sauce to avoiding this consulting expense and risk is natural curiosity, willingness to do the legwork of finding industry benchmark data, knowing what goes into those benchmarks (process versus data improvement capabilities) to avoid inappropriate extrapolation, and using sensitivity analysis to hedge your bets. Moreover, trust an (internal?) expert to indicate wider implications and trade-offs. Most importantly, you have to be a communicator willing to talk to many folks on the business side, with criminal interrogation qualities not unlike those in your run-of-the-mill crime show. Some folks just don’t want to talk, often because they have ulterior motives (protecting their legacy investment or process) or are hiding skeletons in the closet (recent bad performance). In this case, find more amenable people to quiz, or pry the information out of these tough nuts if you can.
Lastly, if you find ROI numbers that appear astronomical at first, remember that leverage is a key factor. If a technical capability touches one application (a credit risk scoring engine), one process (quotation), one type of transaction (talent management self-service), or a limited set of people (procurement), the ROI will be lower than for a technology touching several of each. If your business model drives thousands of high-value (thousands of dollars) transactions, rather than ten twenty-million-dollar ones or twenty million one-dollar ones, your ROI will be higher. After all, consider this: retail e-mail marketing campaigns average an ROI of 578% (softwareprojects.com), and that is with really bad data. Imagine what improved data can do on that front alone.
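The sensitivity analysis mentioned earlier can be as simple as bracketing the one assumption you are least sure of. The sketch below, in Python with purely illustrative figures, computes first-year ROI under low, base and high assumptions about the revenue uplift the data improvement can claim credit for:

```python
# Purely illustrative figures: a $50M-revenue business with a $500K
# data-quality project. The uplift rate (share of revenue the improved data
# can claim credit for) is the shaky assumption, so bracket it.
annual_revenue = 50_000_000.0
project_cost = 500_000.0

def roi(uplift_rate):
    """Simple first-year ROI: (benefit - cost) / cost."""
    benefit = annual_revenue * uplift_rate
    return (benefit - project_cost) / project_cost

scenarios = {"low": 0.005, "base": 0.02, "high": 0.05}  # assumed uplift rates
for name, rate in scenarios.items():
    print(f"{name}: ROI = {roi(rate):.0%}")
# low: ROI = -50%
# base: ROI = 100%
# high: ROI = 400%
```

Presenting the range rather than a single point estimate is what lets you stand before the budget committee without betting your credibility on the high case.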
I found massive differences between what improved asset data can deliver in a petrochemical or utility company versus product data in a fashion retailer or customer (loyalty) data in a hospitality chain. The assertion of cum hoc ergo propter hoc is a key assumption in how technology delivers financial value. As long as the business folks agree, or can fence in the relationship, you are on the right path.
What was your best or worst job of justifying someone giving you money to invest? Share that story.
According to the Financial Executives Institute, CFOs say their second-highest priority this year is to harness business intelligence and big data. Their highest priority is to improve cash flow and working capital efficiency and effectiveness. This means CFOs’ top two priorities are centered on data. At roughly the same time, KPMG found in its survey of CFOs that 91% want to improve the quality of the financial and performance insight obtained from the data they produce. Even more amazing, 51% of CFOs admitted that “collecting, storing, and retrieving financial and performance data at their company is primarily accomplished through a manual and/or spreadsheet-based exercise”. From our interviews of CFOs, we believe this number is much higher.
Your question at this point, if you are not a CFO, should be: how can this be the case? After all, strategy consultants like Booz and Company actively measure the degree of digitization and automation taking place in businesses by industry, and these numbers have shown a strong upward bias year after year. How can the finance organization be digitized for data collection but still largely manual in its processes for putting together the figures that management and the market need?
CFOs do not trust their data
In our interviews of CFOs, one CFO answered this question bluntly: “If the systems suck, then you cannot trust the numbers when you get them.” This reality truly limits CFOs in how they respond to their top priorities. Management of the P&L, expense management, compliance, and regulatory reporting are all impacted by the CFO’s data problem. Instead of doing a better job on these issues, CFOs and their teams remain largely focused on “getting the numbers right”. Worse, answering business questions like how much revenue a customer provides, or how profitable that customer is, involves manual pulls of data from more than one system. And yes, similar data issues exist in financial services organizations, which close their books nightly.
The CFOs I have talked to admit without hesitation that data is a big issue for them. They worry about data quality at the source and about their ability to do meaningful financial or managerial analysis. They need to rely on data in order to report, but, just as important, they need it to help drive synergies across businesses. This matters because CFOs say they want to move from being mere “bean counters” to being participants in the strategy of their enterprises.
To succeed, CFOs say they need timely, accurate data. However, they are the first to discuss how disparate systems get in their way. CFOs believe that making their lives easier starts with the systems that support them; what is needed is real integration and consolidation of data. One CFO put it this way: “we need the integration of the right systems to provide the right information so we can manage and make decisions at the right time”. CFOs clearly want to know that the accounting systems are working and reliable. At the same time, CFOs want, for example, a holistic view of the customer. When asked why this isn't a marketing activity, they say it is a business issue that CFOs need to help manage: “We want to understand the customer across business units. It is a finance objective because finance is responsible for business metrics and there are gaps in business metrics around customer. How many cross-sell opportunities is the business as a whole pursuing?”
Chief Profitability Officers?
Jonathan Byrnes at the MIT Sloan School confirms this viewpoint is becoming a larger trend when he suggests that CFOs need to take on the role of “Chief Profitability Officer”. Wearing this hat, CFOs, in his view, need to determine which product lines, customers, segments, and channels are the most, and the least, profitable. Once again, this requires that CFOs tackle their data problem to obtain relevant, holistic information.
CIOs remain responsible for data delivery
CFOs believe that CIOs remain responsible for how data is delivered, while CFOs themselves need to lead in creating validated data and reports. Clearly, if data delivery remains a manual process, the CFO will be severely limited in the ability to adequately support this new and strategic charter. Yet when asked whether they see data as a competitive advantage, CFOs say that “every CFO would view data done well as a competitive advantage”. Some even suggest that data is the last competitive advantage, a view that fits well with Thomas Davenport's “Competing on Analytics”. The question is how soon CIOs and CFOs will work together to get the finance organization out of its mess of manually massaging and consolidating financial and business data.
Earlier this month, CNBC.com published its first-ever R&D All-Stars: CNBC RQ 50, ranking the top 50 public companies by return on research and development investment. Informatica came in the top ten as the first pure software play, ahead of great software companies like Google, Amazon, and Salesforce. CNBC.com references a companion article by David Spiegel, “Boring stocks that generate R&D heat-and profits”. The article made an excellent point: tying R&D spending to corporate revenue growth and market value is a better gauge of the productivity of that spending than the raw dollar amount.
Unlike other R&D lists and rankings, the RQ50 was less concerned with pure dollars than with what a company actually did with them: it measures the increase in revenue as it relates to the increase in R&D expenditures. Its methodology was provided by Professor Anne Marie Knott of Washington University in St. Louis, who tracks and studies corporate R&D investment and has found that companies that regularly turn R&D into income typically place innovation at the forefront of the corporate mission and have a structure and culture that support it.
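To make the idea concrete, here is a minimal sketch of the kind of ratio the RQ50 concept implies, revenue growth per unit of R&D-spend growth. This is an illustrative simplification with hypothetical figures, not Professor Knott's actual RQ methodology, which estimates the elasticity of revenue with respect to R&D from firm-level data.

```python
# Simplified sketch of an R&D-productivity style metric.
# NOT the actual RQ50/RQ methodology; figures below are hypothetical.

def growth(start: float, end: float) -> float:
    """Fractional growth between two periods."""
    return (end - start) / start

def rd_productivity(rev_start: float, rev_end: float,
                    rd_start: float, rd_end: float) -> float:
    """Revenue growth generated per unit of R&D-spend growth."""
    return growth(rev_start, rev_end) / growth(rd_start, rd_end)

# A hypothetical firm doubles R&D (10 -> 20) while tripling revenue (100 -> 300):
print(rd_productivity(100.0, 300.0, 10.0, 20.0))  # 2.0
```

On this simplified view, a ratio above 1.0 means each incremental dollar of R&D growth was matched by proportionally greater revenue growth, which is the spirit of ranking companies by what they did with their R&D dollars rather than by how many dollars they spent.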
Informatica is on the list because its revenue gains between 2006 and 2013 correlate directly with its increased R&D investment over the same period. While the list specifically cites the 2013 figures, the result is due to a systematic and long-term strategic initiative to place innovation at the core of our business plan.
Informatica has innovated broadly across its product spectrum. I can personally speak to one area where it has invested smartly and made significant gains: Informatica Cloud. Informatica decided to make its initial investment in the cloud in 2006 and was early to market in cloud integration. In fact, back in 2006, very few of today's well-known SaaS companies were even publicly traded. The most popular SaaS app today, Salesforce.com, had revenues of just $309 million in FY2006, compared with over $4 billion in FY2014. Amazon EC2, one of the core services of Amazon Web Services (AWS), was itself only announced that year. Apart from EC2, Amazon had just six other services in 2006; by 2014, that number had ballooned to over 30.
In his article about the RQ50, Spiegel talks about how the companies on the list aren’t just listening to what customers want or need now. They’re also challenging themselves to come up with the things the market can use two or ten years into the future. In 2006, Informatica took the same approach with its initial investment in cloud integration.
For us, it started with an observation, and then a commitment to the belief, that we were at an inflection point with the cloud, on the cusp of a true megatrend that represented a huge opportunity for the integration industry. Informatica assembled a small, agile group of strong leaders with varying skills and experience, pulled from different areas of the company: sales, engineering, and product management. It also meant throwing away the traditional measures of success and identifying new, more appropriate metrics to benchmark our progress. And finally, it meant partnering with like-minded companies, initially Salesforce and NetSuite and later Amazon, and pivoting our core strength, on-premise data integration technology, in a new direction.
The result was the first iteration of Informatica Cloud. It leveraged the fruit of our R&D investment, the Vibe Virtual Data Machine, to let SaaS administrators and line-of-business IT perform lightweight cloud integrations between their on-premise and cloud applications without involving an integration developer. Subsequent work and innovation have continued along the same path, adding tools like drag-and-drop design interfaces and mapping wizards, with the end goal of giving line-of-business (LOB) IT, cloud application administrators, and citizen integrators a single platform for all the integration patterns they require, on their own timeline. Informatica Cloud has consistently delivered two to three releases every year and is already on Release 20. From its start with Data Replication for Salesforce, the Cloud team has added bigger and better functionality: connectivity for over 100 applications and data protocols, integration services opened up through REST APIs, cloud master data management and cloud test data management capabilities that go beyond integration, and, most recently, optimized batch and real-time cloud integration under a single unified platform.
And it goes on to this day, with investments in new innovations and directions like Informatica Project Springbok. With Project Springbok, we are replicating what we did with Informatica Cloud, this time for citizen integrators: drawing on our vast experience working with customers and building cutting-edge technology IP over the last 20 years to enable them to harmonize data faster for better insights (and, hopefully, fewer late nights writing spreadsheet formulas). What we do after Project Springbok is anyone's guess, but wherever it leads, it will be sure to keep us on lists like the RQ 50 for some time to come.