Category Archives: Data Governance

IT Is All About Data!

Is this how you think about IT? Or do you think of IT in terms of the technology it deploys instead? I was recently interviewing the CIO of a Fortune 50 company about the changing role of CIOs. When I asked him which technology issues were most important, this CIO said something that surprised me.

He said, “IT is all about data. Think about it. What we do in IT is all about the intake of data, the processing of data, the storing of data, and the analyzing of data. And we need, from data, to increasingly provide the intelligence to make better decisions.”

How many view the function of the IT organization with such clarity?

This was the question I had after hearing this CIO. How many IT organizations view IT as, at its core, a data system? It cannot be very many. Jeanne Ross of MIT CISR contends in her book that company data, “one of its most important assets, is patchy, error-prone, and not up-to-date” (Enterprise Architecture as Strategy, Jeanne Ross, page 7). She also contends that companies with a data-centric view “have higher profitability, experience faster time to market, and get more value from their IT investments” (Enterprise Architecture as Strategy, Jeanne Ross, page 2).

What then do you need to do to get your data house in order?

What, then, should IT organizations do to move their data from something that is “patchy, error-prone, and not up-to-date” to something that is trustworthy and timely? I would contend our CIO friend had it right: we need to manage all four elements of our data process better.

1.     Input Data Correctly

You need to start by making sure that the data you produce is entered consistently and correctly. I liken the need here to a problem I had with my electronic bill pay a few years ago. When my bank changed bill-payment service providers, it started sending my payments to a set of out-of-date payee addresses. This caused me to incur late fees and my credit score to actually go down. The same kind of thing can happen to a business when there are duplicate customers or customer addresses are entered incorrectly. So much of marketing today is about increasing customer intimacy. It is hard to improve customer intimacy when you bug the same customer too often, or never connect with a customer because you had a bad address for them.
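To make the duplicate-customer problem concrete, here is a minimal sketch, not from the original post: the record fields and abbreviation table are invented for illustration. It shows how an exact string comparison sees two customers where light normalization reveals one.

```python
import re

def normalize_address(addr: str) -> str:
    """Lowercase, strip punctuation, and expand a few common
    abbreviations so trivially different spellings compare equal."""
    addr = re.sub(r"[.,]", "", addr.lower().strip())
    abbreviations = {"st": "street", "ave": "avenue", "rd": "road"}
    words = [abbreviations.get(w, w) for w in addr.split()]
    return " ".join(words)

records = [
    {"name": "Acme Corp", "address": "12 Main St."},
    {"name": "ACME Corp", "address": "12 main street"},
]

# Exact comparison counts two customers; normalized comparison counts one.
exact = {(r["name"], r["address"]) for r in records}
normalized = {(r["name"].lower(), normalize_address(r["address"])) for r in records}
print(len(exact), len(normalized))  # 2 1
```

Real deduplication uses fuzzy matching and survivorship rules, but even this toy shows why data must be entered, or at least compared, consistently.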

2.     Process Data to Produce Meaningful Results

You need to collect and manipulate data to derive meaningful information. This is largely about processing data so that its results support meaningful analysis. To do this well, you need to remove data quality issues from the data that is produced. We want, in this step, to make data “trustworthy” to business users.

With this, data can be consolidated into a single view of customer, financial account, etc. A CFO explained the importance of this step by saying the following:

“We often have redundancies in each system, and within the chart of accounts the names and numbers can differ from system to system. And as you establish a bigger and bigger set of systems, you need, in accounting parlance, to roll up the chart of accounts.”

Once data is consistently put together, then you need to consolidate it so that it can be used by business users. This means that aggregates need to be created for business analysis. These should support dimensional analysis so that business users can truly answer why something happened. For finance organizations, timely aggregated data with supporting dimensional analysis enables them to establish themselves as “a business person versus a bean counting historically oriented CPA”. Having this data answers questions like the following:

  • Why are sales targets not being achieved? Which regions or products are falling short?
  • Why is the projected income statement not in conformance with plan? Which expense categories should we cut to bring the income statement in line with business expectations?
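The dimensional roll-up described above can be sketched in a few lines. The data and column names below are invented for illustration; real finance teams would do this in an OLAP or BI tool, but the mechanics are the same: aggregate along a dimension, then compare actual against plan.

```python
from collections import defaultdict

# Toy transaction-level sales data (invented numbers)
sales = [
    {"region": "East", "product": "Widget", "actual": 90, "plan": 100},
    {"region": "East", "product": "Gadget", "actual": 120, "plan": 110},
    {"region": "West", "product": "Widget", "actual": 60, "plan": 100},
]

def rollup(rows, dimension):
    """Aggregate actual vs. plan along one dimension (e.g. region or product)."""
    totals = defaultdict(lambda: {"actual": 0, "plan": 0})
    for row in rows:
        key = row[dimension]
        totals[key]["actual"] += row["actual"]
        totals[key]["plan"] += row["plan"]
    return dict(totals)

by_region = rollup(sales, "region")
# Flag the regions that are missing plan, and by how much.
shortfalls = {k: v["plan"] - v["actual"]
              for k, v in by_region.items() if v["actual"] < v["plan"]}
print(shortfalls)  # {'West': 40}
```

Rolling up by `"product"` instead of `"region"` answers the complementary question of which products are falling short, which is exactly the "why" drill-down the bullets describe.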

3.     Store Data Where it is Most Appropriate

Data storage can take many forms today: applications, a data warehouse, or even a Hadoop cluster. You need an overriding data architecture that considers the entire lifecycle of data. A key element of doing this well involves archiving data as it becomes inactive and protecting data across its entire lifecycle. The former can also involve disposing of information. The latter requires the ability to audit, block, and dynamically mask sensitive production data to prevent unauthorized access.
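A lifecycle-aware storage policy can be sketched very simply. The thresholds and tier names below are hypothetical, purely to illustrate the archive-then-dispose progression the paragraph describes:

```python
from datetime import date

# Assumed policy thresholds, for illustration only
ARCHIVE_AFTER_DAYS = 365
DISPOSE_AFTER_DAYS = 7 * 365

def storage_tier(last_accessed: date, today: date) -> str:
    """Decide where a record belongs in its lifecycle based on age."""
    age = (today - last_accessed).days
    if age >= DISPOSE_AFTER_DAYS:
        return "dispose"
    if age >= ARCHIVE_AFTER_DAYS:
        return "archive"
    return "active"

today = date(2014, 10, 1)
print(storage_tier(date(2014, 9, 1), today))  # active
print(storage_tier(date(2012, 9, 1), today))  # archive
print(storage_tier(date(2005, 9, 1), today))  # dispose
```

Real retention policies key off record type and regulation as well as age, but the point stands: the architecture, not the individual application, should decide where data lives at each stage.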

4.     Enable Analysis, Including Discovering, Testing, and Combining Data

Analysis today is not just about the analysis tools. It is about enabling users to discover, test, and put data together. CFOs we have talked to say they want analysis to expose potential business problems earlier. They want, for example, to know about metrics like average selling price and gross margin by account or by product. They also want to see when they have seasonality effects.

Increasingly, CFOs need to use this information to help predict what the future of their business will look like. At the same time, CFOs say they want to help their businesses make better decisions from data. What limits them today is an enterprise hodgepodge of disparate systems that do not talk to one another. And yet, to report, CFOs need to traverse from front-office to back-office systems.

One CIO said to us that the end goal of any analysis layer should be the ability to trust data and make dependable business decisions. And once dependable data exists, business users say they want “access to data when they need it, when and where they need it.” One CIO likened what is needed here to orchestration when he said:

“Users want to be able to self-serve. They want to be able to assemble data and put it together from different sources at different times. I want them to have no preconceived process. I want them to be able to discover data across all sources.”

Parting thoughts

So, as we said at the beginning of this post, IT is all about the data. With mobile systems of engagement, IT’s customers increasingly want their data at their fingertips. This means business users need to be able to trust that the data they use for business analysis is timely and accurate. That demands that IT organizations get better at managing their core function: data.

Solution Brief: The Intelligent Data Platform
Great Data by Design
The Power of a Data Centric Mindset
Data: Is It Your Competitive Advantage?

Related Blogs

Competing on Analytics
The CFO Viewpoint of Data
Is Big Data Destined To Become Small And Vertical?
Big Data Why?
The Business Case for Better Data Connectivity
What is big data and why should your business care?
Twitter: @MylesSuer

Posted in Data Governance

Ebola: Why Big Data Matters


The Ebola virus outbreak in West Africa has now claimed more than 4,000 lives and has entered the borders of the United States. While emergency response teams, hospitals, charities, and non-governmental organizations struggle to contain the virus, could big data analytics help?

A growing number of data scientists believe so.

Recall the cholera outbreak in Haiti in 2010, after the tragic earthquake: a joint research team from the Karolinska Institute in Sweden and Columbia University in the US analyzed calling data from two million mobile phones on the Digicel Haiti network. This enabled the United Nations and other humanitarian agencies to understand population movements during the relief operations and the subsequent cholera outbreak. They could allocate resources more efficiently and identify areas at increased risk of new cholera outbreaks.

Mobile phones are widely owned, even in the poorest countries in Africa, and they are a rich source of data in regions where other reliable sources are sorely lacking. Senegal’s Orange Telecom provided Flowminder, a Swedish non-profit organization, with anonymized voice and text data from 150,000 mobile phones. Using this data, Flowminder drew up detailed maps of typical population movements in the region.

Today, authorities use this information to evaluate the best places to set up treatment centers, check-posts, and issue travel advisories in an attempt to contain the spread of the disease.

The first drawback is that this data is historic. Authorities really need to be able to map movements in real time, especially since people’s movements tend to change during an epidemic.

The second drawback is that the scope of the data provided by Orange Telecom is limited to a small region of West Africa.

Here is my recommendation to the Centers for Disease Control and Prevention (CDC):

  1. Increase the area of data collection to the entire region of West Africa, which covers over 2.1 million cell-phone subscribers.
  2. Collect mobile-phone mast activity data to pinpoint where calls to helplines are mostly coming from, draw population heat maps, and track population movement. A sharp increase in calls to a helpline is usually an early indicator of an outbreak.
  3. Overlay this data on census data to build up a richer picture.
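Recommendation 2 can be illustrated with a toy sketch. The call records, mast names, and spike threshold below are entirely invented; real call-detail records carry many more fields and real epidemiology uses far more careful statistics. The shape of the idea, counting helpline calls per mast per day and flagging sharp increases, is this:

```python
from collections import Counter

# (mast_id, day) for each anonymized helpline call (invented data)
calls = [
    ("mast_A", 1), ("mast_A", 2), ("mast_A", 2),
    ("mast_B", 1), ("mast_B", 2), ("mast_B", 2), ("mast_B", 2), ("mast_B", 2),
]

def daily_counts(records):
    """Count calls per (mast, day) pair; feeds a per-mast heat map."""
    return Counter(records)

def flag_spikes(counts, threshold=3.0):
    """Flag masts whose day-2 call volume is >= threshold x day-1 volume."""
    flagged = []
    for mast in {m for m, _ in counts}:
        day1, day2 = counts[(mast, 1)], counts[(mast, 2)]
        if day1 and day2 / day1 >= threshold:
            flagged.append(mast)
    return sorted(flagged)

counts = daily_counts(calls)
print(flag_spikes(counts))  # ['mast_B']
```

Plotting `counts` on mast coordinates gives the population heat map; the spike flag is the "early indicator" signal the recommendation describes.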

The most positive impact we can have is to help emergency relief organizations and governments anticipate how a disease is likely to spread. Until now, they have had to rely on anecdotal information, on-the-ground surveys, and police and hospital reports.

Posted in B2B Data Exchange, Big Data, Business Impact / Benefits, Business/IT Collaboration, Data Governance, Data Integration

BCBS 239 – What Are Banks Talking About?

I participated in an EDM Council panel on BCBS 239 earlier this month in London and New York. The panel consisted of chief risk officers, chief data officers, and information management experts from the financial industry. BCBS 239 sets out 14 key principles requiring banks to aggregate their risk data, with a deadline of January 1, 2016, so that banking regulators can help avoid another 2008 crisis. Earlier this year, the Basel Committee on Banking Supervision released the findings of a self-assessment by the Globally Systemically Important Banks (G-SIBs) of their readiness against 11 of the 14 BCBS 239 principles.

Given all of the investments made by the banking industry to improve data management and governance practices for ongoing risk measurement and management, I was expecting to hear signs of significant progress. Unfortunately, as my findings show, there is still much work to be done to satisfy BCBS 239. Here is what we discussed in London and New York.

  • It was clear that the “data agenda” has shifted quite considerably from IT to the business, as evidenced by the number of risk, compliance, and data governance executives in the room. Though it is a good sign that the business is taking more ownership of data requirements, there was limited discussion of the importance of having capable data management technology, infrastructure, and architecture to support a successful data governance practice: specifically, capable data integration, data quality and validation, master and reference data management, metadata to support data lineage and transparency, and business glossary and data ontology solutions to govern the terms and definitions of required data across the enterprise.
  • There was much discussion of accessing, aggregating, and streamlining the delivery of risk data from disparate systems across the enterprise, and of simplifying the complexity that exists today from point-to-point integrations that access the same data from the same systems over and over again, creating points of failure and increasing the maintenance costs of supporting the current state. The idea of replacing those point-to-point integrations with a centralized, scalable, and flexible data hub was clearly recognized as a need; however, it is difficult to envision given the enormous work required to modernize the current state.
  • Data accuracy and integrity continue to be a concern in generating accurate and reliable risk data to meet normal and stress/crisis reporting requirements. Many in the room acknowledged heavy reliance on manual methods implemented over the years. Automating data integration and the onboarding of risk data from disparate systems across the enterprise is important as part of Principle 3; however, much of what is in place today was built as one-off projects against the same systems, accessing the same data and delivering it to hundreds if not thousands of downstream applications in an inconsistent and costly way.
  • Data transparency and auditability were a popular conversation point. The need to provide comprehensive data lineage reports, explaining how data is captured, from where, how it is transformed, and how it is used, remains a concern despite advancements in technical metadata solutions, which are not integrated with banks’ existing risk management data infrastructure.
  • Lastly, there were big concerns about the ability to capture and aggregate all material risk data across the banking group to deliver data by business line, legal entity, asset type, industry, region, and other groupings, in support of identifying and reporting risk exposures, concentrations, and emerging risks. This master and reference data challenge unfortunately cannot be solved by external data utility providers, because banks have legal entity, client, counterparty, and securities instrument data residing in existing systems that must be able to cross-reference any external identifier for consistent reporting and risk measurement.

To sum it up, most banks admit they have a lot of work to do. Specifically, they must address gaps across their data governance and technology infrastructure. BCBS 239 is the latest and biggest data challenge facing the banking industry, and not just for the G-SIBs: mid-size firms at the next level down will also be required to provide similar transparency to regional regulators who are adopting BCBS 239 as a framework for their local markets. BCBS 239 is not just a deadline; the principles set forth are a key requirement for banks to ensure they have the right data to manage risk and to give industry regulators the transparency to monitor systemic risk across the global markets. How ready are you?

Posted in Banking & Capital Markets, Data Aggregation, Data Governance, Data Services

Harrods: Product Information at the Heart of Customer Experience

Did you know Harrods introduces more than 1.7 million new products every year? This includes their own labels, as well as other brands. Recently, Peter Rush, the Harrods Solution Architect responsible for product information, spoke at Informatica’s MDM Day EMEA in London. At the event, he said there are:

“so many things we want to do: Product Information is at the heart of most of them.”

As part of the customer experience program, Harrods identified product information quality as a key asset, next to customer information management.

The product information challenges Harrods faced included the following:

  • Lack of a single product data store
  • Inappropriate product data objectives
  • Massive scale and volume of products and brands (1.7 million new products per year)
  • Concessions and own-bought goods
  • Localized enrichment
  • Media assets all over the estate

While discussing his product information management project, Peter gave a great and simple example. He showed the product descriptions below and asked, “Who knows which two products these are?”:

  1. XX 6621/74 BLK VN SS TOP 969B S
  2. XX37066 L/BLU PRK FLAN SH 440B MED

Then, he solved the mystery. The answer was this:

  1. Black V-neck sleeveless top
  2. Light blue parker print flannel shirt
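Peter's example hints at the kind of abbreviation expansion a product information management system has to perform. The sketch below is purely illustrative: the abbreviation table is invented, not Harrods' actual mapping, and real PIM enrichment is far richer than a token lookup.

```python
# Hypothetical abbreviation table for illustration only
ABBREVIATIONS = {
    "BLK": "black", "VN": "v-neck", "SS": "sleeveless", "TOP": "top",
    "L/BLU": "light blue", "PRK": "parker print", "FLAN": "flannel", "SH": "shirt",
}

def expand(raw: str) -> str:
    """Expand known tokens; drop unrecognized codes (style/size numbers)."""
    words = [ABBREVIATIONS[t] for t in raw.split() if t in ABBREVIATIONS]
    return " ".join(words)

print(expand("XX 6621/74 BLK VN SS TOP 969B S"))
# black v-neck sleeveless top
```

The hard part in practice is governing that mapping, who owns it, how new codes enter it, and how conflicts are resolved, which is exactly why Harrods treats product information as a joint business and IT project.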

Turning vision into reality needs a joint business and IT project

Peter said it is important to build a “flexible team to meet the needs of each project stage, with representation from key business areas”. The team should include representatives from groups like Merchandise Data, the Buying Team, the Web Team, IT, CRM, and the Shopfloor Team. In addition to their core project team, Harrods defined a steering committee and a group of selected super users.

Benefit summary: a combination of people, technology and process

At the end of the session, I was impressed by a graphic that sums up the essentials of product information management success. It is about the people, who are able to do the right things. It is about how technology enables automation. And it is about the process, which turns information into value.
Finally, it is worth mentioning that our partner Javelin Group is leading the PIM implementation at Harrods, and that Andy Hayler, analyst at The Information Difference, wrote an article for CIO Magazine.


Posted in Data Governance, Data Integration Platform, Product Information Management

Gambling With Your Customer’s Financial Data

CIOs and CFOs both dig data security

In my discussions with CIOs over the last couple of months, I asked them about the importance of a series of topics. All of them placed data security at the top of their IT priority list. Even their CFO counterparts, with whom they do not always see eye to eye, said they were very concerned about the business risk to corporate data. These CFOs said that, as part of owning business risk, they touch security, especially protection from hacking. One CFO said that he also worried about the impact of data security on compliance issues, including HIPAA and SOX. Another said this: “The security of data is becoming more and more important. The auditors are going after this. CFOs, for this reason, are really worried about getting hacked. This is a whole new direction, but some of the highly publicized recent hacks have scared a lot of folks, and combined they represent to many of us a watershed event.”

Editor of CFO Magazine

According to David W. Owens, the editor of CFO Magazine, even if you are using “secure” storage, such as internal drives and private clouds, “the access to these areas can be anything but secure. Practically any employee can be carrying around sensitive financial and performance data in his or her pocket, at any time.” Obviously, new forms of data access have created new forms of data risk.

Are some retailers really leaving the keys in the ignition?

Given this like-mindedness among CIOs and CFOs, I was shocked to learn that some of the recently hacked retailers had been using outdated security software, which may have given hackers easier access to company payment data systems. Most amazingly, some retailers had not even encrypted their customer payment data. Because of this, hackers were able to hide on the network for months and steal payment data as customers continued to use their credit cards at the company’s point-of-sale locations.

Why weren’t these transactions encrypted or masked? In my 1998 financial information start-up, we encrypted our databases to protect against hacks of our customers’ personal financial data. One answer came from a discussion with a Fortune 100 insurance CIO. This CIO said: “CIOs/CTOs/CISOs struggle with selling the value of these investments because the C-suite is only interested in hearing about investments with a direct impact on business outcomes and benefits.”

Enterprise security drives enterprise brand today

So how should leaders better argue the business case for security investments? I want to suggest that the value of IT is its “brand promise”. For retailers in particular, if a past purchase decision creates a perceived personal data security risk, IT becomes a liability to the corporation’s brand equity and potentially creates a negative impact on future sales. Increasingly, how these factors are managed either supports or undermines the value of a company’s brand.

My message is this: spend whatever it takes to protect your brand equity; otherwise, a security issue will become a revenue issue.

In sum, this means organizations that want to differentiate themselves and avoid becoming a brand liability need to invest further in their data-centric security strategy and, of course, encryption. The game is no longer just about securing particular applications. IT organizations need to take a data-centric approach to securing customer data and other types of enterprise data. Enterprise-level data governance rules need to be a requirement. A data-centric approach can mitigate business risk by helping organizations understand where sensitive data is and protect it both in motion and at rest.
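What dynamic masking of sensitive fields looks like can be sketched simply. The field names, formats, and masking rule below are invented for illustration; a production system would use a dedicated masking tool with format-preserving rules and role-based policies rather than ad hoc code:

```python
def mask_value(value: str) -> str:
    """Mask a sensitive string, keeping only the last four digits visible."""
    digits = value.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def mask_record(record: dict, sensitive=("card_number", "ssn")) -> dict:
    """Return a copy of the record with its sensitive fields masked."""
    masked = dict(record)
    for field in sensitive:
        if field in masked:
            masked[field] = mask_value(masked[field])
    return masked

row = {"customer": "J. Doe", "card_number": "4111 1111 1111 1234"}
print(mask_record(row))
# {'customer': 'J. Doe', 'card_number': '************1234'}
```

The design point is that the masking rule travels with the data definition, not with any one application, which is what makes the approach data-centric rather than application-centric.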

Related links

Solutions: Enterprise Level Data Security
The State of Data Centric Security
How Is The CIO Role Starting To Change?
The CFO viewpoint on data
CFOs discuss their technology priorities
Twitter: @MylesSuer

 

 

Posted in CIO, Data Governance, Data masking, Data Security, Retail

Are You Looking For A New Information Governance Framework?

A few months ago, while addressing a room full of IT and business professionals at an information governance conference, a CFO said, “…if we designed our systems today from scratch, they would look nothing like the environment we own.” He went on to elaborate that they arrived there by layering thousands of good and valid decisions on top of one another.

Similarly, Information Governance has evolved out of the good work done by those who preceded us, growing into something that only a few could have envisioned. Along the way, technology evolved and changed the way we interact with data to manage our daily tasks. What started as good engineering practices for mainframes gave way to data management.

Then, with technological advances, we encountered new problems, introduced new tasks and disciplines, and created Information Governance in the process. We were standing on the shoulders of data management, armed with new solutions to new problems. Now we face the four Vs of big data, and each of those new data characteristics has introduced a new set of challenges, driving the need for Big Data Information Governance as a response to changing velocity, volume, veracity, and variety.

Do you think we need a different framework?

Before I answer this question, I must ask you: “How comprehensive is the framework you are using today, and how well does it scale to address the new challenges?”

There are several frameworks in the marketplace to choose from. In this blog, I will tell you what questions you need to ask yourself before replacing your old framework with a new one:

Q. Is it nimble?

The focus of data governance practices must allow for nimble responses to changes in technology, customer needs, and internal processes. The organization must be able to respond to emergent technology.

Q. Will it enable you to apply policies and regulations to data brought into the organization by a person or process?

  • Public company: Meet the obligation to protect the investment of the shareholders and manage risk while creating value.
  • Private company: Meet privacy laws even if financial regulations are not applicable.
  • Fulfill the obligations of external regulations from international, national, regional, and local governments.

Q. How does it manage quality?

For big data, the data must be fit for purpose; context might need to be hypothesized for evaluation. Quality does not imply cleansing activities, which might mask the results.

Q. Does it understand your complete business and information flow?

Attribution and lineage are very important in big data. Knowing the source and the destination is crucial to validating analytics results as fit for purpose.

Q. How does it understand the language that you use, and can the framework manage it actively to reduce ambiguity, redundancy, and inconsistency?

Big data might not have a logical data model, so any structured data should be mapped to the enterprise model. Big data still has context and thus modeling becomes increasingly important to creating knowledge and understanding. The definitions evolve over time and the enterprise must plan to manage the shifting meaning.

Q. Does it manage classification?

It is critical for the business steward to classify the overall source, and the contents within it, as soon as it is brought in by its owner, in support of information lifecycle management, access control, and regulatory compliance.

Q. How does it protect data quality and access?

Your information protection must not be compromised for the sake of expediency, convenience, or deadlines. Protect not just what you bring in, but what you join/link it to, and what you derive. Your customers will fault you for failing to protect them from malicious links. The enterprise must formulate the strategy to deal with more data, longer retention periods, more data subject to experimentation, and less process around it, all while trying to derive more value over longer periods.

Q. Does it foster stewardship?

Ensuring the appropriate use and reuse of data requires the action of an employee. This role cannot be automated; it requires the active involvement of a member of the business organization to serve as the steward over the data element or source.

Q. Does it manage long-term requirements?

Policies and standards are the mechanism by which management communicates their long-range business requirements. They are essential to an effective governance program.

Q. How does it manage feedback?

As a companion to policies and standards, an escalation and exception process enables communication throughout the organization when policies and standards conflict with new business requirements. It forms the core process to drive improvements to the policy and standard documents.

Q. Does it foster innovation?

Governance must not squelch innovation. Governance can and should make accommodations for new ideas and growth. This is managed through management of the infrastructure environments as part of the architecture.

Q. How does it control third-party content?

Third-party data plays an expanding role in big data. There are three types, and governance controls must be adequate for the circumstances. They must consider applicable regulations in the operating geographic regions; therefore, you must understand and manage those obligations.

Posted in Big Data, Data Governance, Master Data Management

Do We Really Need Another Information Framework?

The EIM Consortium is a group of nine companies that formed this year with the mission to:

“Promote the adoption of Enterprise Information Management as a business function by establishing an open industry reference architecture in order to protect and optimize the business value derived from data assets.”

That sounds nice, but do we really need another framework for EIM or data governance? Yes, we do, and here’s why. (more…)

Posted in CIO, Data Governance, Data Integration, Enterprise Data Management, Governance, Risk and Compliance, Integration Competency Centers, Uncategorized

Keeping Information Governance Relevant

Gartner’s official definition of Information Governance is “…the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards, and metrics that ensure the effective and efficient use of information in enabling a business to achieve its goals.” It therefore looks to address important considerations that key stakeholders within an enterprise face.

A CIO of a large European bank once asked me – “How long do we need to keep information?”

Keeping Information Governance relevant

This bank had to govern, index, search, and provide content to auditors to show that it is managing data appropriately to meet Dodd-Frank regulation. In the past, this information was retrieved from a database or email. Now, however, the bank was required to produce voice recordings from phone conversations with customers, show the relevant Reuters feeds coming in, and document all appropriate IMs and social media interactions between employees.

All of these were systems the business had never considered before. These environments continued to capture and create data, and with it, complex challenges. They are islands of information that seemingly have nothing to do with each other, yet they impact how the bank governs itself and how it saves any of the records associated with trading or financial information.

Coping with the sheer growth is one issue; what to keep and what to delete is another. There is also the issue of what to do with all the data once you have it. The data is potentially a gold mine for the business, but most businesses just store it and forget about it.

Legislation, in tandem, is becoming more rigorous and there are potentially thousands of pieces of regulation relevant to multinational companies. Businesses operating in the EU, in particular, are affected by increasing regulation. There are a number of different regulations, including Solvency II, Dodd-Frank, HIPAA, Gramm-Leach-Bliley Act (GLBA), Basel III and new tax laws. In addition, companies face the expansion of state-regulated privacy initiatives and new rules relating to disaster recovery, transportation security, value chain transparency, consumer privacy, money laundering, and information security.

Regardless of your size or type of business, there are several key processes you must undertake in order to create an effective information governance program. As a Business Transformation Architect, I see three foundation stones of an effective Information Governance Program:

Assess Your Business Maturity

Understanding the full scope of requirements on your business is a heavy task. Assess whether your business is mature enough to embrace information governance. Many businesses in EMEA do not have an information governance team in place; instead, they have key stakeholders with responsibility for information assets spread across their legal, security, and IT teams.

Undertake a Regulatory Compliance Review

Understanding the legal obligations on your business is critical to shaping an information governance program. Every business is subject to numerous compliance regimes managed by multiple regulatory agencies, which can differ across markets. Many compliance requirements depend on the number of employees and/or turnover reaching certain limits. For example, certain records may need to be stored for 6 years in Poland, yet the same records may need to be stored for only 3 years in France.
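Jurisdiction-specific retention rules like the example above are often captured as a simple rules table. The sketch below is illustrative only: the record type and the default are invented, the two retention periods echo the example in the text, and real obligations must of course come from legal review, not code:

```python
# Retention periods in years per (record type, country); illustrative values.
# The Poland/France figures mirror the example in the text.
RETENTION_YEARS = {
    ("payroll", "PL"): 6,
    ("payroll", "FR"): 3,
}

def retention_years(record_type: str, country: str, default: int = 7) -> int:
    """Look up the retention period; fall back to a conservative default."""
    return RETENTION_YEARS.get((record_type, country), default)

print(retention_years("payroll", "PL"))  # 6
print(retention_years("payroll", "DE"))  # 7 (default)
```

Keeping such rules in one governed table, rather than buried in each application, is what lets a compliance review actually change behavior across markets.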

Establish an Information Governance Team

It is important that a core team be assigned responsibility for the implementation and success of the information governance program. This steering group, together with a nominated information governance lead, can then drive forward operational and practical issues, including agreeing and developing a work program, developing policy and strategy, and planning communication and awareness.

Posted in Data Governance, Financial Services, Governance, Risk and Compliance | 1 Comment

The CFO's Move to Chief Profitability Officer

30% or more of each company's businesses are unprofitable

According to Jonathan Byrnes at the MIT Sloan School, “the most important issue facing most managers …is making more money from their existing businesses without costly new initiatives”. In Byrnes' cross-industry research, he found that 30% or more of each company's businesses are unprofitable. Byrnes claims these business losses are offset by what are “islands of high profitability”. The root cause of this issue is asserted to be the inability of current financial and management control systems to surface profitability problems and opportunities. Why is this the case? Byrnes believes that management budgetary guidance by its very nature assumes the continuation of the status quo. For this reason, when management asks for a revenue increase, revenues get increased across profitable and unprofitable businesses alike. Given this, “the areas of embedded unprofitability remain embedded and largely invisible”. To be completely fair, it should also be recognized that it takes significant labor to put together an accurate and complete picture of direct and indirect costs.

The CFO needs to become the point person on profitability issues


Byrnes believes, nevertheless, that CFOs need to become the corporate point person for surfacing profitability issues. In fact, they should take the lead in a new and important role: the chief profitability officer. This may seem like an odd suggestion, since virtually every CFO, if asked, would view profitability as a core element of their job. But Byrnes believes that CFOs need to move beyond broad, departmental performance measures and build profitability management processes into their companies' core management activities. This task requires the CFO to determine two things.

  1. Which product lines, customers, segments, and channels are unprofitable so investments can be reduced or even eliminated?
  2. Which product lines, customers, segments, and channels are the most profitable so management can determine whether to expand investments and supporting operations?
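
Once direct and indirect costs can be attributed to each transaction, the two determinations above reduce to an aggregation and a partition. A minimal sketch in Python, where all product lines and figures are hypothetical:

```python
# Hypothetical sketch: flag unprofitable vs. most-profitable product lines
# from per-transaction records. All names and figures are illustrative.
transactions = [
    {"product_line": "A", "revenue": 120.0, "direct_cost": 70.0, "indirect_cost": 20.0},
    {"product_line": "A", "revenue": 80.0,  "direct_cost": 60.0, "indirect_cost": 15.0},
    {"product_line": "B", "revenue": 50.0,  "direct_cost": 45.0, "indirect_cost": 20.0},
]

# Aggregate net margin per product line.
profit_by_line: dict[str, float] = {}
for t in transactions:
    margin = t["revenue"] - t["direct_cost"] - t["indirect_cost"]
    profit_by_line[t["product_line"]] = profit_by_line.get(t["product_line"], 0.0) + margin

# Partition: candidates for reduced investment vs. candidates for expansion.
unprofitable = {k: v for k, v in profit_by_line.items() if v < 0}
profitable = dict(sorted(
    ((k, v) for k, v in profit_by_line.items() if v >= 0),
    key=lambda kv: kv[1], reverse=True))

print(unprofitable)  # {'B': -15.0}
print(profitable)    # {'A': 35.0}
```

The hard part in practice is not this aggregation but, as noted above, attributing indirect costs accurately enough that the margin per transaction is trustworthy.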

Why didn’t portfolio management solve this problem?

Now, as a strategy MBA, Byrnes' suggestion leaves me wondering why the analysis proposed by strategy consultants like the Boston Consulting Group didn't solve this problem long ago. After all, portfolio analysis has at its core the notion that relative market share and growth rate determine profitability, and therefore which businesses a firm should build, hold, harvest, or divest share in, that is, where to reduce, eliminate, or expand investment. The truth is that getting at these figures, especially profitability, is a time-consuming effort.

KPMG finds 91% of CFOs are held back by financial and performance systems


As financial and business systems have become more complex, it has become harder and harder to holistically analyze customer and product profitability because the relevant data is spread over a myriad of systems, technologies, and locations. For this reason, 91% of CFO respondents in a recent KPMG survey said that they want to improve the quality of the financial and performance insight they derive from the data they produce. Fully 51% of these CFOs also admitted that the “collection, storage, and retrieval [of] financial and performance data at their company is primarily a manual and/or spreadsheet-based exercise”. Think about it: a majority of these CFO teams' time is spent collecting financial data rather than actively managing corporate profitability.

How do we fix things?

What is needed is a solution that allows financial teams to proactively produce trustworthy financial data from each and every financial system and then reliably combine and aggregate the data coming from multiple financial systems. Having accomplished this, the solution needs to allow financial organizations to slice and dice net profitability for product lines and customers.
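The combine-then-aggregate step described above amounts to normalizing records from systems that name their fields differently, then rolling net profit up by a business dimension such as customer. A minimal sketch in Python, where the system names, field names, and figures are all hypothetical:

```python
# Hypothetical sketch: combine records from two financial systems that use
# different field names, then aggregate net profit by customer.
# System names, fields, and figures are all illustrative.
erp_rows = [
    {"cust": "Acme", "rev": 1000.0, "cost": 800.0},
    {"cust": "Beta", "rev": 400.0,  "cost": 450.0},
]
billing_rows = [
    {"customer_id": "Acme", "billed": 300.0, "cogs": 200.0},
]

def normalize_erp(r):
    """Map ERP field names onto a common schema."""
    return {"customer": r["cust"], "revenue": r["rev"], "cost": r["cost"]}

def normalize_billing(r):
    """Map billing-system field names onto the same common schema."""
    return {"customer": r["customer_id"], "revenue": r["billed"], "cost": r["cogs"]}

combined = [normalize_erp(r) for r in erp_rows] + [normalize_billing(r) for r in billing_rows]

# Slice net profitability by customer across both systems.
net_profit: dict[str, float] = {}
for row in combined:
    net_profit[row["customer"]] = net_profit.get(row["customer"], 0.0) + row["revenue"] - row["cost"]

print(net_profit)  # {'Acme': 300.0, 'Beta': -50.0}
```

The normalization functions stand in for the data-quality and integration work that such a solution automates; the payoff is that the same aggregation can then be run by product line, channel, or any other dimension.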

This approach would not only allow financial organizations to cut their financial operational costs but, more importantly, drive better business profitability by surfacing profitability gaps. At the same time, it would enable financial organizations to assist business units in making more informed customer and product line investment decisions. If a product line or business is narrowly profitable and lacks a broader strategic context or the ability to increase profitability by growing market share, it is a candidate for investment reduction or elimination.

Strategic CFOs need to start asking questions of their business counterparts, starting with the justification for their investment strategy. Key to doing this is consolidating reliable profitability data across customers, products, channel partners, and suppliers. This would eliminate the time spent searching for and manually reconciling data in different formats across multiple systems, and it should deliver ready analysis across locations, applications, channels, and departments.

Some parting thoughts

Strategic CFOs tell us they are trying to seize the opportunity “to be a business person versus a bean counting historically oriented CPA”. I believe a key element of this is seizing the opportunity to become the firm's chief profitability officer. To do this well, CFOs need dependable data that can be sliced and diced by business dimensions. Armed with this information, CFOs can determine the most and least profitable businesses, product lines, and customers. As well, they can come to the business table with the perspective to help guide their company's success.

Related links
Solution Brief: The Intelligent Data Platform
Related Blogs
CFOs Discuss Their Technology Priorities
The CFO Viewpoint upon Data
How CFOs can change the conversation with their CIO?
New type of CFO represents a potent CIO ally
Competing on Analytics
The Business Case for Better Data Connectivity

Twitter: @MylesSuer

Posted in Business Impact / Benefits, Business/IT Collaboration, CIO, Data Governance, Data Quality | Leave a comment