Tag Archives: Analytics

Major Oil Company Uses Analytics to Gain Business Advantage

According to Michelle Fox of CNBC and Stephen Schork, the oil industry is in "dire straits". U.S. crude posted its ninth straight weekly loss this week, landing under $50 a barrel. The news is bad enough that it is now expected to lead to major job losses. The Dallas Federal Reserve anticipates that Texas could lose about 125,000 jobs by the end of June. Patrick Jankowski, an economist and vice president of research at the Greater Houston Partnership, expects exploration budgets will be cut 30-35 percent, which will result in approximately 9,000 fewer wells being drilled. The problem is that "if oil prices keep falling, at some point it's not profitable to pull it out of the ground" ("When, and where, oil is too cheap to be profitable", CNBC, John W. Schoen).

This means that a portion of the world's oil supply will become unprofitable to produce. According to Wood Mackenzie, "once the oil price reaches these levels, producers have a sometimes complex decision to continue producing, losing money on every barrel produced, or to halt production, which will reduce supply". The question is: are these the only answers?

Major Oil Company Uses Analytics to Gain Business Advantage

A major oil company that we are working with has determined that data is a success enabler for its business. It is demonstrating what we at Informatica like to call a "data ready business"—a business that is ready for any change in market conditions. This company is using next-generation analytics to ensure its business's survival and to make sure it does not become what Jim Cramer likes to call a "marginal producer". This company has told us that its success is based upon being able to extract oil more efficiently than its competitors.

Historically, data analysis was pretty simple

Traditionally, oil producers would get oil by drilling a new hole in the ground. In six months, the oil would start flowing commercially and they would be in business. This meant it would typically take six months or longer before they could get any meaningful results, including data that could be used to make broader production decisions.

Drilling from data

Today, oil is also produced from shale using fracking techniques. This process can take only 30-60 days before oil producers start seeing results. It is based not just on innovation in the refining of oil, but also on innovation in the refining of data from which operational business decisions can be made. The benefits of this approach include the following:

Improved fracking process efficiency

Fracking is a very technical process. Producers can have two wells on the same field that are performing at very different levels of efficiency. To address this issue, the oil company we have been discussing throughout this piece is using real-time data to optimize its oil extraction across an entire oil field or region. Insights derived from this data allow it to compare wells in the same region for efficiency or productivity, and even to switch off certain wells if the oil price drops below profitability thresholds. This ability is especially important as the price of oil continues to drop. At $70/barrel, many operators go into the red, while more efficient, data-driven operators can remain profitable at $40/barrel. So efficiency is critical across a system of wells.
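To make the idea concrete, here is a minimal sketch, in Python, of the kind of per-well profitability check described above. The well names, per-barrel costs, and price are invented for illustration; they are not the producer's actual figures or any vendor's implementation.

```python
# A minimal sketch of a per-well profitability check. All names, costs, and
# thresholds below are hypothetical illustrations.

def wells_to_idle(wells, oil_price_per_barrel):
    """Return the wells whose marginal cost per barrel exceeds the current price."""
    return [
        well["name"]
        for well in wells
        if well["cost_per_barrel"] > oil_price_per_barrel
    ]

wells = [
    {"name": "Well-A", "cost_per_barrel": 38.0},  # efficient, data-driven operation
    {"name": "Well-B", "cost_per_barrel": 72.0},  # high-cost well on the same field
]

print(wells_to_idle(wells, oil_price_per_barrel=50.0))  # ['Well-B']
```

In practice such a decision would draw on the real-time well data discussed above rather than static cost figures, but the comparison itself is this simple.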

Using data to decide where to build wells in the first place

When constructing a fracking or oil sands well, you need detailed information on trends and formulas to extract oil from the ground. On a site with 100+ wells, for example, each well is slightly different because of water tables, ground structure, and the details of the geography. You need the right data, the right formula, and the right method to extract the oil at the best price without impacting the environment at the same time.

The right technology delivers the needed business advantage

Of course, technology has never been simple to implement. The company we are discussing is processing 1.2 petabytes of data, and this volume is only increasing. It is running fiber optic cables down into wells to gather data in real time. As a result, it is receiving vast amounts of real-time data but cannot store and analyze that volume efficiently in conventional systems. Meanwhile, the time needed to aggregate data and run reports can miss the window of opportunity while increasing cost. Making matters worse, this company had many different varieties of data. It also turns out that quite a bit of the useful information in its data sets was in the comments section of its source application, so traditional data warehousing would not help it extract the information it really needs. The company decided to move to a new technology, Hadoop. But even seemingly simple problems, like getting access to data, were an issue within Hadoop. If you didn't know the right data analyst, you might not get the data you needed in a timely fashion. Compounding things, a lack of Hadoop skills in Oklahoma proved to be a real problem.
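As an aside, here is a rough Python sketch of why free-text comment fields are hard for a traditional warehouse: the useful values have to be parsed out of unstructured notes before they can be queried. The comment wording and the measurements extracted are hypothetical; the company's actual comment data is not described here.

```python
import re

# Hypothetical operator notes like those described above; the wording and the
# fields extracted are invented for illustration only.
comments = [
    "Pump pressure dropped to 2100 psi overnight; sand content rising.",
    "Crew reports flow rate 450 bbl/day after choke adjustment.",
]

PRESSURE = re.compile(r"(\d+)\s*psi", re.IGNORECASE)
FLOW = re.compile(r"(\d+)\s*bbl/day", re.IGNORECASE)

def extract_measurements(text):
    """Pull simple numeric measurements out of a free-text operator note."""
    return {
        "pressure_psi": int(m.group(1)) if (m := PRESSURE.search(text)) else None,
        "flow_bbl_day": int(m.group(1)) if (m := FLOW.search(text)) else None,
    }

for note in comments:
    print(extract_measurements(note))
```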

The right technology delivers the right capability

The company had been using a traditional data warehousing environment for years, but it needed help dealing with its Hadoop environment. This meant dealing with the volume, variety, and quality of its source well data. It needed a safe, efficient way to integrate all types of data on Hadoop at any scale without having to learn the internals of Hadoop. Early adopters of Hadoop and other big data technologies have had no choice but to hand-code using Java or scripting languages such as Pig or Hive. Hiring and retaining big data experts proved time consuming and costly. This is because data scientists and analysts can spend only 20 percent of their time on data analysis and the rest on the tedious mechanics of data integration such as accessing, parsing, and managing data. Fortunately for this oil producer, it didn't have to be this way. It was able to avoid the specialized coding normally required to scale performance on distributed computing platforms like Hadoop. Additionally, it was able to "Map Once, Deploy Anywhere," knowing that even as technologies change it can run data integration jobs without having to rebuild data processing flows.

Final remarks

It seems clear that we live in an era where data is at the center of just about every business. Data-ready enterprises are able to adapt and win regardless of changing market conditions. These businesses invested in building their enterprise analytics capability before market conditions changed. In this case, these oil producers will be able to produce oil at lower costs than others within their industry. Analytics provides three benefits to oil producers:

  • Better margins and lower costs from operations
  • Lower risk of environmental impact
  • Less time to build a successful well

In essence, those that build analytics as a core enterprise capability will continue to have a right to win within a dynamic oil pricing environment.

Related links

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform
Author Twitter: @MylesSuer


Garbage In, Garbage Out? Don’t Take Data for Granted in Analytics Initiatives!

The verdict is in. Data is now broadly perceived as a source of competitive advantage. We all feel the heat to deliver good data. It is no wonder organizations view Analytics initiatives as highly strategic. But the big question is, can you really trust your data? Or are you just creating pretty visualizations on top of bad data?

We also know there is a shift towards self-service Analytics. But did you know that according to Gartner, "through 2016, less than 10% of self-service BI initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business"?1 This means that you may actually show up at your next big meeting with data that contradicts your colleague's data. Perhaps you are not working off the same version of the truth. Maybe you have siloed data on different systems that are not working in concert? Or is your definition of 'revenue' or 'leads' different from your colleague's?

So are we taking our data for granted? Are we just assuming that it’s all available, clean, complete, integrated and consistent?  As we work with organizations to support their Analytics journey, we often find that the harsh realities of data are quite different from perceptions. Let’s further investigate this perception gap.

For one, people may assume they can easily access all data. In reality, if data connectivity is not managed effectively, we often need to beg, borrow, and steal to get the right data from the right person. If we are lucky. In less fortunate scenarios, we may need to settle for partial data or a cheap substitute for the data we really wanted. And you know what they say: the only thing worse than no data is bad data. Right?

Another common misperception is: "Our data is clean. We have no data quality issues." Wrong again. When we work with organizations to profile their data, they are often quite surprised to learn that their data is full of errors and gaps. One company recently discovered, within one minute of starting its data profiling exercise, that millions of its customer records contained the company's own address instead of the customers' addresses. Oops.
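As an illustration, here is a minimal profiling check of the kind that surfaces this sort of anomaly, written in Python with pandas. The table, column names, and addresses are hypothetical; the point is simply that a one-line comparison can expose a large population of suspect records.

```python
import pandas as pd

# A minimal profiling sketch, assuming a hypothetical customer table and a
# known company address; the data and column names are invented.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "address": [
        "12 Elm Street, Springfield",
        "1 Corporate Plaza, Metropolis",   # the company's own address
        "1 Corporate Plaza, Metropolis",   # the company's own address
        "99 Oak Avenue, Shelbyville",
    ],
})

COMPANY_ADDRESS = "1 Corporate Plaza, Metropolis"

# How many "customer" records actually carry the company's own address?
suspect = customers[customers["address"].str.strip() == COMPANY_ADDRESS]
print(f"{len(suspect)} of {len(customers)} records use the company address")
```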

Another myth is that all data is integrated. In reality, your data may reside in multiple locations: in the cloud, on premises, in Hadoop, on mainframes, and anywhere in between. Integrating data from all these disparate and heterogeneous data sources is not a trivial task, unless you have the right tools.

And here is one more consideration to mull over. Do you find yourself manually hunting down and combining data to reproduce the same ad hoc report over and over again? Perhaps you often find yourself doing this in the wee hours of the night? Why reinvent the wheel? It would be more productive to automate the process of data ingestion and integration for reusable and shareable reports and Analytics.

Simply put, you need great data for great Analytics. We are excited to host Philip Russom of TDWI in a webinar to discuss how data management best practices can enable successful Analytics initiatives. 

And how about you?  Can you trust your data?  Please join us for this webinar to learn more about building a trust-relationship with your data!

  1. Gartner Report, ‘Predicts 2015: Power Shift in Business Intelligence and Analytics Will Fuel Disruption’; Authors: Josh Parenteau, Neil Chandler, Rita L. Sallam, Douglas Laney, Alan D. Duncan; Nov 21 2014

What Should Come First: Business Processes or Analytics?

As more and more businesses become fully digitized, their business processes and business capabilities become instantiated in software. And when businesses implement software, there are choices to be made that can determine whether these processes and capabilities become locked in time or establish themselves as a continuing basis for business differentiation.

Make sure you focus upon the business goals

I want to suggest that whether the software instantiations of business processes and business capabilities deliver business differentiation depends upon whether business goals and analytics are successfully embedded in a software implementation from the start. I learned this firsthand several years ago, when I was involved in helping a significant insurance company with its implementation of analytics software. Everyone on the management team was in favor of the analytics software purchase. However, the project lead wanted the analytics completed after an upgrade to their transaction processing software. Fortunately, the firm's CIO had a very different perspective. This CIO understood that decisions regarding the transaction processing software implementation could determine whether critical metrics and KPIs could be measured. So instead of treating analytics as an afterthought, this CIO made analytics a forethought. In other words, he slowed down the transactional software implementation. He got his team to think first about the goals for the software implementation and the business goals for the enterprise. With these in hand, his team determined what metrics and KPIs were needed to measure success and improvement. They then required the transaction software development team to ensure that the software implemented the fields needed to measure the metrics and KPIs. In some cases, this was as simple as turning on a field or training users to enter a field as the transaction software went live.

Make the analytics part of everyday business decisions and business processes

The question is how common this perspective is, because it really matters. Tom Davenport says that "if you really want to put analytics to work in an enterprise, you need to make them an integral part of everyday business decisions and business processes—the methods by which work gets done" (Analytics at Work, Thomas Davenport, Harvard Business Review Press, page 121). For many, this means turning their application development on its head, like our insurance CIO did. In particular, it means that IT implementation teams should no longer be about just slamming in applications. They need to be more deliberate. They need to start by identifying the business problems that they want solved through the software instantiation of a business process. They also need to start with how they want the software to improve the process, rather than treating the analytics and data as an afterthought.

Why does this matter so much? Davenport suggests that "embedding analytics into processes improves the ability of the organization to implement new insights. It eliminates gaps between insights, decisions, and actions" (Analytics at Work, Thomas Davenport, Harvard Business Review Press, page 121). Tom gives the example of a car rental company that embedded analytics into its reservation system and, with the data provided, was able to expunge long-held shared beliefs. This change resulted in a 2% increase in fleet utilization and returned $19M to the company from just one location.

Look beyond the immediate decision to the business capability

Davenport also suggests that enterprises need to look beyond their immediate task or decision and appreciate the whole business process, including what happens upstream or downstream. This argues that analytics be focused on the enterprise capability system. Clearly, maximizing the performance of the enterprise capability system requires an enterprise perspective on analytics. It should also be noted that a systems perspective allows business leadership to appreciate how different parts of the business work together as a whole. Analytics, therefore, allows the business to determine how to drive better business outcomes for the entire enterprise.

At the same time, focusing upon the enterprise capability system will in many cases over time lead to a reengineering of overarching business processes and a revamping of their supporting information systems. This in turn allows the business to capitalize on the potential of business capability and analytics improvement. From my experience, most organizations need some time to see what a change in analytics performance means. This is why it can make sense to start by measuring baseline process performance before determining enhancements to the business process. Once that is completed, refinements to the enhanced process can be determined by continuously measuring process performance data.

Related links

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer


Data Visibility From the Source to Hadoop and Beyond with Cloudera and Informatica Integration


This is a guest post by Amr Awadallah, Founder, CTO at Cloudera, Inc.

It takes a village to build mainstream big data solutions. We often get so caught up in Hadoop use cases and customer successes that sometimes we don’t talk enough about the innovative partner technologies and integrations that enable our customers to put the enterprise data hub at the core of their data architecture and innovate with confidence. Cloudera and Informatica have been working together to integrate our products to enable new levels of productivity and lower deployment and production risk.

Going from Hadoop to an enterprise data hub means a number of things. It means that you recognize the business value of capturing and leveraging all your data for exploration and analytics. It means you're ready to make the move from Hadoop pilot project to production. And it means your data is important enough that it's worth securing and making data pipelines visible. It's this visibility layer, and in particular the unique integration between Cloudera Navigator and Informatica, that I want to focus on in this post.

The era of big data has ushered in increased regulations in a number of industries – banking, retail, healthcare, energy – most of which deal with how data is managed throughout its lifecycle. Cloudera Navigator is the only native end-to-end solution for governance in Hadoop. It provides visibility for analysts to explore data in Hadoop, and enables administrators and managers to maintain a full audit history for HDFS, HBase, Hive, Impala, Spark and Sentry, and then run reports on data access for auditing and compliance. The integration of Informatica Metadata Manager in the Big Data Edition with Cloudera Navigator extends this level of visibility and governance beyond the enterprise data hub.

Today, only Informatica and Cloudera provide end-to-end data lineage from source systems through Hadoop, and into BI/analytic and data warehouse systems. And you can view it from a single pane within Informatica.

This is important because Hadoop, and the enterprise data hub in particular, doesn’t function in a silo. It’s an integrated part of a larger enterprise-wide data management architecture. The better the insight into where data originated, where it traveled, who had access to it and what they did with it, the greater our ability to report and audit. No other combination of technologies provides this level of audit granularity.

But more than that, the visibility Cloudera and Informatica provide gives our joint customers the ability to confidently stand up an enterprise data hub as a part of their production enterprise infrastructure, because they can verify the integrity of the data that undergirds their analytics. I encourage you to check out a demo of the Informatica-Cloudera Navigator integration at this link: http://infa.media/1uBpPbT

You can also check out a demo and learn a little more about Cloudera Navigator and the Informatica integration in the recorded TechTalk hosted by Informatica at this link:

http://www.informatica.com/us/company/informatica-talks/?commid=133311


Should Analytics Be Focused on Small Questions Versus Big Questions?

Should the analytic resources of your company be focused upon small questions or big questions? For many, answering this question is not easy. Some find key managers preferring to make decisions from personal intuition or experience. When I worked for a large computer peripheral company, I remember executives making major decisions about product direction from their gut even when there was clear evidence that a major technology shift was about to happen. This company went from being a multi-billion dollar company to a $50 million company in a matter of a few years.

In other cases, the entire company may not see the relationship between data and good decision making. When this happens, silos of the business collect data of value to them, but there is no coordinated, focused effort placed on enterprise-level strategic targets. This naturally leads to silos of analytical activity. Clearly, answering small questions may provide the value of having analytics quickly. However, answering the bigger questions will have the most value to the business as a whole. And while the big questions are often harder to answer, they can be pivotal to the go-forward business. Here are just a few examples of the big questions that are worthy of being answered by most enterprises:

  • Which performance factors have the greatest impact on our future growth and profitability?
  • How can we anticipate and influence changing market conditions?
  • If customer satisfaction improves, what is the impact on profitability?
  • How should we optimize investments across our products, geographies, and market channels?

However, most businesses cannot easily answer these questions. Why then do they lack the analytical solutions to answer these questions?

Departmental BI does not yield strategically relevant data

Let's face it, business intelligence to date has largely been a departmental exercise. In most enterprises, as we have been saying, analytics start as pockets of activity rather than as an enterprise-wide capability. The departmental approach leads business analysts to buy the same data or software that others in the organization have already bought. Enterprises end up with hundreds of data marts, reporting packages, forecasting tools, data management solutions, integration tools, and methodologies. According to Thomas Davenport, one firm he knows well has "275 data marts and a thousand different information resources, but it couldn't pull together a single view of the business in terms of key performance metrics and customer data" (Analytics at Work, Thomas Davenport, Harvard University Press, Page 47).

Clearly, answering the Big Questions requires leadership and a coordinated approach. Amazingly, taking this road often even reduces enterprise analytical expenditure, as silos of information including data marts and spaghetti-code integrations are eliminated and replaced with a single enterprise capability. But if you want to take this approach, how do you make sure that you get the right business questions answered?

Strategic Approach

The strategic approach starts with enterprise strategy. In enterprise strategy, leadership defines opportunities for business growth, innovation, differentiation, and marketplace impact. According to Derek Abell, this process should occur in a three-cycle strategic planning approach: the enterprise does business planning, followed by functional planning, and lastly budgeting. Each cycle provides fodder for the stages that follow. For each stage, a set of overarching, cascading objectives can be derived. From these, the business can define a set of critical success factors that will let it know whether or not business objectives are being met. Supporting each critical success factor are quantitative key performance indicators that, in aggregate, say whether the success factors are going to be met. Finally, these key performance indicators determine the data needed to support them, in terms of metrics or the supporting dimensional data for analysis. So the art and science here is defining critical success factors and KPIs that answer the big questions.
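To illustrate the cascade, here is a toy Python sketch of how an objective might be decomposed into critical success factors, KPIs, and the data each KPI requires. The objective, factors, and metrics below are invented examples, not a prescription from Abell or Davenport.

```python
# A toy illustration of the cascade described above: one business objective,
# its critical success factors, the KPIs for each factor, and the data each
# KPI needs. Everything here is a hypothetical example.
strategy_cascade = {
    "objective": "Grow profitable market share by 5% this fiscal year",
    "critical_success_factors": [
        {
            "factor": "Retain high-value customers",
            "kpis": [
                {"kpi": "Churn rate of top-quartile customers",
                 "data_needed": ["customer revenue", "contract end dates"]},
                {"kpi": "Net promoter score",
                 "data_needed": ["survey responses"]},
            ],
        },
        {
            "factor": "Improve sales conversion",
            "kpis": [
                {"kpi": "Lead-to-close rate by channel",
                 "data_needed": ["CRM opportunities", "marketing leads"]},
            ],
        },
    ],
}

# Flatten the cascade to see which data sets the analytics capability must supply.
for csf in strategy_cascade["critical_success_factors"]:
    for kpi in csf["kpis"]:
        print(f'{csf["factor"]} -> {kpi["kpi"]}: {", ".join(kpi["data_needed"])}')
```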

Core Capabilities

As we saw above, the strategic approach is about tying questions to business strategy. In the capabilities approach, we tie questions to the capabilities that drive business competitive advantage. To determine these business capabilities, we need to start by looking at "the underlying mechanism of value creation in the company (what they do best) and the opportunities for meeting the market effectively" ("The Essential Advantage", Paul Leinwand, Harvard Business Review Press, page 19). Typically, this determines 3-6 distinctive capabilities that impact the success of the enterprise's service or product portfolio. These are the things that "enable your company to consistently outperform rivals" ("The Essential Advantage", Paul Leinwand, Harvard Business Review Press, page 14). The goal is to optimize key business capabilities over time, and to innovate and operate in ways that differentiate the business in the eyes and experience of customers (Analytics at Work, Thomas Davenport, Harvard University Press, Page 73). Here we want to target analytics investments at these distinctive capabilities. Here are some examples of potential target capabilities by industry:

  • Financial services: Credit scoring
  • Retail: Replenishment
  • Manufacturing: Supply Chain Optimization
  • Healthcare: Disease Management

Parting remarks

So as we have discussed, many firms are spending too much on analytic solutions that do not solve real business problems. Getting after this is not a technical issue—it is a business issue. It starts with asking the right business questions, which can come from business strategy, your core business capabilities, or some mix of the two.

Related links

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer


Gaining a Data-First Perspective with Salesforce Wave


Salesforce.com made waves (pardon the pun) at last month’s Dreamforce conference when it unveiled the Salesforce Wave Analytics Cloud. You know Big Data has reached prime-time when Salesforce, which has a history of knowing when to enter new markets, decides to release a major analytics service.

Why now? Because companies need help making sense of the data deluge, Salesforce’s CEO Marc Benioff said at Dreamforce: “Did you know 90% of the world’s data was created in the last two years? There’s going to be 10 times more mobile data by 2020, 19 times more unstructured data, and 50 times more product data by 2020.” Average business users want to understand what that data is telling them, he said. Given Salesforce’s marketing expertise, this could be the spark that gets mainstream businesses to adopt the Data-First perspective I’ve been talking about.

As I've said before, a Data First POV shines a light on important interactions so that everyone inside a company can see and understand what matters. As a trained process engineer, I can tell you, though, that good decisions depend on great data — and great data doesn't just happen. At the most basic level, you have to clean it, relate it, connect it, and secure it — so that information from, say, SAP can be viewed in the same context as data from Salesforce. Informatica obviously plays a role in this. If you want to find out more, click on this link to download our Salesforce Integration for Dummies brochure.

But that’s the basics for getting started. The bigger issue — and the one so many people seem to have trouble with — is deciding which metrics to explore. Say, for example, that the sales team keeps complaining about your marketing leads. Chances are, it’s a familiar complaint. How do you discover what’s really the problem?

One obvious place to start is to first look at the conversion rates for every sales rep and group. Next, explore the attributes of the marketing leads they do accept, such as deal size, product type, or customer category. Now take it deeper. Examine which sales reps like to hunt for new customers and which prefer to mine their current base. That will tell you if you're sending opportunities to the right profiles.
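For illustration, here is a small pandas sketch of those first two steps: conversion rate per rep, then the deal size and product mix of the leads each rep accepts. The data and column names are hypothetical, not Salesforce Wave's actual schema.

```python
import pandas as pd

# Hypothetical lead data; reps, products, and amounts are invented.
leads = pd.DataFrame({
    "rep":       ["Ana", "Ana", "Ben", "Ben", "Ben", "Chloe"],
    "accepted":  [True,  False, True,  True,  False, True],
    "converted": [True,  False, False, True,  False, False],
    "deal_size": [50_000, 0, 12_000, 80_000, 0, 25_000],
    "product":   ["Suite", "Suite", "Addon", "Suite", "Addon", "Addon"],
})

# Step 1: conversion rate for every sales rep.
print(leads.groupby("rep")["converted"].mean())

# Step 2: for accepted leads only, what do deal size and product mix look like?
accepted = leads[leads["accepted"]]
print(accepted.groupby(["rep", "product"])["deal_size"].mean())
```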

The key is never looking at the sales organization as a whole. If it’s EMEA, for instance, have a look to see how France is doing selling to emerging markets vs. the team in Germany. These metrics are digital trails of human behavior. Data First allows you to explore that behavior and either optimize it or change it.

But for this exploration to pay off, you actually have to do some of the work. You can’t just job it out to an analyst. This exercise doesn’t become meaningful until you are mentally engaged in the process. And that’s how it should be: If you are a Data First company, you have to be a Data First leader.


Analytics Stories: A Banking Case Study

As I have shared in other posts within this series, businesses are using analytics to improve their internal and external facing business processes and to strengthen their "right to win" within the markets in which they operate. In banking, the right to win increasingly comes from improving two core sets of business capabilities—risk management and customer service.

Significant change has occurred in risk management over the last few years following the subprime crisis and the subsequent credit crunch. These environmental changes have put increased regulatory pressure upon banks around the world. Among other things, banks need to comply with measures aimed at limiting the overvaluation of real estate assets and at preventing money laundering. A key element of handling these is ensuring that go-forward business decisions are made consistently using the most accurate business data available. It seems clear that data consistency can determine the quality of business operations, especially business risk.

At the same time as banks need to strengthen their business capabilities around operations, and in particular risk management, they also need to use better data to improve the loyalty of their existing customer base.

Banco Popular launches itself into the banking vanguard

Banco Popular is an early responder regarding the need for better banking data consistency. Its leadership created a Quality of Information Office (uniquely, the Office is based not within IT but within the Office of the President) with the mandate of delivering on two business objectives:

  1. Ensuring compliance with governmental regulations
  2. Improving customer satisfaction based on accurate and up-to-date information

Part of the second objective is aimed at ensuring that each of Banco Popular's customers is offered the ideal products for their specific circumstances. This is interesting because, by its nature, it assists in attaining the first objective. To validate that it achieves both mandates, the Office started by creating an "Information Quality Index". The Index is created using many different types of data relating to each of the bank's six million customers–including addresses, contact details, socioeconomic data, occupation data, and banking activity data. The index is expressed in percentage terms, reflecting the quality of the information collected for each individual customer. The overarching target set for the organization is a score of 90 percent—presently, the figure sits at 75 percent. There is room to grow and improve!
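To make the concept concrete, here is a simplified Python sketch of how a per-customer quality index might be computed as a weighted percentage of populated attributes. The attribute list and weights are assumptions for illustration; the article does not describe Banco Popular's actual formula.

```python
# A simplified, hypothetical "Information Quality Index": the weighted share of
# customer attributes that are actually populated, expressed as a percentage.
ATTRIBUTE_WEIGHTS = {
    "address": 3,             # more critical attributes carry more weight
    "contact_details": 3,
    "occupation": 2,
    "socioeconomic_data": 1,
    "banking_activity": 1,
}

def quality_index(customer: dict) -> float:
    """Return the weighted share of populated attributes, as a percentage."""
    total = sum(ATTRIBUTE_WEIGHTS.values())
    achieved = sum(
        weight
        for attr, weight in ATTRIBUTE_WEIGHTS.items()
        if customer.get(attr) not in (None, "", [])
    )
    return 100.0 * achieved / total

customer = {"address": "Calle Mayor 1", "contact_details": "tel: 555-0100", "occupation": None}
print(f"{quality_index(customer):.0f}%")  # 60%
```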

Current data management systems limit attainment of its business goals

Unfortunately, the millions of records needed by the Quality of Information Office are spread across different tables in the organization's central computing system and must be combined into one information file per customer to be useful to business users. The problem is that the bank had depended on third parties to manually pull and clean up this data. Given the above mandates, this approach proved too slow to execute in a timely fashion. This, in turn, impacted the quality of its business capabilities for risk and customer service. The approach did not allow the bank to create the index and other analyses "with the frequency that we wanted and examining the variables of interest to us," explains Federico Solana, an analyst at the Banco Popular Quality of Information Office.

Creating the Quality Index was just too time consuming and costly. But not improving data delivery performance had a direct impact on decision making.

Automation proves key to better business processes

To speed up delivery of its Quality Index, Banco Popular determined it needed to automate the creation of great data—data which is trustworthy and timely. According to Tom Davenport, "you can't be analytical without data and you can't be really good at analytics without really good data" (Analytics at Work, 2010, Harvard Business Press, Page 23). Banco Popular felt that automating the tasks of analyzing and comparing variables would increase the value of data at lower cost while ensuring a faster return on data.

In addition to fixing the Quality Index, Banco Popular needed to improve its business capabilities around risk and customer service automation. This effort aimed at improving the analysis of mortgages while reducing the cost of data, accelerating the return on data, and boosting business and IT productivity.

Everything, however, needed to start with the Quality Index. After the Quality Index was created for individuals, Banco Popular created a Quality of Information Index for Legal Entities and is planning to extend the return on data by creating indexes for Products and Activities. For the Quality Index related to legal entities, the bank included variables that aimed at preventing the consumption of capital as well as other variables used to calculate the probability of underpayments and Basel models. Variables are classified as essential, required, and desirable. This evaluation of data quality allows for the subsequent definition of new policies and initiatives for transactions, the network of branches, and internal processes, among other aspects. In addition, the bank is also working on the in-depth analysis of quality variables for improving its critical business processes including mortgages.

Some Parting Remarks

In the end, Banco Popular has shown the way forward for analytics. In banking, the measures of performance are often known; what is problematic is ensuring the consistency of decision making across branches and locations. By working first on data quality, Banco Popular ensured that its data quality measures are consistent, and it can now focus its attention on improving underlying business effectiveness and efficiency.

Related links

Related Blogs

Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer

 


Who Owns Enterprise Analytics and Data?

With the increasing importance of enterprise analytics, the question becomes who should own the analytics and data agenda. This question really matters today because, according to Thomas Davenport, "business processes are among the last remaining points of differentiation." For this reason, Davenport even suggests that businesses that create a sustainable right to win use analytics to "wring every last drop of value from their processes".

Is the CFO the logical choice?

In talking with CIOs about both enterprise analytics and data, they are clear that they do not want to become their company's data steward. They insist instead that they want to be an enabler of the analytics and data function. So which business function should own enterprise analytics and data? Last week an interesting answer came from a CFO Magazine article by Frank Friedman. Frank contends that CFOs are "the logical choice to own analytics and put them to work to serve the organization's needs".

To justify his position, Frank made the following claims:

  1. CFOs own most of the unprecedented quantities of data that businesses create from supply chains, product processes, and customer interactions
  2. Many CFOs already use analytics to address their organization’s strategic issues
  3. CFOs uniquely can act as a steward of value and an impartial guardian of truth across the organization. This fact gives them the credibility and trust needed when analytics produce insights that effectively debunk currently accepted wisdom

Frank contends as well that owning the analytics agenda is a good thing because it allows CFOs to expand their strategic leadership role in doing the following:

  • Growing top line revenue
  • Strengthening their business ties
  • Expanding the CFO’s influence outside the finance function.

Frank suggests as well that analytics empowers the CFO to exercise more centralized control of operational business decision making. The question is what do other CFOs think about Frank’s position?

CFOs clearly have an opinion about enterprise analytics and data

A major Retail CFO says that finance needs to own “the facts for the organization”—the metrics and KPIs. And while he honestly admits that finance organizations in the past have not used data well, he claims finance departments need to make the time to become truly data centric. He said “I do not consider myself a data expert, but finance needs to own enterprise data and the integrity of this data”. This CFO claims as well that “finance needs to use data to make sure that resources are focused on the right things; decisions are based on facts; and metrics are simple and understandable”. A Food and Beverage CFO agrees with the Retail CFO by saying that almost every piece of data is financial in one way or another. CFOs need to manage all of this data since they own operational performance for the enterprise. CFOs should own the key performance indicators of the business.

CIOs should own data, data interconnect, and system selection

A Healthcare CFO said, however, that he wants the CIO to own data systems, data interconnect, and system selection. At the same time, he believes that the finance organization is the recipient of data. "CFOs have a major stake in data. CFOs need to dig into operational data to be able to relate operations to internal accounting and to analyze things like costs versus price." He said that "the CFOs can't function without good operational data".

An Accounting Firm CFO agreed with the Healthcare CFO by saying that CIOs are a means to get data. She said that CFOs need to make sense out of data in their performance management role. CFOs, therefore, are big consumers of both business intelligence and analytics. An Insurance CFO concurred by saying CIOs should own how data is delivered.

CFOs should be data validators

The Insurance CFO said, however, that CFOs need to be validators of data and reports. They should, as a result, in his opinion be very knowledgeable about BI and analytics. In other words, CFOs need to be the Underwriters Laboratories (UL) for corporate data.

Now it is your chance

So the question is what do you believe? Does the CFO own analytics, data, and data quality as a part of their operational performance role? Or is it a group of people within the organization? Please share your opinions below.

Related links

Solution Brief: The Intelligent Data Platform

Related Blogs

CFOs Move to Chief Profitability Officer
CFOs Discuss Their Technology Priorities
The CFO Viewpoint upon Data
How CFOs can change the conversation with their CIO?
New type of CFO represents a potent CIO ally
Competing on Analytics
The Business Case for Better Data Connectivity

Twitter: @MylesSuer

 


Analytics Stories: A Financial Services Case Study

As I indicated in my last case study regarding competing on analytics, Thomas H. Davenport believes "business processes are among the last remaining points of differentiation." For this reason, Davenport contends that businesses that create a sustainable right to win use analytics to "wring every last drop of value from their processes". For financial services, the mission-critical areas needing process improvement center around improving the consistency of decision making and making the management of regulatory and compliance requirements more efficient and effective.

Why does Fannie Mae need to compete on analytics?

Fannie Mae is in the business of enabling people to buy, refinance, or rent homes. As a part of this, Fannie Mae says it is all about keeping people in their homes and getting people into new homes. Foundational to this mission is the accurate collection and reporting of data for decision making and risk management. According to Tracy Stephan at Fannie Mae, their "business needs to have the data to make decisions in a more real time basis. Today, this is all about getting the right data to the right people at the right time".

Fannie Mae claims that when the mortgage crisis hit, a lot of the big banks stopped lending, and this meant that Fannie Mae, among others, needed to pick up the slack. Its action here, however, caused the Federal Government to require it to report monthly and quarterly against goals that the Federal Government set for it. "This meant that there was not room for error in how data gets reported". In the end, Fannie Mae says three business imperatives drove its need to improve its reporting and its business processes:

  1. To ensure that go forward business decisions were made consistently using the most accurate business data available
  2. To avoid penalties by adhering to Dodd-Frank and other regulatory requirements established for it after the 2008 Global Financial Crisis
  3. To comply with reporting to Federal Reserve and Wall Street regarding overall business risk as a function of: data quality and accuracy, credit-worthiness of loans, and risk levels of investment positions.

Delivering required Fannie Mae to change how it managed data

Given these business imperatives, IT leadership quickly realized it needed to enable the business to use data to truly drive better business processes from end to end of the organization. However, this meant enabling Fannie Mae's business operations teams to more effectively and efficiently manage data. This caused Fannie Mae to determine that it needed a single source of truth, whether for mortgage applications or for passing information securely to investors. This need required Fannie Mae to establish the ability to share the same data across every Fannie Mae repository.

But there was a problem. Fannie Mae needed clean and correct data collected and integrated from more than 100 data sources. Fannie Mae determined that doing so with its current data processes could not scale. It also determined that its data processes would not allow it to meet its compliance reporting requirements. At the same time, Fannie Mae needed to deliver more proactive management of compliance. This required that it know how critical business data enters and flows through each of its systems, including how data was changed by multiple internal processing and reporting applications. As well, Fannie Mae leadership felt that it was critical to ensure traceability to the individual user.
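For illustration, here is a minimal Python sketch of the kind of change/lineage record such traceability implies: which data element changed, in which system, by which process, and by which user. The fields are assumptions for illustration, not Fannie Mae's actual schema or tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical lineage/audit record capturing who changed which data element,
# in which system, and as part of which processing step.
@dataclass
class DataChangeEvent:
    element: str          # e.g. a hypothetical "loan.unpaid_principal_balance"
    source_system: str    # system where the change occurred
    process: str          # processing or reporting application responsible
    changed_by: str       # individual user, for traceability
    changed_at: datetime

event = DataChangeEvent(
    element="loan.unpaid_principal_balance",
    source_system="servicing_db",
    process="monthly_regulatory_report",
    changed_by="jdoe",
    changed_at=datetime.now(timezone.utc),
)
print(event)
```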

The solution

Per its discussions with business customers, Fannie Mae's IT leadership determined that it needed to get real-time, trustworthy data to improve its business operations and to improve its business processes and decision making. As noted, these requirements could not be met with its historical approaches to integrating and managing data.

Fannie Mae determined that it needed to create a platform that was highly available and scalable, and that largely automated its data quality management. At the same time, the platform needed to provide the ability to create a set of business glossaries with clear data lineages. Fannie Mae determined it effectively needed a single source of truth across all of its business systems. According to Tracy Stephan, IT Director, Fannie Mae, "Data quality is the key to the success of Fannie Mae's mission of getting the right people into the right homes. Now all our systems look at the same data – that one source of truth – which gives us great comfort." To learn more specifics about how Fannie Mae improved its business processes and demonstrated that it is truly "data driven", please click on this video of their IT leadership.

Related links
Solution Brief: The Intelligent Data Platform
Related Blogs
Thomas Davenport Book “Competing On Analytics”
Competing on Analytics
The Business Case for Better Data Connectivity
The CFO Viewpoint upon Data
What an enlightened healthcare CEO should tell their CIO?

Twitter: @MylesSuer
