Myles Suer
Mr. Suer is a senior manager of solutions marketing at Informatica Corporation. Much of Mr. Suer’s experience has been as a BI practitioner. At HP and Peregrine, Mr. Suer led the product management team applying BI and scorecard technology to these companies’ IT management products. Prior to HP, Mr. Suer led new product initiatives at start-ups and large companies, including a restart of a business activity monitoring company. Mr. Suer has also been a software industry analyst. Mr. Suer holds a Master of Science degree from UC Irvine and a second master’s degree, an MBA in Strategic Planning, from the University of Southern California.

How is Predictive Analytics Driving Business Performance?

Recently, I got to attend the Predictive Analytics Summit in San Diego. It felt great to be in a room full of data scientists from around the world—all my hidden statistics, operations research, and even modeling background came back to me instantly. I was most interested to learn what this vanguard was doing, as well as any lessons learned that could be shared with the broader analytics audience. Presenters ranged from Internet leaders to more traditional companies like Scotts Miracle-Gro. Brendan Hodge of Scotts Miracle-Gro in fact said that, as a 125-year-old company, he feels like “a dinosaur at a mammal convention.” So in the space that follows, I will share my key takeaways from some of the presenters.

Fei Long from 58.com

58.com is the Craigslist, Yelp, and Monster of China. Fei shared that 58.com is using predictive analytics to recommend resumes to employers and to drive more intelligent real-time bidding for its products. Fei said that 58.com has 300 million users—about the number of people in the United States. Most interesting, Fei said that predictive analytics has driven a 10-20% increase in 58.com’s click-through rate.

Ian Zhao from eBay

Ian said that eBay is starting to increase the footprint of its data science projects. He said that historically the focus for eBay’s data science was marketing, but today eBay is applying data science to sales and HR. Provost and Fawcett agree in “Data Science for Business” by saying that “the widest applications of data mining techniques are in marketing for tasks such as target marketing, online advertising, and recommendations for cross-selling”.

Ian said that in the non-marketing areas, they are finding a lot less data. The data is scattered across data sources and requires a lot more cleansing. Ian is using techniques like time series analysis and ARIMA to look at employee attrition. One particularly interesting thing Ian found is a strong correlation between attrition and bonus payouts. Ian said it is critical to leave ample time for data prep. He said that it is important to start the data prep process with data exploration and discovery, including confirming that data is available for hypothesis testing. Sometimes, Ian said, data prep can include imputing data that is not available in the data set and validating data summary statistics. With this, Ian said that data scientists need to dedicate time and resources to determining which things are drivers. When talking with the business, data scientists should speak in terms of likelihood, because business people in general do not understand statistics. It is important as well that data scientists ask business people the “so what” questions and narrow things down to a dollar impact.
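To make the attrition example concrete, here is a minimal sketch of the kind of time-series and correlation analysis Ian described. The monthly attrition counts and bonus schedule are hypothetical, and this is not eBay’s actual model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly attrition counts over three years, with bumps in the
# months right after bonus payouts (assumed here to be March and September)
rng = np.random.default_rng(0)
months = pd.date_range("2012-01-01", periods=36, freq="MS")
bonus_month = ((months.month == 3) | (months.month == 9)).astype(int)
attrition = pd.Series(20 + 6 * bonus_month + rng.poisson(3, 36), index=months)

# Fit a simple ARIMA model and forecast the next quarter of departures
model = ARIMA(attrition, order=(1, 0, 1)).fit()
print(model.forecast(steps=3))

# The kind of driver check Ian described: correlation with bonus timing
print(attrition.corr(pd.Series(bonus_month, index=months)))
```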

Barkha Saxena from Poshmark

Barkha is trying to model the value of user growth. Barkha said that this matters because Poshmark wants to be the #1 community-driven marketplace. They want to use data to create a “personal boutique experience”. With 700,000 transactions a day, they are trying to measure customer lifetime value by implementing a cohort analysis. Most interesting in Barkha’s data is that she discovered repeatable performance across cohorts. In their analysis, different models work better depending upon the data, so a lot of time goes into procedurally determining the best model fit.
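As an illustration of the cohort approach, here is a minimal sketch using a hypothetical transaction table (not Poshmark’s actual schema): group users by signup month, then build the average cumulative revenue curve per user by months since signup. The repeatability Barkha described would show up as similarly shaped curves across cohorts.

```python
import pandas as pd

# Hypothetical transactions: which cohort a user belongs to and when they spent
tx = pd.DataFrame({
    "user_id":      [1, 1, 2, 2, 3, 3, 3],
    "signup_month": ["2014-01", "2014-01", "2014-01", "2014-01", "2014-02", "2014-02", "2014-02"],
    "order_month":  ["2014-01", "2014-03", "2014-01", "2014-02", "2014-02", "2014-03", "2014-04"],
    "revenue":      [20.0, 35.0, 15.0, 10.0, 40.0, 25.0, 30.0],
})

# Months elapsed between signup and each order
tx["age"] = (pd.PeriodIndex(tx["order_month"], freq="M")
             - pd.PeriodIndex(tx["signup_month"], freq="M")).map(lambda d: d.n)

# Average cumulative revenue per user, by cohort and months since signup
cohort_size = tx.groupby("signup_month")["user_id"].nunique()
revenue = tx.groupby(["signup_month", "age"])["revenue"].sum().unstack(fill_value=0)
ltv_curves = revenue.cumsum(axis=1).div(cohort_size, axis=0)
print(ltv_curves)
```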

Meagan Huth from Google

Meagan said that Google is creating something that they call People Analytics. They are trying to make all people decisions with science and data. They want to make it cheaper and easier to work at Google. They have found through their research that good managers lower turnover, increase performance, and increase workplace happiness. The most interesting thing she says they have found is that the best predictor of being a good manager is being a good coach. They have developed predictive models around text threads, including those that occur in employee surveys, to ensure they have the data needed to improve.
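To illustrate the general idea of modeling free-text survey responses (this is a hedged sketch with made-up examples, not Google’s actual approach), a simple bag-of-words classifier can score comments for coaching-related signals:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled survey comments: 1 = describes coaching behavior, 0 = does not
comments = [
    "my manager helps me work through problems and grow",
    "gives regular feedback and career guidance",
    "rarely available and never discusses my development",
    "only forwards status emails, no real conversations",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a simple classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)
print(model.predict_proba(["my manager coaches me on tricky projects"])[:, 1])
```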

Hobson Lane from Sharp Labs

Hobson reminded everyone of the importance of the Nyquist criterion (you need to sample data at least twice as fast as the fastest data event). This is especially important for organizations moving to the so-called Internet of Things, since many of these devices have extremely high data event rates. Hobson also discussed the importance of looking at variance against the line that gets drawn in a regression analysis; sometimes, multiple lines can be drawn. He also discussed the problem of not having enough data to support the complexity of the decision that needs to be made.
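Here is a minimal numeric illustration of the Nyquist point, with hypothetical frequencies: undersampling a 40 Hz signal at 50 Hz makes it appear as a much slower 10 Hz pattern (aliasing), while sampling at 100 Hz recovers the true rate.

```python
import numpy as np

true_freq_hz = 40.0  # fastest event rate in a hypothetical sensor signal

for fs in (100.0, 50.0):  # 100 Hz satisfies Nyquist (> 2 * 40 Hz); 50 Hz does not
    t = np.arange(0, 1, 1 / fs)
    samples = np.sin(2 * np.pi * true_freq_hz * t)
    # Recover the dominant frequency the analyst would "see" in the samples
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / fs)
    print(f"{fs:.0f} Hz sampling -> apparent frequency {freqs[spectrum.argmax()]:.0f} Hz")
```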

Ravi Iyer from Ranker

Ravi started by saying Ranker is a Yelp for everyone else. He then discussed the importance of having systematic data. A nice quote from him: “better data = better predictions”. Ravi discussed as well the topic of response bias. He said that asking about Coke can lead to a different answer than asking about Coke at a movie. He also discussed how their research shows that millennials are really all about “the best”. I see this happening every time I take my children out to dinner—there is no longer a cheap dinner out.

Ranjan Sinha at eBay

Ranjan discussed the importance of customer-centric commerce and creating predictive models around it. At eBay, they want to optimize the customer experience and improve their ability to make recommendations. eBay is finding customer expectations are changing. For this reason, they want customer context to be modeled by looking at transactions, engagement, intent, account, and inferred social behaviors. With modeling completed, they are using complex event processing to drive a more automated response to data. An amazing example involved Valentine’s Day, where they use a man’s partner’s data to predict the items that the man should get for his significant other.

Andrew Ahn from LinkedIn

Andrew is using analytics to create what he calls an economic graph and to make professionals more productive. One area where he personally is applying predictive analytics is LinkedIn’s sales solutions. In LinkedIn Sales Navigator, they display potential customers based upon the salesperson’s demographic data—effectively the system makes lead recommendations. However, they want to de-risk this potential interaction for sales professionals and potential customers. Andrew says at the same time that they have found through data analysis that small changes in a LinkedIn profile can lead to big changes. To put this together, they have created something that they call the Social Selling Index (SSI). It looks at predictors they have determined are statistically relevant, including member demographics, site engagement, and social network. The SSI score is viewed as a predictive index. Andrew says that they are trying to go from serendipity to data science.
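To show the general shape of a composite predictive index like this (the actual SSI formula is not public, and all features, labels, and weights below are hypothetical), one common pattern is to fit a model on the statistically relevant predictors and rescale its output to a 0-100 score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

# Hypothetical member features: [profile completeness, site engagement, network size]
X = np.array([[0.9, 0.8, 0.7],
              [0.4, 0.2, 0.3],
              [0.7, 0.9, 0.5],
              [0.2, 0.1, 0.2]])
y = np.array([1, 0, 1, 0])  # hypothetical outcome: 1 = lead converted, 0 = did not

# Fit on the predictors, then rescale predicted probabilities to a 0-100 index
model = LogisticRegression().fit(X, y)
raw_scores = model.predict_proba(X)[:, 1]
index = MinMaxScaler(feature_range=(0, 100)).fit_transform(raw_scores.reshape(-1, 1)).ravel()
print(index.round(1))
```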

Robert Wilde from Slacker Radio

Robert discussed the importance of simplicity and elegance in model building. He then went through a set of modeling issues to avoid. He said that modelers need to own the discussion of causality, cause and effect, and how these can bias data interpretation. He also stressed looking at data variance: what does one do when a line doesn’t have a single point fall on it? Additionally, Robert discussed what to do when correlation is strong, weak, or mistaken. Is it X or Y that has the relationship? Or, worse yet, what do you do when there is coincidental correlation? This led to a discussion of forward and reverse causal inference. For this reason, Robert argued strongly for principal component analysis, which helps eliminate regression causation bias. At the same time, he suggested that models should be valued by complexity versus error rates.
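Here is a minimal sketch of the PCA point, on synthetic data: when two predictors are nearly collinear, principal component analysis shows that one component carries almost all of the variance, and regressing on that component avoids arbitrarily splitting credit between the correlated inputs.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.05, size=500)   # nearly collinear with x1
X = np.column_stack([x1, x2])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # roughly [0.999, 0.001]: one component dominates

# Regress the outcome on the leading component instead of the raw, collinear inputs
y = 3 * x1 + rng.normal(scale=0.1, size=500)
pc1 = pca.transform(X)[:, 0]
slope, intercept = np.polyfit(pc1, y, 1)
print(slope, intercept)
```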

Parsa Bakhtary from Facebook

Parsa has been looking at which games generate revenue for Facebook and which do not—Facebook amazingly has over 1,000 revenue-bearing games. For this reason, Facebook wants to look at the lifetime value of customers for Facebook games—the dollar value of a relationship. Parsa said, however, there is a problem: only 20% of players pay for their games. Parsa argued that customer lifetime value (which was developed in the 1950s) doesn’t really work for apps, where everyone’s lifetime is not the same. Additionally, social and mobile gamers are not particularly loyal. He says that he therefore has to model individual games for their first 90 days across all periods of joining and then look at the cumulative revenue curves.
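A minimal sketch of that last step, with hypothetical payment events (not Facebook’s data): align every player on days since install, cap the window at 90 days, and compute cumulative revenue per installed player so games can be compared regardless of when players joined.

```python
import pandas as pd

# Hypothetical payment events for one game
events = pd.DataFrame({
    "player_id":    [1, 1, 2, 3, 3, 3],
    "install_date": ["2014-01-01", "2014-01-01", "2014-02-10", "2014-03-05", "2014-03-05", "2014-03-05"],
    "pay_date":     ["2014-01-03", "2014-02-20", "2014-02-11", "2014-03-06", "2014-03-20", "2014-05-30"],
    "amount":       [0.99, 4.99, 1.99, 0.99, 2.99, 9.99],
})

events["day"] = (pd.to_datetime(events["pay_date"])
                 - pd.to_datetime(events["install_date"])).dt.days
window = events[events["day"] < 90]              # only the first 90 days count
installed = events["player_id"].nunique()        # would include the ~80% who never pay

daily_revenue = window.groupby("day")["amount"].sum()
curve = daily_revenue.reindex(range(90), fill_value=0).cumsum() / installed
print(curve.tail())  # cumulative revenue per installed player by day 89
```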

Parting remarks

So we have seen here a wide variety of predictive analytics techniques being used by today’s data scientists. To me this says that predictive analytical approaches are alive and kicking. This is good news and shows that data scientists are trying to enable businesses to make better use of their data. Clearly, a key step that holds data scientists back today is data prep. While it is critical to leave ample time for data prep, it is also essential to get quality data to ensure models are working appropriately. At the same time, data prep needs to support imputing data that is not available within the original data set.

Related links

Solution Brief: Data Prep

Big Data: The Emperor may have their Clothes on but…

Should We Still be Calling it Big Data?

Big Data Integration

Big Data Decision Automation

Data Mastering

Data Lake + Analysis

Related Blogs

Analytics Stories: A Banking Case Study

Analytics Stories: A Pharmaceutical Case Study

Analytics Stories: An Educational Case Study

Analytics Stories: A Financial Services Case Study

Analytics Stories: A Healthcare Case Study

Major Oil Company Uses Analytics to Gain Business Advantage

Is it the CDO or CAO or Someone Else?

Should We Still be Calling it Big Data?

What Should Come First: Business Processes or Analytics?

Should Analytics Be Focused on Small Questions Versus Big Questions?

Who Owns Enterprise Analytics and Data?

Competing on Analytics

Is Big Data Destined To Become Small And Vertical?

Big Data Why?

What is big data and why should your business care?

Author Twitter: @MylesSuer

Posted in 5 Sales Plays, Big Data, CIO, Complex Event Processing

Financial Transformation is Really About Fixing the Business


According to Strategy+Business, the “CFO role is expanding to include being the company’s premier champion of strategic discipline.” It is no wonder that financial transformations are so much in vogue these days. According to The Conference Board, 81% of the companies that it surveyed are involved in a major financial transformation initiative. However, only 27% of these firms claim that they achieved the benefits defined within their business case. Of the reasons for failure, the most interesting is thinking the transformation would be some kind of big bang. The problem is that this type of thinking is unrealistic for today’s hyper-competitive business environment. Financial strategy today needs to be an enabler of business strategy. This means that it needs to be able to support the increasingly shorter duration of business strategy.

Financial Transformation needs to Enable Business Agility


I have discovered the same thing in my discussions with IT organizations. In other words, enabling business strategies increasingly need to be built with a notion of agility. For financial strategies, this means that they need to first and foremost make organizations more agile and enable more continuous business change. Think about the impact on an organization that has inorganic acquisition as part of its strategy. All of this means that thinking a multi-year ERP implementation will deliver financial transformation on its own is unrealistic.

While it is absolutely fair to determine which manual tasks financial teams can eliminate, it does not make sense to think that they are done once an ERP implementation is completed. Recently, I was talking with a large accounting consulting and integration firm. They let me know that they really liked doing large ERP implementations and re-implementations, but they also knew that they would soon break under the weight of financial and business change unless flexibility was built in from the start. Financial transformation must start by creating the business flexibility and agility to work in today’s business environment.

Does Your Close Process Get in the Way?


But achieving better financial agility and profitability improvement capabilities is often limited by the timeliness and trustworthiness of data. This is why CFOs say that they spend so much of their time on the close process. According to the MIT CFO Summit Survey, nearly half of the organizations surveyed are feeling pressure from senior leadership to become more data driven and analytical. Data clearly limits the finance function’s ability to guide corporate executives, business-unit managers, and sales and marketing functions in ways that ensure profitable business growth.

Financial Transformations Need to Fit Business Strategy


At the same time, it cannot be stressed enough that successful financial transformations need to be designed to fit with the company’s larger business strategy. The Conference Board suggests financial organizations should put real emphasis upon transformations that grow the business. Jonathan Byrnes at the MIT Sloan School has suggested “the most important issue facing most managers …is making more money from their existing businesses without costly new initiatives”. In Byrnes’ cross-industry research, he found that 30% or more of each company’s business is unprofitable. Byrnes claims these business losses are offset by what are “islands of high profitability”. The root cause of this issue, he asserts, is the inability of current financial and management control systems to surface profitability problems and opportunities for investment to accelerate growth. For this reason, financial transformations should, as a business goal, make it easier to evaluate business profitability.

A survey from CFO magazine found that nearly all respondents said their companies are striving to improve profitability over the next year. 87% said their companies needed to analyze financial and performance data much more quickly if they were to meet business targets. However, only 12% said their finance organizations can respond to requests for financial reports and analysis from business managers in real or near-real time. At the same time, business managers are expecting finance staff to be able to tell the story behind the numbers — to integrate financial and operational data in ways that get at the drivers of improvement.

We Are Talking About More than Financial Decision Making

This means not just worrying about financial decision making, but ensuring that the right questions and the right insights are being provided for the business. As Geoffrey Moore has indicated, economies of scale and market clout are no longer the formidable barriers to entry that they once were. The allocation of resources must be focused on a company’s most distinctive business capabilities—those things that provide the enterprise its “right to win”. To be strategic, CFOs need to become critical champions of the capabilities system, making sure it gets the investment and consideration it needs. This accentuates the CFO’s ongoing role as a voice of reason in M&A—favoring acquisitions that fit well with the company’s capabilities system, and recommending divestitures of products and services that don’t.

Parting Remarks

Today, the CFO role is being transformed to increasingly be a catalyst for change. This involves increasingly helping companies focus upon the business capabilities that drive value. CFOs are uniquely positioned to take on this challenge. They are the company leader that combines strategic insight with a line of sight into business execution. Moreover, unlike other change agents, CFOs have the power of the purse. However, to do this, their financial transformations need to ensure business agility and improve their own and the business’s ability to get and use data.

Related links:

CFOs Move to Chief Profitability Officer

CFOs Discuss Their Technology Priorities

The CFO Viewpoint upon Data

How CFOs can change the conversation with their CIO?

New type of CFO represents a potent CIO ally

Competing on Analytics

The Business Case for Better Data Connectivity

Twitter: @MylesSuer

Posted in B2B, Banking & Capital Markets

Big Data: The Emperor may have their Clothes on but…

Today, I am going to share what few others have so far been willing to share in public regarding big data. Before doing so, I need to first bring you up to speed on what I have already written on the topic. I shared previously that the term big data probably is not very useful or descriptive. Thomas Davenport, the author of Competing on Analytics, has found as well that “over 80 percent of the executives surveyed thought the term was overstated, confusing, or misleading”. At the same time, the CIOs that I have been talking to have suggested that I tell our reps never to open a meeting with the big data topic, but instead to talk about the opportunity to relate the volumes of structured data to even larger volumes of unstructured data.

I believe that the objective of these discussions increasingly should be to discover how it is possible to solve even bigger and meatier business problems. It is important, as I said recently, to take a systems view of big data, and this includes recognizing that business stakeholders will not use any data unless it is trustworthy, regardless of cost. Having made these points previously, I would like to bring to the forefront another set of issues that businesses should consider before beginning a big data implementation. I have come to my point of view here by listening to the big data vanguard. Many of these early adopters have told me that they jumped onto the big data bandwagon because they heard big data would be cheaper than traditional business intelligence implementations.

However, these enterprises soon discovered that they couldn’t lure away from their Silicon Valley jobs those practiced in the fine arts of Hadoop and MapReduce. They found as well that hand-coding approaches and the primitive data collection tools provided by the Hadoop vendors were not ready for prime time and did not by themselves save cost. These early pioneers found that they needed a way to automate the movement and modification of data for analysis. What they determined was needed is an automated, non-script-based way to pull and transform the data that populates their Hadoop or other big data systems, including real-time solutions like HANA and Vertica.


A new architecture for business intelligence

But as I have looked further into the needs of these early adopters, it became clear that they needed an architecture that could truly manage their end-to-end business intelligence requirements. They needed an architecture that would handle their entire data use lifecycle: the collection, inspection, connection, perfection, and protection of data.

Architecture requires a Data Lake

Obviously, I have already been discussing what could be called the collection phase. But to be clear, big data should be just one element of a larger collection scheme. No one is suggesting, for example, that existing business intelligence systems be replaced in wholesale fashion with the so-called newer approaches. Given this, business architecture needs to start by establishing a data lake approach that overarches the new data storage approaches and effectively sits side by side with existing business intelligence assets.

Data discovery starts by testing data relationships

Once new forms of data are collected using Hadoop or other forms of big data storage within an overarching data lake, users and analysts need to inspect the data collected as a whole and surface interrelationships with new and existing forms of data. What is needed in addition to data movement is a lake approach to deploy data and evaluate data relationships. Today, this involves enabling business intelligence users to self-serve. One CIO that heard about this lit up and said, “this is like orchestration. Users can assemble data and put it together and do it from different sources at different times. It doesn’t just have to be a preconceived process.”

Data Enrichment enables business decision making

Historically, users needed to know what data they wanted to analyze prior to building a business intelligence system. An advantage of Hadoop plus an overarching data lake is that you can put data in place prior to knowing whether the data has an interesting business use case or not. Once data is captured, tooling is needed to evaluate and put together data and test the strength of potential data relationships. This includes enabling business users to evaluate the types of analytics that could potentially have value to them. I shared recently just how important it is to visualize data in a way that culturally fits and derives the most potential business value.

Once data has been evaluated and relevant data relationships have been determined, it is important to have a way to siphon off data that has been determined to have potential business interest and do what you always did to this data. This includes adding meaningful additional structure and relationships to the data and fixing the quality of data that needs to be related and created within an analytic. This can include things like data mastering. In this data perfection stage, data relationships are extended and data quality and consistency are improved. For finance, it can mean integrating and then consolidating data for a total view of the financial picture. For marketing people, it can involve creating an integrated customer record, fusing together existing customer master data with external customer datasets to improve cross-sell and customer service. With this accomplished, it becomes an analysis decision and a cost decision whether data continues to be housed in Hadoop or managed in an existing traditional data warehouse structure.

Valuable data needs to be protected

Once data is created that can be used for business decision making, we need to take the final step of protecting the data that often costs millions of dollars to create, refine, and analyze. Recently, I was with a CIO and asked about the various hacks that have captured so much media attention. This CIO said that the CIOs at the companies that had been hacked were not stupid; it is simply hard to justify the business value of protecting the data that has been created. It seems clear to me, at least, that we need to protect data as an asset, as well as external access to it, given the brand and business impacts of being hacked.

Parting remarks

It seems clear that we need an architecture that is built to last and deliver sustaining value to the business. So here is the cycle again: collection, inspection, connection, perfection, and protection of data. Each step matters to big data, but also to the data architecture that big data adds onto.

Related links

Big Data Integration

Big Data Decision Automation

Data Mastering

Data Lake + Analysis

Related Blogs

Analytics Stories: A Pharmaceutical Case Study

Analytics Stories: An Educational Case Study

Analytics Stories: A Financial Services Case Study

Analytics Stories: A Healthcare Case Study

Major Oil Company Uses Analytics to Gain Business Advantage

Is it the CDO or CAO or Someone Else?

Should We Still be Calling it Big Data?

What Should Come First: Business Processes or Analytics?

Should Analytics Be Focused on Small Questions Versus Big Questions?

Who Owns Enterprise Analytics and Data?

Competing on Analytics

Is Big Data Destined To Become Small And Vertical?

Big Data Why?

What is big data and why should your business care?

Author Twitter: @MylesSuer

Posted in Big Data

Being a CIO Today Requires More Than a Service Broker Orientation

In my discussions with CIOs, their opinions differ widely about the go-forward nature of the CIO role. While most feel the CIO role will remain an important function, they also feel a sea change is in process. According to Tim Crawford, a former CIO and strategic advisor to CIOs, “CIOs are getting out of the data center business”. In my discussions, not all yet see the complete demise of their data centers. However, it is becoming more common for CIOs to see themselves “becoming an orchestrator of business services versus a builder of new operational services”. One CIO put it this way: “the building stuff is now really table stakes. Cloud and loosely oriented partnerships are bringing vendor management to the forefront”.

As more and more of the service portfolio is provided by third parties in either infrastructure as a service (IaaS) or software as a service (SaaS) modes, the CIO needs to take on what will become an increasingly important role: the service broker. An element of the service broker role that will have increasing importance is the ability to glue together business systems, whether they are on premise, cloud managed (IaaS), or software as a service (SaaS). Regardless of who creates or manages the applications of the enterprise, it is important to remember that integration is to a large degree the nervous system that connects applications into business capabilities. As such, the CIO’s team has a critical and continuing role in managing this linkage. For example, spaghetti-code integrations can easily touch 20 or more systems for ERP or expense management systems.

Brokering integration services

As CIOs start to consider the move to cloud, they need to determine how this nervous system is connected, maintained, and improved. In particular, they need to determine, maybe for the first time, how to integrate their cloud systems with the rest of their enterprise systems. They clearly can continue to do so by building and maintaining hand-coded integrations or by using their existing ETL tools. This can work where one takes on an infrastructure as a service model, but it falls apart when looking at the total cost of ownership of managing the change of a SaaS model. This fact begs an interesting question: shouldn’t the advantages of SaaS apply to integration as well? Shouldn’t there be Cloud Data Management (integration as a service) options? The answer is yes. Instead of investing in maintaining integrations of SaaS systems, which because of agile methodologies can change more frequently than traditional software, couldn’t someone else manage this mess for me?

The advantage of the SaaS model is total cost of ownership and faster time to value. Instead of managing integration between SaaS and historical environments yourself, the integration between SaaS applications and historical applications can be maintained by the Cloud Data Management vendor. This would save both cost and time. As well, it would free you to focus your team’s energy upon cleaning up the integrations between historical systems. This is a big advantage for organizations trying to get on the SaaS bandwagon without incurring significantly increased costs as a result.

Definitions please

Infrastructure as a Service (IaaS): provides processors, databases, etc. remotely, but you control and maintain what runs on them

Software as a Service (SaaS): provides software applications and the underlying infrastructure as a service

Cloud Data Management: provides integration of applications, in particular SaaS applications, as a service

Parting remarks

CIOs are embarking upon big changes. Building stuff is becoming less and less relevant. However, even as more and more services are managed remotely (even by other parties), it remains critical that CIOs and their teams manage the glue between applications. With SaaS applications in particular, this is where Cloud Data Management can really help you control integrations with less time and cost.

Related links

Solution Brief: Hybrid IT Integration

Related Blogs

What is the Role of the CIO in Driving Enterprise Analytics?

IT Is All About Data!

The Secret To Being A Successful CIO

Driving IT Business Alignment: One CIOs Journey

How Is The CIO Role Starting To Change?

The CIO Challenged

Author Twitter: @MylesSuer

Posted in Cloud Application Integration

Is it the CDO or CAO or Someone Else?

A month ago, I shared that Frank Friedman believes CFOs are “the logical choice to own analytics and put them to work to serve the organization’s needs”. Even though many CFOs are increasingly taking on what could be considered an internal CEO or COO role, many readers protested my post, which focused on reviewing Frank Friedman’s argument. At the same time, CIOs have been very clear with me that they do not want to personally become their company’s data steward. So the question becomes: should companies be creating a CDO or CAO role to lead this important function? And if yes, how common are these two roles anyway?

Regardless of eventual ownership, extracting value out of data is becoming a critical business capability. It is clear that data scientists should not be shoehorned into the traditional business analyst role. Data scientists have the unique ability to derive mathematical models “for the extraction of knowledge from data” (Data Science for Business, Foster Provost, 2013, p. 2). For this reason, Thomas Davenport claims that data scientists need to be able to network across an entire business and be able to work at the intersection of business goals, constraints, processes, available data, and analytical possibilities. Given this, many organizations today are starting to experiment with the notion of having either a chief data officer (CDO) or a chief analytics officer (CAO). The open question is: should an enterprise have a CDO, a CAO, or both? And just as important, where should each of these roles report in the organization?

Data policy versus business questions

In my opinion, it is critical to first look into the substance of each role before making a decision with regard to the above question. The CDO should be about ensuring that information is properly secured, stored, transmitted, or destroyed. This includes, according to COBIT 5, ensuring that there are effective security and controls over information systems. To do this, procedures need to be defined and implemented to ensure the integrity and consistency of information stored in databases, data warehouses, and data archives. According to COBIT 5, data governance requires the following four elements:

  • Clear information ownership
  • Timely, correct information
  • Clear enterprise architecture and efficiency
  • Compliance and security

To me, these four elements should be the essence of the CDO role. Having said this, the CAO is related but very different in terms of the nature of the role and the business skills required. The CRISP-DM model points out just how different the two roles are. According to CRISP-DM, the CAO role should be focused upon business understanding, data understanding, data preparation, data modeling, and data evaluation. As such, the CAO is focused upon using data to solve business problems, while the CDO is about protecting data as a business-critical asset. I was living in Silicon Valley during the “Internet Bust”. I remember seeing very few job descriptions, and the few that existed said they wanted a developer who could also act as a product manager and do some marketing as a part-time activity. This of course made no sense. I feel the same way about the idea of combining the CDO and CAO. One is about compliance and protecting data, and the other is about solving business problems with data. Peanut butter and chocolate may work in a Reese’s cup, but it will not work here: the orientations are too different.

So which business leader should own the CDO and CAO?

Clearly, having two more C’s in the C-suite creates a more crowded list of corporate officers. Some have even said that this will extend what is called senior executive bloat. And, of course, how do these new roles work with and impact the CIO? The answer depends on the organization’s culture, of course. However, where there isn’t an executive staff office, I suggest that these roles go to different places. Clearly, many companies already have their CIO function reporting to finance. Where this is the case, it is important to determine whether a COO function is in place. The COO clearly could own the CDO and CAO functions because they have a significant role in improving business processes and capabilities. Where there isn’t a COO function and the CIO reports to the CEO, I think you could have the CDO report to the CIO, even though CIOs say they do not want to be a data steward. This could be a third function in parallel with the VP of Ops and VP of Apps. In this case, I would have the CAO report to one of the following: the CFO, Strategy, or IT. Again, this all depends on current organizational structure and corporate culture. Regardless of where it reports, the important thing is to focus the CAO on an enterprise analytics capability.

Related Blogs

Should we still be calling it Big Data?

Is Big Data Destined To Become Small And Vertical?

Big Data Why?

What is big data and why should your business care?

Author Twitter: @MylesSuer

Posted in Big Data, CIO

Major Oil Company Uses Analytics to Gain Business Advantage

According to Michelle Fox of CNBC and Stephen Schork, the oil industry is in ‘dire straits’. U.S. crude posted its ninth straight weekly loss this week, landing under $50 a barrel. The news is bad enough that it is now expected to lead to major job losses. The Dallas Federal Reserve anticipates that Texas could lose about 125,000 jobs by the end of June. Patrick Jankowski, an economist and vice president of research at the Greater Houston Partnership, expects exploration budgets will be cut 30-35 percent, which will result in approximately 9,000 fewer wells being drilled. The problem is that “if oil prices keep falling, at some point it’s not profitable to pull it out of the ground” (“When, and where, oil is too cheap to be profitable”, CNBC, John W. Schoen).

This means that a portion of the world’s oil supply will become unprofitable to produce. According to Wood Mackenzie, “once the oil price reaches these levels, producers have a sometimes complex decision to continue producing, losing money on every barrel produced, or to halt production, which will reduce supply”. The question is: are these the only answers?

Major Oil Company Uses Analytics to Gain Business Advantage

A major oil company that we are working with has determined that data is a success enabler for their business. They are demonstrating what we at Informatica like to call a “data ready business”—a business that is ready for any change in market conditions. This company is using next-generation analytics to ensure their business’s survival and to make sure they do not become what Jim Cramer likes to call a “marginal producer”. This company has said to us that their success is based upon being able to extract oil more efficiently than its competitors.

Historically data analysis was pretty simple

Traditionally, oil producers would get oil by drilling a new hole in the ground, and in six months they would start getting the oil flowing commercially and be in business. This meant it would typically take them six months or longer before they could get any meaningful results, including data that could be used to make broader production decisions.

Drilling from data

Today, oil is also produced using shale and fracking techniques. This process can take only 30-60 days before oil producers start seeing results. It is based not just on innovation in the refining of oil, but also on innovation in the refining of data from which operational business decisions can be made. The benefits of this approach include the following:

Improved fracking process efficiency

Fracking is a very technical process. Producers can have two wells on the same field that are performing at very different levels of efficiency. To address this issue, the oil company that we have been discussing throughout this piece is using real-time data to optimize its oil extraction across an entire oil field or region. Insights derived from this data allow them to compare wells in the same region for efficiency or productivity and even switch off certain wells if the oil price drops below profitability thresholds. This ability is especially important as the price of oil continues to drop. At $70/barrel, many operators go into the red, while more efficient data-driven operators can remain profitable at $40/barrel. So efficiency is critical across a system of wells.
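A minimal sketch of that per-well decision, with hypothetical costs and prices (not this producer’s actual figures): compare each well’s per-barrel production cost against the current oil price and flag the wells that should be idled.

```python
import pandas as pd

# Hypothetical wells with their production rates and per-barrel production costs
wells = pd.DataFrame({
    "well_id":          ["A-1", "A-2", "B-7", "C-3"],
    "barrels_per_day":  [450, 120, 300, 80],
    "cost_per_bbl_usd": [38.0, 52.0, 44.0, 61.0],
})
oil_price_usd = 48.0  # current price per barrel (hypothetical)

wells["margin_per_bbl"] = oil_price_usd - wells["cost_per_bbl_usd"]
wells["daily_margin"] = wells["margin_per_bbl"] * wells["barrels_per_day"]
wells["action"] = wells["margin_per_bbl"].apply(lambda m: "produce" if m > 0 else "idle")
print(wells[["well_id", "margin_per_bbl", "daily_margin", "action"]])
```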

Using data to decide where to build wells in the first place

When constructing a fracking or sands well, you need more information on trends and formulas to extract oil from the ground. On a site with 100+ wells, for example, each one is slightly different because of water tables, ground structure, and the details of the geography. You need the right data, the right formula, and the right method to extract the oil at the best price while not impacting the environment at the very same time.

The right technology delivers the needed business advantage

Of course, technology has never been simple to implement. The company we are discussing was processing 1.2 petabytes of data, and this volume is only increasing. They are running fiber optic cables down into wells to gather data in real time. As a result, they are receiving vast amounts of real-time data but cannot store and analyze the volume of data efficiently in conventional systems. Meanwhile, the time to aggregate and run reports can miss the window of opportunity while increasing cost. Making matters worse, this company had a lot of different varieties of data. It also turns out that quite a bit of the useful information in their data sets was in the comments section of their source application. So traditional data warehousing would not help them extract the information they really need. They decided to move to a new technology, Hadoop. But even seemingly simple problems, like getting access to data, were an issue within Hadoop. If you didn’t know the right data analyst, you might not get the data you needed in a timely fashion. Compounding things, a lack of Hadoop skills in Oklahoma proved to be a real problem.

The right technology delivers the right capability

The company had been using a traditional data warehousing environment for years, but they needed help to deal with their Hadoop environment. This meant dealing with the volume, variety, and quality of their source well data. They needed a safe, efficient way to integrate all types of data on Hadoop at any scale without having to learn the internals of Hadoop. Early adopters of Hadoop and other big data technologies have had no choice but to hand-code using Java or scripting languages such as Pig or Hive. Hiring and retaining big data experts proved time consuming and costly. This is because data scientists and analysts can spend only 20 percent of their time on data analysis and the rest on the tedious mechanics of data integration such as accessing, parsing, and managing data. Fortunately for this oil producer, it didn’t have to be this way. They were able to avoid the specialized coding required to scale performance on distributed computing platforms like Hadoop. Additionally, they were able to “Map Once, Deploy Anywhere,” knowing that even as technologies change they can run data integration jobs without having to rebuild data processing flows.

Final remarks

It seems clear that we live in an era where data is at the center of just about every business. Data-ready enterprises are able to adapt and win regardless of changing market conditions. These businesses invested in building their enterprise analytics capability before market conditions changed. In this case, these oil producers will be able to produce oil at lower costs than others within their industry. Analytics provides three benefits to oil producers:

  • Better margins and lower costs from operations
  • Lower risk of environmental impact
  • Less time to build a successful well

In essence, those that build analytics as a core enterprise capability will continue to have a right to win within a dynamic oil pricing environment.

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform
Author Twitter: @MylesSuer

Posted in Big Data, CIO, Data Quality

Should We Still be Calling it Big Data?

Several months ago, I was talking to some CIOs about their business problems. During these conversations, I asked them about their interest in big data. One sophisticated CIO recoiled almost immediately, saying that he believes most vendors are really having a problem discussing “Big Data” with customers like him. It would just be so much easier if we would talk to him about helping his company with its structured data and unstructured data. At the same time, Gartner has found that 64% of enterprises surveyed say they’re deploying or planning to deploy a big data project. The problem is that 56% of those surveyed by Gartner are still struggling to determine how to get value out of big data projects, and 23% are struggling with the definition of what is big data and what is not.

Clearly, this says the term does not work with market and industry participants. To me this raises a question about the continued efficacy of the term. And now, Thomas Davenport, the author of “Competing on Analytics”, has suggested that we retire the term altogether. Tom says that in his research “nobody likes the term”. He claims in particular that executives yearn for a better way to communicate what they are doing with data and analytics.

Tom suggests in particular that “Big Data” has five significant flaws:

1) Big is relative. What is big today will not be so large tomorrow. Will we have to call the future version Big Big Data?

2) Big is only one aspect of what is distinctive about the data in big data. As my CIO friend said, it is not as much about the size of the data as it is about the nature of the data. Tom says bigness demands more powerful services, but a lack of structure demands different approaches to process the data.

3) Big data is defined as having volume, variety, and velocity. But what do you call data that has variety and velocity but is not “big”?

4) What do you call the opposite of big data? Is it small data? Nobody likes this term either.

5) Too many people are using “big data” incorrectly to mean any use of analytics, reporting, or conventional business intelligence.

Tom goes on to say, “I saw recently, over 80 percent of the executives surveyed thought the term was overstated, confusing, or misleading”. So Tom asks: why don’t we just stop using it? In the end, Tom struggles with ceasing his use of the term because the world noticed the name Big Data, unlike other technological terms. Tom has even written a book on the subject—“Big Data at Work”. The question I have is: do we in the IT industry really want to lose all the attention? It feels great to be in the cool crowd. However, CIOs that I have talked to say they are really worried about what will happen if their teams oversell big data and do not deliver tangible business outcomes. The reality, Tom says, is that rather than saying “we are cool and we are working on big data,” it would be more helpful to say “we’re extracting customer transaction data from our log files in order to help marketing understand the factors leading to customer attrition”. I tend to agree with this thought, but I would like to hear what you think. Should we as an industry retire the term Big Data?

Related Blogs

Is Big Data Destined To Become Small And Vertical?
Big Data Why?
What is big data and why should your business care?

Author Twitter: @MylesSuer

Posted in Data Governance

Good Corporate Governance Is Built Upon Good Information and Data Governance


As you may know, COSO provides the overarching enterprise framework for corporate governance. This includes operations, reporting, and compliance. A key objective for COSO is holding individuals accountable for their internal control responsibilities. The COSO process typically starts by assessing risks and developing sets of control activities to mitigate discovered risks.

On an ongoing basis, organizations then need as well to generate relevant, quality information to evaluate the functioning of established internal controls. And finally they need to select, develop, and perform ongoing evaluations to ascertain whether the internal controls are present and functioning appropriately. Having said all of this, the COSO framework will not be effective without first having established effective Information and Data Governance.

So you might be asking yourself, as a corporate officer, why you should care about this topic anyway. Isn’t this the job of the CIO or that new person, the CDO? The answer is no. Today’s enterprises are built upon data and analytics. The conundrum here is that “you can’t be analytical without data and you can’t be really good at analytics without really good data” (Analytics at Work, Thomas Davenport, Harvard Business Review Press, page 23). What enterprises tell us they need is great data—data which is clean, safe, and increasingly connected. And yes, the CIO is going to make this happen for you, but they are not going to do this appropriately without the help of data stewards that you select from your business units. These stewards need to help the CIO or CDO determine what data matters to the enterprise. What data should be secured? And finally, they will determine what data, information, and knowledge will drive the business’s right to win on an ongoing basis.

So now that you know why your involvement matters, I need to share that this control activity is managed by a supporting standard to COSO, COBIT 5. To learn specifically about what COBIT 5 recommends for Information and Data Governance, please click and read an article from the latest COBIT Focus entitled “Using COBIT 5 to Deliver Information and Data Governance”.

Twitter: @MylesSuer

Posted in CIO, Data Governance

Analytics Stories: A Case Study from Quintiles

As I have shared in other posts in this series, businesses are using analytics to improve their internal and external facing business processes and to strengthen their “right to win” within the markets in which they operate. For pharmaceutical businesses, strengthening the right to win begins and ends with the drug product development lifecycle. I remember, for example, talking several years ago to the CFO of a major pharmaceutical company and having him tell me that the most important financial metrics for him had to do with reducing the time to market for a new drug and maximizing the period of patent protection. Clearly, the faster a pharmaceutical company gets a product to market, the faster it can begin earning a return on its investment.

Fragmented data challenged analytical efforts

At Quintiles, what the business needed was a system with the ability to optimize the design, execution, quality, and management of clinical trials. Management’s goal was to dramatically shorten the time to complete each trial, including quickly identifying when a trial should be terminated. At the same time, management wanted to continuously comply with regulatory scrutiny from the Food and Drug Administration and use it to proactively monitor and manage notable trial events.

The problem was that Quintiles’ data was fragmented across multiple systems, and this delayed the ability to make business decisions. Like many organizations, Quintiles had data located in multiple incompatible legacy systems. This meant extensive manual data manipulation before data could become useful. As well, incompatible legacy systems impeded data integration and normalization, and prohibited a holistic view across all sources. Making matters worse, management felt that it lacked the ability to take corrective actions in a timely manner.

Infosario launched to manage Quintiles’ analytical challenges

To address these challenges, Quintiles leadership launched the Infosario Clinical Data Management Platform to power its pharmaceutical product development process. Infosario breaks down the silos of information that have limited combining massive quantities of scientific and operational data collected during clinical development with tens of millions of real-world patient records and population data. This step empowered researchers and drug developers to unlock a holistic view of data. This improved decision-making and ultimately increased the probability of success at every step in a product’s lifecycle. Quintiles Chief Information Officer Richard Thomas says, “The drug development process is predicated upon the availability of high quality data with which to collaborate and make informed decisions during the evolution of a product or treatment”.

What Quintiles has succeeded in doing with Infosario is the integration of data and processes associated with a drug’s lifecycle. This includes creating a data engine to collect, clean, and prepare data for analysis. The data is then combined with clinical research data and information from other sources to provide a set of predictive analytics. This of course is aimed at impacting business outcomes.

The Infosario solution consists of several core elements

At its core, Infosario provides the data integration and data quality capabilities for extracting and organizing clinical and operational data. The approach combines and harmonizes data from multiple heterogeneous sources into what is called the Infosario Data Factory repository. The goal is to accelerate reporting. Infosario leverages data federation/virtualization technologies to acquire information from disparate sources in a timely manner without affecting the underlying foundational enterprise data warehouse. As well, it implements rule-based, real-time intelligent monitoring and alerting to enable the business to tweak and enhance business processes as needed. A “monitoring and alerting layer” sits on top of the data, with the facility to rapidly provide intelligent alerts to appropriate stakeholders regarding trial-related issues and milestone events. Here are some more specifics on the components of the Infosario solution:

• Data Mastering provides the capability to link multi-domains of data. This enables enterprise information assets to be actively managed, with an integrated view of the hierarchies and relationships.

• Data Management provides the high performance, scalable data integration needed to support enterprise data warehouses and critical operational data stores.

• Data Services provides the ability to combine data from multiple heterogeneous data sources into a single virtualized view. This allows Infosario to utilize data services to accelerate delivery of needed information.

• Complex Event Processing manages the critical task of monitoring enterprise data quality events and delivering alerts to key stakeholders to take necessary action.

Parting Thoughts

According to Richard Thomas, “the drug development process rests on the high quality data being used to make informed decisions during the evolution of a product or treatment. Quintiles’ Infosario clinical data management platform gives researchers and drug developers the knowledge needed to improve decision-making and ultimately increase the probability of success at every step in a product’s lifecycle.” This enables enhanced data accuracy, timeliness, and completeness. On the business side, it has enabled Quintiles to establish industry-leading information and insight. And this in turn has enabled the ability to make faster, more informed decisions, and to take action based on insights. This importantly has led to a faster time to market and a lengthening of the period of patent protection.

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer

Posted in CIO, Data Governance, Data Quality

What Should Come First: Business Processes or Analytics?

As more and more businesses become fully digitized, the instantiation of their business processes and business capabilities becomes based in software. And when businesses implement software, there are choices to be made that can impact whether these processes and capabilities become locked in time or establish themselves as a continuing basis for business differentiation.

Make sure you focus upon the business goals

I want to suggest that whether the software instantiations of business processes and business capabilities deliver business differentiation depends upon whether business goals and analytics are successfully embedded in a software implementation from the start. I learned this first hand several years ago. I was involved in helping a significant insurance company with their implementation of analytics software. Everyone in the management team was in favor of the analytics software purchase. However, the project lead wanted the analytics completed after an upgrade had occurred to their transactional processing software. Fortunately, the firm’s CIO had a very different perspective. This CIO understood that decisions regarding the transaction processing software implementation could determine whether critical metrics and KPIs could be measured. So instead of doing analytics as an afterthought, this CIO had the analytics done as a forethought. In other words, he slowed down the transactional software implementation. He got his team to think first about the goals for the software implementation and the business goals for the enterprise. With these in hand, his team determined what metrics and KPIs were needed to measure success and improvement. They then required the transaction software development team to ensure that the software implemented the fields needed to measure the metrics and KPIs. In some cases, this was as simple as turning on a field or training users to enter a field as the transaction software went live.

Make the analytics part of everyday business decisions and business processes

The question is how common this perspective is, because it really matters. Tom Davenport says that “if you really want to put analytics to work in an enterprise, you need to make them an integral part of everyday business decisions and business processes—the methods by which work gets done” (Analytics at Work, Thomas Davenport, Harvard Business Review Press, page 121). For many, this means turning their application development on its head, like our insurance CIO. It means in particular that IT implementation teams should no longer be about just slamming in applications. They need to be more deliberate. They need to start by identifying the business problems that they want solved through the software instantiation of a business process. They need as well to start with how they want the software to improve the process, rather than thinking about getting the analytics and data in as an afterthought.

Why does this matter so much? Davenport suggests that “embedding analytics into processes improves the ability of the organization to implement new insights. It eliminates gaps between insights, decisions, and actions” (Analytics at Work, Thomas Davenport, Harvard Business Review Press, page 121). Tom gives the example of a car rental company that embedded analytics into its reservation system and was able, with the data provided, to expunge long-held shared beliefs. This change resulted in a 2% increase in fleet utilization and returned $19m to the company from just one location.

Look beyond the immediate decision to the business capability

Davenport also suggests that enterprises need to look beyond their immediate task or decision and appreciate the whole business process, or what happens upstream or downstream. This argues that analytics be focused on the enterprise capability system. Clearly, maximizing the performance of the enterprise capability system requires an enterprise perspective upon analytics. It should also be noted that a systems perspective allows business leadership to appreciate how different parts of the business work together as a whole. Analytics, therefore, allows the business to determine how to drive better business outcomes for the entire enterprise.

At the same time, focusing upon the enterprise capabilities system will in many cases over time lead to a reengineering of overarching business processes and a revamping of their supporting information systems. This in turn allows the business to capitalize on the potential of business capability and analytics improvement. From my experience, most organizations need some time to see what a change in analytics performance means. This is why it can make sense to start by measuring baseline process performance before determining enhancements to the business process. Once completed, refinements to the enhanced process can be determined by continuously measuring process performance data.

Related Blogs

Analytics Stories: A Banking Case Study
Analytics Stories: A Financial Services Case Study
Analytics Stories: A Healthcare Case Study
Who Owns Enterprise Analytics and Data?
Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR
Thomas Davenport Book “Competing On Analytics”

Solution Brief: The Intelligent Data Platform

Author Twitter: @MylesSuer

Posted in B2B, B2B Data Exchange, Business Impact / Benefits, Business/IT Collaboration, Enterprise Data Management