Does Cost Limit the Use and Relevance of Analytics in Business Decision Making?
In 1865, William Jevons stumbled upon an economic paradox. As new technology made it more efficient to use coal, coal consumption rose rather than fell. On the face of it, this result was counter-intuitive. As an economist, Jevons expected that better efficiency would lead to a decrease in the use of coal. However, as coal-derived power got cheaper, the market for coal expanded. This phenomenon became known as the “Jevons Paradox.”
How does the Jevons Paradox apply to business intelligence?
In many respects, business analytics today is in a position similar to coal’s in 1865. Historically, it has been very expensive to create business analytics. For this reason, three interesting phenomena occurred. First, analytics was rationed to the most important groups within the enterprise. Second, analytics was limited to topics where there was a priori knowledge of a relationship between the variables for which data was collected. And finally, the metrics and key performance indicators that were enabled tended to be those whose underlying data was easy to collect. In practice, this meant that the data used in analytics was determined by the design of business application sources, instead of business questions driving data requirements and, in turn, application implementation. This created a business intelligence world with very little data or knowledge discovery. Business intelligence practitioners could only answer the questions that business users knew to ask and that application developers had chosen to enable in their designs. This led business intelligence to be deductive rather than inductive about business processes and knowledge discovery.
A rapidly changing business landscape demands change
In The Power of Asking Pivotal Questions, Schoemaker and Krupp effectively challenge the above way of thinking. They say that “in a rapidly changing business landscape, executives need the ability to quickly spot new opportunities and hidden risks,” and that “good strategic thinking and decision making often require a shift in perspective…What worked in the past simply may not apply to the future.” How can we answer what they describe without changing the way we create analytical solutions? And how do we ensure our analytical processes are not limited by the design of our transactional systems? I believe fixing this requires us to turn application development on its head. We need analytical thinking to drive application development rather than the other way around. In other words, enterprises need to ask their business questions before they build their transactional systems. We cannot answer today’s business questions if data is only an afterthought. Where possible, we should also collect unstructured and semi-structured data of potential business interest and relevance.
Lowering the cost of business intelligence requires that we change how it is done
At the same time, our data evaluation and analysis needs to move toward a more empirical, a posteriori approach. Here we need to collect any potentially meaningful data—structured, semi-structured, and unstructured. And we need to fundamentally change how business intelligence is created. I met with an enterprise architect a few weeks back. He said that his team spends significant effort and dollars building integrations that the business ends up rejecting as not meeting its needs. The opportunity he perceived is to give the business user the power in business intelligence.
To do this, we need to place potentially relevant data into the so-called Data Lake before investing significantly in it. We then need to enable end users to manage the process of creating business intelligence. At the same time, instead of building analytics on a priori premises, we need to let the data speak to the user and, with this knowledge, allow them to determine which analytics make business sense. This is why the move to Hadoop-based analytical solutions is only the first step.
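As a concrete illustration of placing data in the lake before investing in it, here is a minimal schema-on-read sketch in Python. The directory layout, field names, and file format below are hypothetical, not a description of any particular product:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical lake layout: raw records land as-is (schema-on-read),
# so modeling is deferred until a business question arrives.
lake_root = Path(tempfile.mkdtemp())
landing = lake_root / "landing" / "orders"
landing.mkdir(parents=True)

# Events are stored in whatever shape the source emits; no
# normalization happens on ingest, which keeps ingest cheap.
raw_events = [
    {"order_id": 1, "amount": "19.99", "channel": "web"},
    {"order_id": 2, "amount": "5.00"},                  # missing field is fine
    {"order_id": 3, "amount": "7.50", "note": "gift"},  # extra field is fine
]
(landing / "2024-01-01.jsonl").write_text(
    "\n".join(json.dumps(event) for event in raw_events)
)

# A schema is applied only at read time, driven by the question asked.
def read_orders(path: Path) -> list[dict]:
    rows = []
    for line in path.read_text().splitlines():
        event = json.loads(line)
        rows.append({
            "order_id": event["order_id"],
            "amount": float(event["amount"]),
            "channel": event.get("channel", "unknown"),
        })
    return rows

orders = read_orders(landing / "2024-01-01.jsonl")
total_revenue = sum(order["amount"] for order in orders)
```

The design point is that ingest stays cheap because no schema is imposed up front; the cost of interpreting the data is paid only when a business question calls for it.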
Putting everything together
So what changes? As shown in the diagram below, the opportunity in front of us is to ingest the overarching data set into a Data Lake that mixes a Hadoop cluster with traditional data warehousing approaches. It is wise to make sure that applications and other data sources collect potentially relevant data that responds to the business questions derived from business strategy. And where needed, transactional and other inputs should be optimized to answer relevant business questions. But this should be the limit of data investment at this point.
Once the above has occurred, we need to go after the rest of the cost of business intelligence implementations. We can do this by making business intelligence truly intelligent. Historically, business intelligence has worked like this: business users tell developers what they need; developers evaluate, correct, and connect the data; and developers then build the required data extraction and cleansing flows.
As described above, poor communication does happen, but the problem is bigger than communication alone. By creating metadata on the fly and automating data profiling, much of what took days or months can be done in minutes or hours. Put simply, the messy work of business intelligence can be eliminated. These two steps allow business users to evaluate and then connect data sets, discovering the unexpected data relationships that we have been discussing. At the same time, the system can recommend ways to fix data issues, and business users can make these changes directly. In other words, business users can see where the data gaps are and, with the aid of a recommendation engine, have data cleansed, improved, and enriched before it is shared. Finally, by enabling business users to pull in internal and external sources directly and see the data relationships presented, they can directly understand how the data relates. This means business users get the data they need to make better decisions the first time. And when the business user is happy with the data, the system allows IT to directly harvest what the business user has created. This dramatically reduces the time IT needs to put the data into production. It fundamentally changes the productivity of everyone in the business intelligence value chain.
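To make “automating data profiling” less abstract, here is a minimal sketch assuming simple row-oriented records. The column names and the type-inference rules are illustrative only, not any real product’s behavior:

```python
from collections import Counter

# Hypothetical profiling sketch: derive per-column metadata (inferred
# type, distinct count, missing count) directly from the data, rather
# than waiting for a developer to inspect it by hand.
def infer_type(value: str) -> str:
    try:
        int(value)
        return "integer"
    except ValueError:
        pass
    try:
        float(value)
        return "number"
    except ValueError:
        return "text"

def profile(rows: list[dict]) -> dict:
    report = {}
    columns = {key for row in rows for key in row}
    for col in sorted(columns):
        # Treat empty strings and None as missing values.
        values = [row[col] for row in rows if row.get(col) not in (None, "")]
        types = Counter(infer_type(str(v)) for v in values)
        report[col] = {
            "inferred_type": types.most_common(1)[0][0] if types else "unknown",
            "distinct": len(set(values)),
            "missing": len(rows) - len(values),
        }
    return report

rows = [
    {"customer_id": "101", "region": "EMEA", "spend": "250.0"},
    {"customer_id": "102", "region": "",     "spend": "99.5"},
    {"customer_id": "103", "region": "APAC", "spend": "250.0"},
]
report = profile(rows)
```

A report like this is the kind of on-the-fly metadata that lets a business user judge a data set’s fitness (gaps, cardinality, likely types) in minutes instead of waiting on a manual inspection cycle.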
Once relationships are discovered and automation creates initial data flows, further cleansing, integration, and consolidation of data can occur before the presentation layer is created. What is different is that we collect data in a low-cost Hadoop cluster and Data Lake, and create the data organization and presentation in an a posteriori fashion.
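One hedged sketch of how relationships can be “discovered” automatically: propose candidate join keys by measuring value overlap between the columns of two data sets, so unexpected relationships surface without an a priori model. The data sets and the overlap threshold below are invented for illustration:

```python
# Hypothetical relationship discovery: two columns whose value sets
# overlap heavily are likely join-key candidates.
def column_values(rows: list[dict]) -> dict[str, set]:
    cols: dict[str, set] = {}
    for row in rows:
        for key, value in row.items():
            cols.setdefault(key, set()).add(value)
    return cols

def candidate_joins(left: list[dict], right: list[dict],
                    threshold: float = 0.5) -> list[tuple]:
    matches = []
    left_cols, right_cols = column_values(left), column_values(right)
    for lname, lvals in left_cols.items():
        for rname, rvals in right_cols.items():
            # Overlap relative to the smaller column's value set.
            overlap = len(lvals & rvals) / min(len(lvals), len(rvals))
            if overlap >= threshold:
                matches.append((lname, rname, round(overlap, 2)))
    return sorted(matches, key=lambda m: -m[2])

# Illustrative inputs: a CRM extract and a billing extract that share
# customer identifiers under different column names.
crm = [{"cust": "101", "region": "EMEA"},
       {"cust": "102", "region": "APAC"}]
billing = [{"account": "101", "amount": "9.99"},
           {"account": "102", "amount": "5.00"}]
joins = candidate_joins(crm, billing)
```

Presenting ranked candidates like these to the business user, rather than hard-coding joins in an integration, is what lets the data “speak” before any flow is built.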
Using new approaches, we have an opportunity to rethink analytics. Instead of being limited by data models that must be determined before the data exists, we can change how much of the work of business intelligence is done. This creates greater efficiency between business users and business intelligence developers, and it maximizes the value that can be provided to the data-driven business user. All of this lowers the cost of making an informed decision, because it is cheaper to collect data when the uplift on the data happens after it is collected and its value has been determined. The question is: will this drive up the number of analytics used per business as the marginal cost of analytics is dramatically reduced? And more importantly, will this give new meaning to the Jevons Paradox?