Machine Learning: Why Should You Care?
Learning, at its heart, is what has advanced our civilization in every arena. We learnt what worked and what didn’t, and applied those lessons in ways that produced both incremental improvements and disruptive innovations. Human learning has been the centerpiece, the guiding beacon if you will, in training our minds to think smarter and act wiser, one step at a time. Machine learning stems from that same philosophy: learning on the go, guided by data. The distinction is that the lessons learnt, the insights captured from past behavior, are applied to improve future performance without human intervention. Because the approach is built on algorithms and mathematical formulas, it lends itself to fine-tuning, and the bias is clearly towards experimentation before instrumentation. The expected benefits include self-improvement and self-healing techniques and principles, suggested by the data itself, in a guided fashion.
Why Should It Be Now?
In ‘The Attacker’s Advantage: Turning Uncertainty into Breakthrough Opportunities’, renowned business guru and bestselling author Ram Charan argued that digitization and the integration of technologies through software and hardware have already reshaped many businesses, and that much more is to come. He went on to suggest that specific skills will be required to spot disruptions on their way and to take advantage of them with the appropriate next steps. Although not entirely comfortable with it yet, we have grudgingly begun to recognize uncertainty as ubiquitous: in the ways we consume products and services, perform transactions, expect interactions, collaborate with each other, and make decisions. Structural changes along many dimensions are moving rapidly, and the nature of the interplay among them is at best unknown.
Humans, with their limited processing capability, will struggle to meet business demands at the expected pace in the face of such a powerful combination of forces of change. The bedrock of the models human experts use to predict outcomes comes from innovations in the field of statistics dating back to the 19th and 20th centuries, a time when data was scarce. The modern world presents an entirely different scenario, with petabytes and exabytes of data moving at very high speed. This is where machine learning can lend a hand to our mission: predict customer churn, identify the next segment to launch a product for, or create a drug in preparation for an anticipated outbreak. Machine learning algorithms are inherently not constrained by the preset assumptions of statistical models from past centuries, when data sets were much smaller.
How Could It Actually Help?
A McKinsey report titled “An Executive Guide to Machine Learning” states that European banks have built micro-targeted models to forecast accurately which customers will cancel their service or default on their loans, and then to determine how best to intervene. The report goes on to state that more than a dozen banks have already replaced older statistical modeling approaches with machine learning techniques, and most of them have seen improvements of 10 to 20 percent in new sales or reduced churn.
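To make the churn use case concrete, here is a minimal sketch, not the banks’ actual models, of how a churn predictor can be trained on past customer behavior. It fits a plain logistic regression with gradient descent; the features, their values, and the labels are entirely invented for illustration:

```python
import math

# Hypothetical training data: [monthly_fee (in hundreds), support_calls,
# years_as_customer] paired with a churn label (1 = cancelled, 0 = stayed).
# All values are invented for this sketch.
data = [
    ([0.70, 5, 1.0], 1), ([0.30, 0, 6.0], 0),
    ([0.65, 4, 0.5], 1), ([0.25, 1, 8.0], 0),
    ([0.80, 6, 2.0], 1), ([0.35, 0, 5.0], 0),
]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    # Probability that this customer churns.
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Plain gradient descent on the logistic loss.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.05
for _ in range(2000):
    for x, y in data:
        err = predict(weights, bias, x) - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

# Score a new customer (again, invented numbers).
risk = predict(weights, bias, [0.75, 5, 1.0])
print(f"churn risk: {risk:.2f}")
```

The model here learns from the data alone, with no hand-written rules: the same loop, fed fresh behavior records, keeps refining its weights, which is the “without human intervention” point made above.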
Genetic algorithms and neural networks, with their advanced heuristic-based search algorithms, were early trailblazers of machine learning technology. A heuristic, or best-guess, search technique evaluates the paths available from a specific point and suggests the option with the highest likelihood of success. Such an approach focuses on finding an acceptable solution where an optimal or perfect solution is impossible or impractical, given the complexity of the environment and the data sets. Speed is of the essence in finding such a solution, or in determining the immediate next step along the overall path. The big data realm presents similar complexity challenges: cutting through the noise to find the signal within a reasonable amount of time and cost can matter more than finding the perfect solution at great cost. Even the notion of ‘reasonable time’ varies by use case, but no matter what, the demand for detecting insights faster grows with every passing day.
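The heuristic idea described above can be sketched as a greedy best-first search: at each step, expand the frontier node the heuristic scores as most promising, rather than exhaustively exploring every path. The toy graph and heuristic values (estimated distance to the goal) below are invented for illustration:

```python
import heapq

# Toy graph: each node maps to its neighbors. Both the graph and the
# heuristic scores are made up for this sketch.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}
heuristic = {"A": 5, "B": 4, "C": 2, "D": 2, "E": 1, "F": 0}

def greedy_best_first(start, goal):
    """Repeatedly follow the lowest-heuristic frontier node to the goal.

    Fast, but only 'good enough': unlike exhaustive search, it may
    return a suboptimal path, the trade-off described above.
    """
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None  # goal unreachable

print(greedy_best_first("A", "F"))  # → ['A', 'C', 'E', 'F']
```

Notice that the search commits to the most promising-looking neighbor at each step and never revisits its choices, which is exactly why it is fast and why the answer is acceptable rather than guaranteed optimal.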
Prerequisite: Let’s Visit Data Quality, Again!
To deploy a successful machine learning strategy and program, an organization needs a robust algorithmic and/or mathematical approach that delivers the best heuristic-type capability. That in turn requires a large supply of useful, reliable data to train the algorithm in the first place; without it, the learning capability will be limited to the vagaries of data velocity. Developing and implementing a trusted machine learning capability therefore requires an end-to-end data management strategy: one that ensures access to troves of data of all sources and sizes, and then makes that data useful in a clean, safe, and connected fashion. Armed with this capability, an organization can progress from using data to describe a story, to predicting how the story might unfold, to finally prescribing what to do under different scenarios.
For all the high-end capability data can offer, one thing is certain: without clean and trusted data, this journey won’t be fruitful. Implementing an exotic data mining algorithm or connecting to newer data sources provides only marginal benefit compared with what reliable data can be made to yield. Machine learning, paired with this data management hygiene and quality of available data, will truly deliver competitive benefits to an organization: spotting the opportunities to disrupt and seizing them. Is your organization ready for machine learning? Do you have any current plans?