Artificial Intelligence Runs on Data, and More is Better

The artificial intelligence revolution is gaining critical mass within organizations. A new survey of 835 executives across the globe by Tata Consultancy Services (TCS) finds that 84% are using some amount of AI technology in their businesses today. Areas being touched – or soon to be touched – by AI include IT, sales, marketing, customer service, finance, strategic planning, corporate development, and HR.

Before executives begin to bet their businesses on AI, however, they’re going to have to make sure the applications have been fed all the data they need. This may take time, as AI-based applications often need to “learn” before they are fully proficient. Machines may be very different from people, but both learn from experience, and both need continuous streams of data to do so. Humans have been learning from experience for thousands of years; machines have only begun to develop that capacity in the past few years, through artificial intelligence and machine learning.

In a recent Harvard Business Review article, Ajay Agrawal, Joshua Gans, and Avi Goldfarb, all of the University of Toronto, explored the developing AI phenomenon and the challenge organizations face as they move forward with it. As with humans, learning is more critical for some roles than others – compare a fast-food worker with an airline pilot. Some learning only needs to be “good enough,” while other learning needs to be precise.

That ‘good enough’ principle applies to artificial intelligence as well, and that’s where things get risky for enterprises. An AI-enabled application may need to do some “learning” before it is ready to take on heavy lifting without human intervention. Agrawal and his co-authors point to autonomous cars as a case in point: there is life-and-death risk in putting these systems out on the road, yet that is the only way the systems can learn and improve their responses to various situations.

On a corporate level, the equivalent is trusting financial systems to AI while the applications are still rudimentary and still learning from data drawn from real-world situations. “Machines learn faster with more data, and more data is generated when machines are deployed in the wild,” Agrawal and his co-authors observe. “However, bad things can happen in the wild and harm the company brand. Putting products in the wild earlier accelerates learning but risks harming the brand (and perhaps the customer!); putting products in the wild later slows learning but allows for more time to improve the product in-house and protect the brand (and, again, perhaps the customer). As more companies seek to take advantage of machine learning, this is a trade-off more and more will have to make.”

Tolerance for error is a key consideration. An email filtering application doesn’t need exacting standards; autonomous driving does. An application with a low tolerance for error may need a considerable amount of data directed at it as part of a continuous improvement feedback loop.
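To make that feedback loop concrete, here is a minimal sketch in Python, assuming a scikit-learn-style online learner. The tolerance value, the synthetic batch stream, and the readiness flag are all hypothetical stand-ins; the point is simply that the model keeps consuming new data while its measured error is compared against the error budget the application can tolerate.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical error budget: an email filter might tolerate a few percent
# of misclassifications, while a safety-critical system would not.
ERROR_TOLERANCE = 0.02

model = SGDClassifier()          # a linear model that supports online learning
classes = np.array([0, 1])       # e.g., 0 = legitimate email, 1 = spam

def stream_of_batches(n_batches=50, batch_size=100, n_features=20):
    """Stand-in for a live feed of labeled examples arriving over time."""
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
        yield X, y

ready_for_autonomy = False
for X_batch, y_batch in stream_of_batches():
    if not ready_for_autonomy and hasattr(model, "coef_"):
        # Score the new batch *before* training on it, so the error
        # estimate reflects data the model has not yet seen.
        error_rate = 1.0 - model.score(X_batch, y_batch)
        if error_rate <= ERROR_TOLERANCE:
            ready_for_autonomy = True
    # Feed the batch back into the model: the feedback loop itself.
    model.partial_fit(X_batch, y_batch, classes=classes)

print("Cleared for unsupervised operation:", ready_for_autonomy)
```

In this toy setup the filter earns autonomy only once its error on unseen batches falls inside the budget; a safety-critical system would set a far smaller tolerance and demand far more evidence before crossing that line.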

Organizations also need to consider how important it is to capture user data in the wild. Countless inputs may shape the decisions an AI application makes, so enterprises need to sort out which inputs are relevant and impactful and which are just a mountain of data.
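One way to start that sorting is a simple relevance screen. The sketch below, again purely illustrative, uses scikit-learn’s mutual-information scorer on synthetic data to rank candidate inputs by how much they actually tell a model about the outcome; the data and the choice of scorer are assumptions, not a prescription.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for a "mountain of data": 1,000 records with 50
# candidate inputs, only three of which actually drive the outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 50))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

# Estimate how much each input tells us about the outcome.
scores = mutual_info_classif(X, y, random_state=1)

# Rank inputs from most to least informative; downstream modeling and
# data-collection effort can then focus on the top of the list.
ranking = np.argsort(scores)[::-1]
print("Most informative inputs:", ranking[:5])
```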
