The Deepening Mystery of Artificial Intelligence Decision-Making

More and more of our business engagements are being driven by algorithms, and lately there has been a surprising revelation about them: even the most astute data scientists among us often don’t know how these algorithms arrive at the results they issue.

This deepening mystery of artificial intelligence decision-making was recently explored by Will Knight, senior editor for AI at MIT Technology Review, who began by pointing to the artificial intelligence powering a recently developed self-driving car: “The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.”
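
To make the “taught itself by watching” idea concrete, here is a minimal behavioral-cloning sketch in Python. It is not the system Knight describes; the network shape, placeholder data, and names are illustrative assumptions. What it shows is that no engineer writes a driving rule anywhere: the behavior lives entirely in learned weights.

```python
# A minimal behavioral-cloning sketch (illustrative, not the actual system):
# a small network learns to steer by regressing the human driver's steering
# angles from camera frames. Shapes and data here are placeholders.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 1),  # 13x13 follows from 64x64 inputs
        )

    def forward(self, frames):
        return self.net(frames)  # one steering angle per frame

model = DrivingPolicy()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

frames = torch.randn(8, 3, 64, 64)  # stand-in for recorded camera frames
angles = torch.randn(8, 1)          # stand-in for the human's steering angles

for _ in range(10):                 # a few imitation-learning steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)  # imitate the human's actions
    loss.backward()
    optimizer.step()
```

Once trained, the model’s steering decisions are distributed across thousands of weights, with no single line of logic a programmer could point to – which is precisely the opacity Knight finds unsettling.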

Deep learning technology, which is growing ubiquitous and has enormous potential, is making decisions that are often inscrutable to developers and end-users alike. “It will be hard to predict when failures might occur – and it’s inevitable they will,” Knight warns. The implications are far-reaching, as AI is now employed in a range of activities, from medicine to customer relations to military planning.

“There is no easy way to untangle the mystery of AI decision-making,” Knight continues. One proposal is to add a step to the process in which “the system extracts and highlights snippets of text that are representative of a pattern it has discovered.” He cites a project at the Defense Advanced Research Projects Agency (DARPA), the Explainable Artificial Intelligence program, which aims to pair AI decisions with explanations of how they were reached. The challenge is that these explanations may be too brief and incomplete.
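
As a rough sketch of that extra step (a toy linear stand-in, not the systems Knight references, with invented example documents), a classifier can surface the words that weigh most heavily in its prediction:

```python
# Toy illustration of "extract and highlight" explanations: after a linear
# text classifier is trained, the words that push its prediction hardest
# are surfaced as evidence. All documents and labels here are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["engine stalls on cold start", "brakes respond late downhill",
        "smooth ride, no issues", "comfortable seats, quiet cabin"]
labels = [1, 1, 0, 0]  # 1 = problem report, 0 = praise

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

def highlight(text, top_k=3):
    """Return the words in `text` that contribute most to the prediction."""
    row = vectorizer.transform([text])
    vocab = vectorizer.get_feature_names_out()
    # per-word contribution = word count * learned weight
    contrib = {vocab[j]: row[0, j] * clf.coef_[0][j]
               for j in row.nonzero()[1]}
    return sorted(contrib, key=lambda w: abs(contrib[w]), reverse=True)[:top_k]

print(highlight("engine stalls and brakes respond late"))
```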

At the root of these challenges is the issue of trust. Just as decision-makers need to be able to trust the data feeding their systems, they need to trust the algorithms that draw actionable insights from that data. The need for critical thinking is not going away anytime soon, no matter how sophisticated our AI systems become. Trust is the ultimate hurdle that AI and machine learning will need to overcome.

Establishing trust in AI was recently addressed in a report by IBM’s Dr. Guruduth Banavar, who advocates guidelines and governance for AI implementations. He also supports greater transparency in AI solutions, delivered through additional programming steps such as those pursued in DARPA’s Explainable Artificial Intelligence program. “Trust is built upon accountability. As such, the algorithms that underpin AI systems need to be as transparent, or at least interpretable, as possible. In other words, they need to be able to explain their behavior in terms that humans can understand – from how they interpreted their input to why they recommended a particular output. To do this, we recommend all AI systems should include explanation-based collateral systems.”
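
To picture what such explanation-based collateral might look like in code (a hypothetical sketch; the loan-risk scenario, feature names, and linear model are illustrative assumptions, not IBM’s design), a system can return its reasoning alongside each recommendation:

```python
# Hypothetical sketch of "explanation-based collateral": every prediction is
# returned together with a plain-language account of which inputs drove it.
# The loan-risk framing, feature names, and data are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "late_payments"]
X = np.array([[60, 0.2, 0], [25, 0.7, 4], [48, 0.4, 1], [18, 0.9, 6]])
y = np.array([0, 1, 0, 1])  # 1 = high risk (toy labels)

clf = LogisticRegression().fit(X, y)

def predict_with_explanation(x):
    """Return a decision plus the per-feature reasoning behind it."""
    risk = clf.predict_proba([x])[0, 1]
    contributions = clf.coef_[0] * x  # each input's pull on the log-odds
    reasons = sorted(zip(features, contributions),
                     key=lambda pair: abs(pair[1]), reverse=True)
    return {"risk": round(float(risk), 2),
            "because": [f"{name} moved the score by {c:+.2f}"
                        for name, c in reasons]}

print(predict_with_explanation(np.array([30, 0.8, 3])))
```

For a linear model this account is faithful by construction; producing comparably honest explanations for deep networks is the harder, open problem the article describes.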
