Getting Ready for Artificial Intelligence’s Point of No Return

Last month, it was revealed that some students who thought they were communicating with their professor’s teaching assistant were actually talking with an artificial intelligence (AI) system.

“Ashok Goel, a professor at Georgia Institute of Technology, has just revealed that he has been employing a robot as one of his teaching assistants. ‘Jill Watson’ has been doing regular TA work for Goel, answering students’ questions in a forum, reminding students of upcoming important dates over email—and all of this in a way that was so human, students never realized that they were talking to a robot.”


Surprise! Georgia Tech Teaching Assistant Isn’t Human, She’s a Robot

A lot of data was involved as well: more than 40,000 postings in a discussion forum, which the system matched against previous responses to answer related questions.
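The article doesn’t describe how Jill Watson works internally, but the general pattern is easy to picture: match a new question against a bank of previously answered ones and reply only when the match is confident. The sketch below is purely illustrative, with a made-up question bank, a simple bag-of-words similarity, and an arbitrary confidence threshold; it is not a description of Goel’s actual system.

```python
import math
from collections import Counter

def vectorize(text):
    """Turn a question into a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical bank of previously answered forum questions (invented for illustration).
answered = {
    "when is the project 2 deadline": "Project 2 is due Friday at 11:59 pm.",
    "where do i submit the final report": "Upload the report to the class portal.",
}

def reply(new_question, threshold=0.5):
    """Answer only when a prior question is a confident match; otherwise defer to a human TA."""
    vec = vectorize(new_question)
    best = max(answered, key=lambda q: cosine(vec, vectorize(q)))
    if cosine(vec, vectorize(best)) >= threshold:
        return answered[best]
    return None  # low confidence: leave the post for a human TA

print(reply("when is project 2 due"))  # -> "Project 2 is due Friday at 11:59 pm."
```

A real deployment would use a far richer language model and the full 40,000-post corpus; the point of the sketch is the shape of the loop: retrieve, score, and defer to a human when confidence is low.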

With this experiment, which blew the doors off the Turing test, we’re at the point of no return: AI interactions are becoming indistinguishable from those with humans. Perhaps it’s time to consider the implications.

Members of the IEEE have taken up this challenge, developing and issuing ethical guidelines for AI and the actions that need to be taken as AI becomes a bigger part of business and society.

The IEEE document covers a wide range of scenarios, and here are some nuggets:

How can we assure that artificial intelligence is accountable? “Based on the cultural context, application and use of artificial intelligence, people and institutions need clarity around the manufacture of these systems to avoid potential harm. Additionally, manufacturers of these systems must be able to provide programmatic level accountability proving why a system operates in certain ways to address legal issues of culpability, and to avoid confusion or fear within the general public.”

How can we ensure that artificial intelligence is transparent? “A transparent artificial intelligence is one in which it is possible to discover how and why the system made a particular decision, or in the case of a robot, acted the way it did. Artificial intelligence will be performing tasks that are far more complex and impactful than prior generations of technology, particularly with systems that interact with the physical world, thus raising the potential level of harm that such a system could cause.”

How can we extend the benefits and minimize the risks of artificial intelligence being misused? “Such risks might include hacking, ‘gaming,’ or exploitation (e.g., of vulnerable users by unscrupulous manufacturers). Raise public awareness around the issues of potential artificial intelligence misuse in an informed and measured way by providing ethics education and security awareness that sensitizes society to the potential risks of misuse of artificial intelligence.”

Computers and robots already instantiate values in their choices and actions, but these values are programmed or designed by the engineers who build the systems. “Increasingly, autonomous systems will encounter situations that their designers cannot anticipate, and will require algorithmic procedures to select the better of two or more possible courses of action. Some of the existing experimental approaches to building moral machines are top-down. In this sense the norms, rules, principles, or procedures are used by the system to evaluate the acceptability of differing courses of action or as moral standards or goals to be realized.”
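The excerpt describes top-down approaches only in the abstract. One way to make the idea concrete, purely as a hypothetical sketch, is a filter in which explicit, designer-supplied rules screen candidate actions and an ordinary objective ranks whatever survives; the rule names and actions below are invented for illustration.

```python
# A minimal, hypothetical sketch of a "top-down" moral filter: explicit rules
# screen candidate actions, then an objective score ranks whatever survives.

RULES = [
    lambda action: not action.get("harms_human", False),       # never choose an action flagged as harmful
    lambda action: not action.get("violates_privacy", False),  # never choose an action that leaks user data
]

def acceptable(action):
    """An action is acceptable only if every rule holds."""
    return all(rule(action) for rule in RULES)

def choose(candidates):
    """Pick the highest-utility action among those the rules permit."""
    permitted = [a for a in candidates if acceptable(a)]
    if not permitted:
        return None  # no acceptable option: escalate to a human operator
    return max(permitted, key=lambda a: a["utility"])

candidates = [
    {"name": "fast_route", "utility": 0.9, "harms_human": True},
    {"name": "safe_route", "utility": 0.7},
]
print(choose(candidates)["name"])  # -> "safe_route"
```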

It is not clear how agreed-upon AI norms should be built into a computational architecture. “This emerging field of research goes under many names including: machine morality, machine ethics, moral machines, value alignment, computational ethics, artificial morality, safe AI and friendly AI. Recent breakthroughs in machine learning and perception will enable researchers to explore bottom-up approaches—in which the AI system learns about its context and about human values—similar to the manner in which a child slowly learns which forms of behavior are safe and acceptable.”
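By contrast, a bottom-up system would not ship with such rules baked in; it would adjust its estimates of what is acceptable from experience. The toy sketch below, again a hypothetical illustration rather than any method the IEEE document prescribes, nudges an acceptability score for each behavior toward 1 or 0 as human approval and disapproval accumulate, loosely mirroring the “child slowly learns” analogy in the quote.

```python
# A minimal, hypothetical sketch of the "bottom-up" idea: instead of hard-coded
# rules, the system revises its estimate of how acceptable each behavior is
# from repeated human approval/disapproval signals.

from collections import defaultdict

class FeedbackLearner:
    def __init__(self, learning_rate=0.1):
        self.acceptability = defaultdict(lambda: 0.5)  # start every behavior at a neutral score
        self.lr = learning_rate

    def observe(self, behavior, approved):
        """Nudge the acceptability estimate toward 1 (approved) or 0 (disapproved)."""
        target = 1.0 if approved else 0.0
        self.acceptability[behavior] += self.lr * (target - self.acceptability[behavior])

    def permitted(self, behavior, threshold=0.6):
        """Allow a behavior only once its learned acceptability clears the threshold."""
        return self.acceptability[behavior] >= threshold

learner = FeedbackLearner()
for _ in range(20):
    learner.observe("interrupt_user", approved=False)
    learner.observe("wait_for_pause", approved=True)

print(learner.permitted("wait_for_pause"))   # -> True
print(learner.permitted("interrupt_user"))   # -> False
```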

Lack of values-aware leadership. “Technology leaders give innovation teams and engineers too little or no direction on what human values should be respected in the design of a system. The increased importance of artificial intelligence systems in all aspects of our wired societies further accelerates the need for value-aware leadership in artificial intelligence development.”

Lack of ownership or responsibility from tech community. “The current makeup of most organizations has clear delineations between engineering, legal and marketing arenas. Technologists feel responsible for safety issues regarding their work, but often refer larger social issues to other areas of their organization. Adherence to professional ethics is influenced by corporate values and may reflect management and corporate culture. Multidisciplinary ethics committees in engineering sciences should be generalized, and standards should be defined for how these committees operate, starting at a national level, then moving to international standards.”

Achieving a correct level of trust between humans and artificial intelligence systems. “Development of autonomous systems that are worthy of our trust is challenged due to the current lack of transparency and verifiability regarding these systems for users. A first level of transparency relates to the information conveyed to the user while an autonomous system interacts with the user. A second level has to do with the possibility to evaluate the system as a whole by a third party (e.g., regulators, society at large and post-accident investigators). Transparency and verifiability are necessary for building trust in artificial intelligence. We recommend that artificial intelligence come equipped with a module assuring some level of transparency and verifiability.”
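The guideline doesn’t say what such a module would look like. One minimal reading, offered here only as a hypothetical sketch, is an append-only decision log that captures the inputs, the action taken, and a human-readable rationale: the record shown to the user covers the first level of transparency, and the file itself gives regulators or post-accident investigators something concrete to audit for the second.

```python
# A minimal, hypothetical sketch of a "transparency module": every decision is
# appended to a log with its inputs, the chosen action, and a human-readable
# rationale, so users and third-party auditors can inspect why the system acted
# as it did. The file path and fields are illustrative assumptions.

import json
import time

class DecisionLog:
    def __init__(self, path="decisions.jsonl"):
        self.path = path

    def record(self, inputs, action, rationale):
        """Append one decision record and return it for display to the user."""
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,
            "action": action,
            "rationale": rationale,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

log = DecisionLog()
log.record(
    inputs={"question": "when is project 2 due"},
    action="auto_reply",
    rationale="matched a previously answered question with similarity 0.73",
)
```

An append-only, timestamped record is a deliberately simple design choice: it does not explain the model’s internals, but it leaves a trail that can be reviewed after the fact.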

Comments

  • Shannon Parker

    There should be full up-front disclosure when something presenting as human is actually artificial intelligence.