Ready for Artificial Intelligence? (Part 2)

Read: Ready for Artificial Intelligence? (Part 1)

We, The People

I’ll bet you can quote 10 different movies where people have to fight for the fate of humanity against machines. My all-time favorite is 2001: A Space Odyssey, with HAL’s famous line (maybe because it reminds me of how my mother would talk to me during my adolescent years): “I am, by any practical definition of the words, foolproof and incapable of error.”[i]

Reality and fiction are not the same. We are very far from having a super-intelligent computer that can take over humanity, yet people are still hesitant to adopt AI technology. In this blog, I will explore the reasons why, and describe the methodology we at Informatica use to address these concerns.

There are no ethics and standards for this evolving technology

We have Asimov’s laws of robotics, beginning with the Zeroth Law, which precedes the others: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[ii] A few rules for robots. Humans, on the other hand, have accumulated thousands of laws over the course of history. The 114th United States Congress, in its two-year tenure, enacted 329 laws (about the average for a congressional tenure over the past 20 years).[iii] Maybe AI doesn’t need all these laws, but are Asimov’s fictional laws enough? People are rightfully concerned about the social significance of the way mathematics is being used.[iv] Math is an objective tool, but the data may not be, the people who write the mathematical equations may not be, and the law hasn’t caught up with the wide use of mathematics.

Artificial Intelligence is goal-oriented, almost to a fault

“It was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody’s baby. 11% is more than enough. A human being would’ve known that.”[v]

In the movie “I, Robot”, Will Smith explains that a human would have made a decision that may not be logical, but is morally right. Humans can bring into a decision considerations and contexts that a machine cannot. While the machine is focused on its task, and is entirely dependent on humans to train it on that task, humans are adaptive and can adjust their decisions based on new and unknown information.

Dependency on Machines

Remember the film WarGames? It’s about a tech-whiz teenager who unwittingly hacks into the computer of the North American Aerospace Defense Command (NORAD) and nearly sets off World War III. Did you know it is credited with leading to the enactment, 18 months later, of NSDD-145, the first Presidential Directive on computer security?[vi]

We have reached a point where people can no longer get by in society without technology, and people tend to express the highest level of fear toward things they depend on but have no control over.[vii] That’s almost a perfect description of artificial intelligence.

We Shall Overcome

Informatica has built AI into the core of its integrated data management suite, the Intelligent Data Platform. Our AI service is called the CLAIRE™ engine, and it provides AI and machine learning-based intelligence to all of the products in the platform.

CLAIRE is built with these concerns in mind: it follows leading UX practices, with user research, focus groups, and a UX expert assigned to each of the CLAIRE development teams.

We used these practices and this expertise to build into CLAIRE’s modular architecture features that help users overcome their concerns about AI technology.

For example:

Transparency – at all times, users can see what decisions a CLAIRE module made and why. For example, when a CLAIRE module determines that a certain field is a time, it displays other data from the field and visibly marks the confidence level with which the algorithm “believes” it is a time field. The same is done for every other entity extracted from the file: names, addresses, URLs, IPs, product information, and so on.
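CLAIRE’s internals aren’t public, but the transparency idea above can be sketched in a few lines: a field-type detector that returns both its label and a confidence score, so the score can be shown to the user instead of hidden behind a silent decision. This is a minimal illustrative sketch, not CLAIRE’s actual implementation; the pattern, threshold, and function name are all assumptions for the example.

```python
import re

# Hypothetical sketch (not CLAIRE's implementation): classify a column of
# sample values as a "time" field, and report the confidence alongside the
# label so the decision can be surfaced to the user.
TIME_PATTERN = re.compile(r"^\d{1,2}:\d{2}(:\d{2})?$")

def classify_time_field(values):
    """Return (label, confidence) based on the fraction of matching samples."""
    if not values:
        return "unknown", 0.0
    matches = sum(1 for v in values if TIME_PATTERN.match(v.strip()))
    confidence = matches / len(values)
    # An assumed 50% threshold; a real system would tune this per entity type.
    label = "time" if confidence >= 0.5 else "unknown"
    return label, confidence

samples = ["09:30", "14:05:22", "7:45", "N/A"]
label, conf = classify_time_field(samples)
print(label, conf)  # the confidence is shown, not hidden
```

The point of the design is that the confidence travels with the label: the UI can then mark low-confidence guesses for the user to review rather than presenting every decision as certain.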

 

Control – at any time, users can activate the CLAIRE engine or deactivate its capabilities with a single click of a button.

 

These are just two examples of how CLAIRE takes into consideration the barriers to adopting new technology. Reach us here for more information on CLAIRE.

 

And just for a little fun, check out this video.

 

[i] http://www.imdb.com/character/ch0002900/quotes 

[ii] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

[iii] https://www.govtrack.us/congress/bills/statistics

[iv] https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815 

[v] http://www.imdb.com/title/tt0343818/quotes

[vi] https://www.nytimes.com/2016/02/21/movies/wargames-and-cybersecuritys-debt-to-a-hollywood-hack.html

[vii] https://www.theatlantic.com/technology/archive/2015/10/americans-are-more-afraid-of-robots-than-death/410929/
