Opinion

How can we control intelligent systems no one fully understands?

By Vasant Dhar
The widespread conversation about AI took a new turn in March 2016 when Microsoft launched, then quickly unplugged, Tay, its artificial intelligence chatbot. Within 24 hours, interactions on Twitter had turned the bot, modeled after a teenage girl, into a “Hitler-loving sex robot.”

This controversy, on the heels of the Feb. 14, 2016, accident involving Google’s self-driving car, has ignited a new debate over artificial intelligence. How should we design intelligent learning machines so as to minimize undesirable behavior?

While both incidents were relatively minor, they highlight a broader concern: it is very difficult to control adaptive learning machines in complex environments. The famous cybernetician Norbert Wiener warned us about this dilemma more than 50 years ago: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose we really desire and not merely a colorful imitation of it.”

Read the full article as published in TechCrunch

___
Vasant Dhar is a Professor of Information Systems.