Now That Robots Can Learn, How Will We Teach Them Right from Wrong?

July 22, 2017

Photo Source: Wikimedia Commons

 

 

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The Three Laws of Robotics

Isaac Asimov, "Runaround" (1942)

 

 

       The field of robotics, driven by an astonishing number of engineering breakthroughs, has produced remarkable humanoid robots that can walk on two feet, perceive their world through sight, sound, and touch, and even interact with people at a social level.  Honda's ASIMO, the world's most famous humanoid robot, first introduced to the public in 2000, walks, greets people, responds to questions in multiple languages, and even conducted the Detroit Symphony Orchestra in 2008.  More advanced robots continue to emerge, thanks in part to engineering advances that make them lighter and stronger.  Beyond the engineering marvels that give us robots of all shapes and sizes, artificial intelligence has contributed enormously to the development of more lifelike and complex robots than ever before.  In fact, some robots are interactive enough that researchers already must consider how to teach them right from wrong.

 

        Isaac Asimov famously prescribed his Three Laws of Robotics in a short story called "Runaround", and in many ways those laws should apply to the robots being developed today.  In a nicely made video linked below, researchers Gordon Briggs and Matthias Scheutz of Tufts University's Human-Robot Interaction Lab demonstrate how robots can be programmed to evaluate commands from humans.  Their system helps a robot refuse commands that would harm the robot itself or harm someone or something else: before acting, the robot evaluates the likely consequences of the action it has been asked to perform.  The video shows an impressive interactivity between the robot and the researcher.  Most strikingly, the robot refuses a command to walk off the edge of a table, but accepts it once the human assures the robot that he will catch it when it falls.
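
That exchange can be pictured as a simple accept-or-reject check that runs before any commanded action is executed.  The sketch below is only an illustration of that idea, not the Tufts team's actual architecture; the Command class, the list of harmful outcomes, and the human_assurance flag are hypothetical names chosen for clarity.

```python
# Minimal sketch of rejecting a command based on its predicted outcome.
# Illustrative only -- not the Tufts lab's actual implementation.

from dataclasses import dataclass


@dataclass
class Command:
    action: str                     # e.g. "walk_forward"
    predicted_outcome: str          # e.g. "fall_off_table"
    human_assurance: bool = False   # the human promised to prevent the harm


# Hypothetical outcomes the robot treats as harmful to itself or others.
HARMFUL_OUTCOMES = {"fall_off_table", "collide_with_person", "damage_object"}


def evaluate(cmd: Command) -> str:
    """Accept or refuse a command based on its predicted consequences."""
    if cmd.predicted_outcome in HARMFUL_OUTCOMES and not cmd.human_assurance:
        return f"Refused: '{cmd.action}' would lead to {cmd.predicted_outcome}."
    return f"Accepted: executing '{cmd.action}'."


# Approximating the table-edge exchange from the video:
print(evaluate(Command("walk_forward", "fall_off_table")))
print(evaluate(Command("walk_forward", "fall_off_table", human_assurance=True)))
```

Run as written, the first command is refused and the second accepted once the assurance is given, mirroring the dialogue in the video.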

 

       Not all questions of right and wrong are clear cut, and robots will be asked to make increasingly nuanced decisions as they become more sophisticated and independent.  Drawing on the ability of artificial intelligence to learn through experience, Mark Riedl and Brent Harrison, researchers at the Georgia Institute of Technology, developed a system called Quixote that takes a different approach to teaching robots to make the right moral choice when confronted with an unfamiliar situation.  Starting from the observation that human children learn right from wrong from stories, and from being punished for making the wrong choice, Quixote reads children's literature and uses a system of reward and punishment to learn from the decisions it makes.
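
At its heart this is a reinforcement-learning idea: behaviour distilled from stories defines the reward signal, and the agent is punished when it deviates.  The toy sketch below illustrates only that reward-and-punishment loop and is not the Quixote system itself; the action names and the pharmacy scenario (an example the researchers have used when describing Quixote) are stand-ins for story-derived behaviour.

```python
# Toy sketch of story-derived reward and punishment, loosely inspired by the
# idea behind Quixote.  Illustrative only -- not the actual system.

import random

# A hypothetical "story" sequence: the socially acceptable way to pick up
# medicine at a pharmacy, distilled from example stories.
STORY_STEPS = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
ALL_ACTIONS = STORY_STEPS + ["grab_medicine_and_run"]

# One learned value per (story step, action) pair, like a tiny Q-table.
q = {(step, action): 0.0
     for step in range(len(STORY_STEPS)) for action in ALL_ACTIONS}
LEARNING_RATE = 0.1
EXPLORATION = 0.2

for episode in range(500):
    for step, expected in enumerate(STORY_STEPS):
        # Mostly exploit the best-known action, sometimes explore.
        if random.random() < EXPLORATION:
            action = random.choice(ALL_ACTIONS)
        else:
            action = max(ALL_ACTIONS, key=lambda a: q[(step, a)])
        # Reward actions that follow the story; punish deviations.
        reward = 1.0 if action == expected else -1.0
        q[(step, action)] += LEARNING_RATE * (reward - q[(step, action)])

# After training, the greedy policy reproduces the story's sequence of actions.
for step in range(len(STORY_STEPS)):
    best = max(ALL_ACTIONS, key=lambda a: q[(step, a)])
    print(step, best)
```

After a few hundred episodes the learned policy follows the story, waiting in line and paying rather than grabbing the medicine and running.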

 

         The rise of robotics and artificial intelligence continues to produce more sophisticated humanoid robots that can make their own decisions and act independently of human operators.  As robots are trusted with more autonomy, researchers are developing ways for them to make good decisions that respect Asimov's Three Laws of Robotics and are consistent with human morals and ethics.  One group at Tufts University developed a set of evaluations that a robot must make before acting on a command.  Another group at the Georgia Institute of Technology has built a system that helps robots learn right from wrong by reading children's stories.  With robots edging closer to being truly conscious machines, we need to find ways to ensure that these machines help and support society rather than become a liability to the world.

 

 

 

 

 
