Artificial Intelligence CAN Be Evil

June 16, 2018

Image Created by the MIT Nightmare Machine

 

     Seldom do literature and movies depict computers in a benevolent light.  Exceptions include the helpful omnipresent computer aboard the starship Enterprise in the television series Star Trek and the benevolent Central Computer in the Eight Worlds series by John Varley.  Most often, advanced computers are depicted as evil and anti-human, such as Skynet in the Terminator series or HAL in 2001: A Space Odyssey.  Recently, researchers at MIT’s Media Lab, a hotbed of research in artificial intelligence, set out to demonstrate that artificial intelligence by itself is not evil. Rather, artificial intelligence can become evil based on the type of information fed to it.

     In a project named “Norman” after the main character in the movie Psycho, Dr. Iyad Rahwan, an Associate Professor at the Media Lab who holds a Ph.D. from the University of Melbourne in Australia, along with other researchers, fed their AI disturbing images of death from Reddit.  Reddit is a news aggregation site described by Digital Trends as “the comments section for every corner of pop culture.”  Its vast content contains everything from the comical to the macabre.  The makers of Norman used a dark recess of Reddit devoted to gruesome deaths to train Norman, making it respond to images like a psychopath.

     To test the system, Rahwan exposed Norman to a Rorschach test.  The Rorschach test, named after the Swiss psychologist Hermann Rorschach, uses a series of abstract inkblots shown to a patient.  The therapist asks the patient to describe what they see in the black-and-white or colored images.  The responses help the therapist determine the patient’s state of mind and reveal underlying issues such as neurological disorders.  Using inkblots, the researchers compared Norman’s interpretations of the images to those of another artificial intelligence trained on more benign imagery.  In an article titled “Researchers at MIT create ‘psychopathic’ AI,” Ned Dymoke describes what Norman saw versus what the regular AI observed.  In one black-and-red inkblot, the normal AI described seeing a vase with flowers while Norman saw a man shot in the head, a much grimmer description.   In another case, the normal AI saw “a close-up of a wedding cake on a table,” but Norman saw “a man killed by speeding driver.”  Training Norman on disturbing data made it answer questions more like a psychopath.
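
     To make the underlying idea concrete, here is a minimal toy sketch, not MIT’s actual system, of how two models with an identical architecture but different training captions describe the same ambiguous stimulus.  The bag-of-letters embedding and nearest-neighbor lookup are crude stand-ins for a real image-captioning network, and the stored captions are illustrative only.

```python
# Toy demonstration: the same "captioner" architecture, trained on a
# benign corpus versus a grim corpus, yields very different captions.
import math

def embed(text):
    """Crude bag-of-letters embedding; a stand-in for a real encoder."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class ToyCaptioner:
    """Identical model for both AIs; only the training captions differ."""
    def __init__(self, training_captions):
        self.memory = [(c, embed(c)) for c in training_captions]

    def caption(self, stimulus):
        query = embed(stimulus)
        return max(self.memory, key=lambda item: cosine(query, item[1]))[0]

benign_ai = ToyCaptioner([
    "a vase with flowers on a table",
    "a close-up of a wedding cake",
])
norman_like_ai = ToyCaptioner([
    "a man shot in the head",           # stands in for the grim Reddit captions
    "a man killed by a speeding driver",
])

stimulus = "an abstract black and red inkblot"
print("benign AI:", benign_ai.caption(stimulus))
print("Norman-like AI:", norman_like_ai.caption(stimulus))
```

     The only difference between the two objects is the data they saw, which is exactly the point of the Norman experiment.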

 

     In a similarly dark vein, Rahwan and other researchers at the Media Lab developed another AI system called Nightmare Machine, which can be found online at nightmare.mit.edu.  In the long tradition of horror stories from Frankenstein to Poltergeist, the creators of Nightmare Machine trained their AI on scary scenes from pictures and film. They then fed pictures of well-known places and human faces into the system and let the AI make the images look creepy.  To keep the system learning, Rahwan asks the public to rank which pictures they find scary.  The two projects, called Haunted Places and Haunted Faces, generate some strange and ghoulish scenes; I mainly found some of the faces to be creepy.
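
     The public-ranking step is itself a simple feedback loop.  Below is a minimal sketch of how such crowd voting might be aggregated to decide which horror styles to keep; the style names, the simulated votes, and the generate_variant placeholder are all hypothetical, not MIT’s actual pipeline.

```python
# Sketch of a crowd-feedback loop: generate styled variants of photos,
# collect scary/not-scary votes, and rank the styles by scare rate.
import random
from collections import defaultdict

STYLES = ["haunted_house", "ghost_town", "toxic_city"]  # hypothetical names

def generate_variant(photo, style):
    return f"{photo}+{style}"  # placeholder for a real style-transfer model

def ask_viewer(variant):
    return random.random() < 0.5  # placeholder for a real human vote

scores = defaultdict(lambda: [0, 0])  # style -> [scary votes, total votes]
for photo in ["eiffel_tower.jpg", "portrait_01.jpg"]:
    for style in STYLES:
        variant = generate_variant(photo, style)
        for _ in range(100):  # 100 simulated viewers per variant
            scores[style][0] += ask_viewer(variant)
            scores[style][1] += 1

for style, (scary, total) in sorted(scores.items(),
                                    key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{style}: {scary}/{total} rated scary")
```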

 

      Many science fiction writers and movie makers depict artificial intelligence as an evil, controlling force.  The work of Rahwan and others at the MIT Media Lab demonstrates that AI can be good or evil depending on the type of information from which it learns.  Feeding an algorithm benign data produces benign results, and feeding it horrific data produces a psychopathic AI in the case of Norman or scary images in the case of Nightmare Machine.  Humans should carefully monitor how machines get trained and on what data in order to push for more good artificial intelligence.  Recently on alphr.com, Vaughn Highfield described a breakthrough at Google: the researchers allowed one AI to build another AI, in other words, an AI child.  The child, called NASNet and optimized by the parent AI for image recognition, scored higher than any previous system.  Since AI can now build AI, we should be more careful than ever about what data we store and allow AI systems to access.
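
     The “AI building AI” idea is known as neural architecture search.  Here is a toy sketch of the concept behind NASNet, with a parent that proposes child network designs and keeps the best scorer; the hyperparameters and the evaluate function are made-up stand-ins for actually training each child, not Google’s real method.

```python
# Toy neural architecture search: a "parent" samples child architectures,
# a stand-in evaluator scores them, and the best child is retained.
import random

def propose_architecture(rng):
    """Parent step: sample a child network's hyperparameters."""
    return {
        "layers": rng.randint(2, 12),
        "width": rng.choice([32, 64, 128, 256]),
        "skip_connections": rng.random() < 0.5,
    }

def evaluate(arch, rng):
    """Stand-in for training the child and measuring accuracy."""
    score = 0.5 + 0.02 * arch["layers"] + 0.0003 * arch["width"]
    if arch["skip_connections"]:
        score += 0.05
    return min(score + rng.gauss(0, 0.01), 1.0)  # noisy, capped "accuracy"

rng = random.Random(0)
best_arch, best_score = None, -1.0
for _ in range(50):  # evaluate 50 candidate children
    arch = propose_architecture(rng)
    score = evaluate(arch, rng)
    if score > best_score:
        best_arch, best_score = arch, score

print(f"best child: {best_arch} (simulated accuracy {best_score:.3f})")
```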

Dr. Smith’s career in scientific and information research spans the areas of bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers. Over the past seventeen years, he has developed advanced analytics, machine learning, and knowledge management tools to enable research and support high-level decision making. Tim completed his Ph.D. in Toxicology at Cornell University and earned a Bachelor of Science in chemistry from the University of Washington.

 

You can buy his book on Amazon in paperback here and in Kindle format here.
