Stopping Evil Robots in a Nutshell

August 30, 2017

Photo Source: Wikimedia Commons

 

     In 2015, an open letter was presented by the Future of Life Institute (FLI).  The FLI is an Illuminati-like science club with brain stars like Stephen Hawking and Elon Musk on its scientific advisory board.  As often happens when very brainy and wealthy people form a club, they then adopt a mission that includes not only themselves but the entirety of humanity.  It’s important to note, too, that this open letter has been signed by some 8,000 people to date, with, of course, a full list of super science and industry stars posted on the FLI website.  You can also sign the letter on their website, though there are no guarantees that you too will be posted on the list.  Before we get into the open letter and I give you the breeziest wrap-up ever on it (the letter is 12 pages), I would like to give you the FLI mission statement verbatim: “To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges” (futureoflife.org).

 

Research Priorities for Robust and Beneficial Artificial Intelligence

 

       The above heading was the title of the open letter.  The authors were officially Stuart Russell, Daniel Dewey, and Max Tegmark; however, the contents of the letter were derived from input and discussion from several leading experts and scholars in both academia and industry.  Additionally, legal experts, security experts, ethicists, economists, philosophers, psychologists, and really the whole softer social science world were queried alongside the engineers and mathematicians.  The primary goal was to come up with a list of research priorities to help usher in AI without instigating a world-ending apocalypse (or at the very least turning life on earth into a depressing, meaningless trip from point A to point B).  Below is a nearly faithful rundown, except that I do not separate short-term and long-term research priorities, and I collapse the Verification and Validity sections, as each is listed as both a short- and a long-term priority.

 

Economic Impact

 

       The truth is that AI is coming, it will increasingly run a large part of our society, and this will shake things up.  I believe in earnest that the drafters of this letter and the FLI do care and are worried, but it’s hard to deal with sentences like this: “History provides many examples of subpopulations not needing to work for economic security, ranging from aristocrats in antiquity to many present-day citizens of Qatar.”  They envision using all the gobs of money we will make from automating everything to pay the salaries of people who no longer work.  They admit that AI will create mass unemployment.  And they also cede that people feel isolated and depressed when unemployed.  However, they suggest that “unemployment is not the same as leisure.”  Yes, perhaps, but most humans thrive on having a purpose, and humans do not build self-esteem unless they actually accomplish things.  The other key point in this section is that we need to show just how much money can be made from AI in the very near future, so that governments, people, and companies will see it as a very good thing for society and pour money into AI research.  Interesting move.  While we all turn into amorphous leisure blobs, everyone over at FLI will be very busy bees with tons of funding.

 

Law and Ethics Research

 

      This section was pretty basic.  We need to figure out who is to blame when a rogue robot or an AI program that operates completely independently of humans does something very bad.  Secondly, we need to figure out how to make our future, fully automated legal system not horrible.  Human judges see the law as flexible, dependent on precedent and the surrounding details of the specific case; the authors are not sure how a robot’s inner guide, or moral compass, or decision maker will compare to that of a human judge.  Oh, and there is a spooky paragraph in this section on “Can lethal autonomous weapons be made to comply with humanitarian law?”  They are not so sure, and they admit these weapons could accidentally start wars or be used by terrorists.  And lastly, they pay lip service to privacy.

 

Computer Science: Verification and Validity

 

      They spend a lot of time on this.  Essentially: how can you tell that you have built your autonomous system correctly (verification), and, more importantly, how can you know whether a well-built system will actually behave as intended once it is out in the world, given that there is no way of knowing this in a static setting (validity)?

 

Security

 

     Main points: once we build autonomous machines, will we be able to control them?  Will they be hacked and used against us?  Will the machines themselves turn on us?  Or will the good outweigh the bad, with AI machines able to detect and defend us from hackers and other outside threats?

 

Control

 

        Control is the final subject heading in the open letter; however, the issue of control is threaded throughout the piece, from economics to law to security.  Will we be able to control autonomous intelligent machines once they have been let loose (so to speak)?  The letter’s conclusion contains this sentence: “It is the duty of AI researchers to ensure that the future impact is beneficial.”  And therein lies my problem with this letter and, to some extent, FLI: yes, it is almost always good when intelligent people gather and form mastermind groups, but real money and real industry, academic, and government attention are being given to this letter and this group, lending them an extraordinary amount of power.  I believe it is folly for them, and folly for us, to believe that it is the duty of AI researchers to ensure that the future impact is beneficial.  It is our duty.

 

If you are interested, the letter in its entirety can be read here.

 

Post Script: After the letter was presented in 2015, FLI started a yearly conference, called Beneficial AI, based on the contents of the letter.  Below is a video from the 2017 conference.  Warning: the video is dull and difficult to follow.  However, if you can plow through it, you can gain some insights into how scientists are thinking about AI.

 

 

 
