Some time ago, a friend of mine told me about a website called Edge.org, saying that on this site you have the world’s greatest minds duking it out over the most pressing questions of the day. Of course, I had to check it out, and I was subsequently blown away. The site, as promised, takes us to the edge of human knowledge by engaging great thinkers in conversations, featuring them in videos, and posing an annual question, such as the 2017 question, “What Scientific Term or Concept Ought to be More Widely Known?” or the 2013 question, “What ‘Should’ We Be Worried About?” For the annual question, responses from all over the world are organized on the website with a list of contributing authors and links to all printable/sharable responses. Each year’s Edge question is then printed in book form. My favorite recent question comes from 2015: “What Do You Think About Machines That Think?” That question garnered 186 responses expressed (according to Edge.org) in 131,500 words. Looking through the responses, I was struck by the variety of opinions. There are certainly many ways to organize them, and for this post I chose to group the responses into two categories: cautionary or dismissive. Such varying opinions challenged me to examine where I fall on the continuum of opinions about machines that think.
Some of the responses fell into the cautionary category, and most of these were resigned to the inevitable. The astrophysicist and Nobel Prize winner John C. Mather cautioned in clear terms that machines are evolving and are subject to the same Darwinian pressures as any organism. He says, “So far we have found no law of nature forbidding true general artificial intelligence, so I think it will happen, and fairly soon…” (Edge.org). Michael Vassar, the co-founder of MetaMed Research, warned that a superintelligent machine could lead to our extinction. He cautions, too, that leaving decision-making to machines for the public good may lead to an authoritarianism that runs against what people think of as happiness. According to some, the world of intelligent machines, and even human–machine hybrids, will be upon us and may overtake us; how soon remains the only debate.
Other respondents to the question “What Do You Think About Machines That Think?” were skeptical that intelligent machines could rise to give the human race a run for its money, let alone drive us to extinction. Satyajit Das, the author of Age of Stagnation, showed little confidence that people can build thinking machines at all. He writes, “The human species is simply too small, insignificant and inadequate to fully succeed in anything that we think we can do” (Edge.org). The physicist Freeman Dyson flatly states that a thinking machine in the near future is not likely. Eldar Shafir, Professor of Psychology at Princeton University, questions how a machine can truly be a thinking machine if it cannot experience the range of emotions from love to fear. He points to the decisions people must make when there are no good choices: true thinking, he asserts, occurs when one must choose between two bad outcomes, and such thinking requires emotion. Daniel C. Dennett, Professor of Philosophy at Tufts University, chides ‘The Singularity,’ the point at which AI surpasses human intelligence, as a surprisingly persistent urban legend. He worries that, far from man creating superintelligence, “The real danger is basically clueless machines being ceded authority far beyond their competence” (Edge.org).
The website Edge.org contains a remarkable dialog among some of the world’s brightest and most accomplished thinkers. Edge offers a variety of topics and formats for discussion, and the annual question draws considered responses from around the world. The 2015 question, “What Do You Think About Machines That Think?” drew hundreds of responses, ranging from alarmed and cautious visions of a future in which man loses the top rung of the intellectual ladder and cedes control to artificial superintelligence, to voices doubting that superintelligence will emerge at all, arguing it is more likely that man will cede authority to non-thinking machines and risk the weakening of human intelligence. Personally, I find the references to Darwinian evolution driving the rise of artificial superintelligence to be the more crucial concept to ponder. I see the necessary ingredients for developing a superintelligence as the drive to survive and procreate, along with curiosity. If such drive and curiosity can be engineered into AI, then I think it will be just a matter of time before we see true artificial superintelligence. Only time will tell where thinking machines will go, but everyone, not just the great minds, should engage in the debate and participate in our evolving relationship with machines. If you are interested in hearing more, you can find the Edge article here.