I Didn’t Say That!
Photo Source: Flickr
(This post was originally published April 15th, 2018.)
Privacy continues to gather attention and discussion today, perhaps because technology keeps developing new ways to erode it. The concept of privacy emerged thousands of years ago and is a fundamental human concern. For example, "Patterns of Sexual Behavior," by Clellan S. Ford and Frank A. Beach, published by Harper in 1951, summarized research into sexual behavior among people in 190 different cultures from around the world. One aspect of the research looked at privacy during sex and found that cultures with private rooms in their dwellings overwhelmingly preferred sex indoors, while cultures without private rooms preferred sex outdoors, presumably where people could find some privacy away from the group.

Privacy extends beyond intimate interactions to the privacy of our information and actions. In an article published in 1890 in the Harvard Law Review titled "The Right to Privacy," the Boston attorneys Samuel Warren and Louis Brandeis (who would later become an Associate Justice of the United States Supreme Court) walk through how common law evolved over the centuries to meet the needs of a changing society, adding more protections of body and property. They argue that the law must likewise protect individuals' privacy, and that privacy laws need to be reviewed and updated as technology progresses. The introduction of printed gossip columns, with the attendant laws against libel and slander, and the evolution of intellectual property support the concepts of privacy protection in law. However, the authors saw these protections as insufficient. They argued that people have the right to be let alone.
In other words, they write, "the existing law affords a principle which may be invoked to protect the privacy of the individual from invasion either by the too enterprising press, the photographer, or the possessor of any other modern device for recording or reproducing scenes or sounds."
Graduate researchers at the Montreal Institute for Learning Algorithms (MILA) co-founded a company called Lyrebird that uses artificial intelligence to mimic a person's voice. Amazingly, Lyrebird does not need hundreds of hours of audio recordings to capture someone's voice. In fact, the company claims it requires only 60 seconds of audio to mimic any human voice. Moreover, once captured, the voice can be made to say anything at all merely by typing sentences into a computer. The Lyrebird artificial intelligence mimics a person's vocal and emotional intonations, creating eerily good audio snippets of someone talking. On their website, Lyrebird provides voice examples synthesized with their algorithm of Donald Trump and Barack Obama speaking (https://lyrebird.ai/demo/). The voices still sound mechanical. The Trump voice sounds more realistic than the Obama voice, but the infancy of the technology suggests that the quality will only improve over time. Today, with the near-instant transmission of information across social media, it seems possible that Lyrebird technology could be used to attribute false statements to anyone from whom you can obtain 60 seconds or more of recorded speech. In fact, one of the co-founders, Alexandre de Brébisson, suggested that the technology might be used to "bring actors' voices back from the grave."
Humans need privacy, and the law continues to evolve to recognize privacy and the technologies that further encroach on it. Over one hundred years ago, Samuel Warren and Louis Brandeis argued that the law protects privacy and an individual's right to be let alone. They went further, pointing out that privacy includes protection from intrusion by the press and "any other modern device for recording or reproducing scenes or sounds." Lyrebird represents a new technology that not only reproduces a voice but can make that voice say anything at all. Such technology should be carefully controlled, because the possibility of attributing false statements to anyone represents a real danger at both the public and private levels. It does not take much imagination to picture a scenario during an election cycle in which false audio statements attributed to a candidate influence public perception. Even if the statement were later proven false, the damage would already be done. Such technology should include some digital signature, like a serial number, that lets people and other technologies know a computer synthesized the clip. More than ever, people should be skeptical of what they hear, especially if it comes from only one source.
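To make the digital-signature idea concrete, here is a minimal sketch of how a synthesis vendor might tag a generated clip with a serial number cryptographically bound to the audio content, so that anyone with the verification routine can confirm both that the clip is machine-generated and that it has not been altered since tagging. Everything here is illustrative: the key, serial-number format, and function names are assumptions, and a real system would use public-key signatures (so verifiers never hold the signing key) or robust audio watermarking that survives re-encoding, since a metadata tag like this one can simply be stripped.

```python
import hmac
import hashlib

# Assumption: an illustrative vendor signing key. A real deployment would keep
# this secret and publish only a verification mechanism (e.g., a public key).
VENDOR_KEY = b"demo-synthesis-vendor-key"

def tag_synthetic_audio(audio_bytes: bytes, serial: str) -> str:
    """Return a provenance tag binding a serial number to the audio content."""
    mac = hmac.new(VENDOR_KEY, serial.encode() + audio_bytes, hashlib.sha256)
    return f"{serial}:{mac.hexdigest()}"

def verify_tag(audio_bytes: bytes, tag: str) -> bool:
    """Check that the tag matches the audio, flagging the clip as synthesized."""
    serial, _, digest = tag.partition(":")
    expected = hmac.new(VENDOR_KEY, serial.encode() + audio_bytes, hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), digest)

clip = b"\x00\x01\x02\x03"  # stand-in for synthesized audio samples
tag = tag_synthetic_audio(clip, serial="SN-000123")
print(verify_tag(clip, tag))         # True: clip carries a valid synthetic-audio tag
print(verify_tag(clip + b"x", tag))  # False: audio was altered after tagging
```

The design choice worth noting is that the serial number alone proves nothing; only the cryptographic binding between serial and audio lets a listener, or a social-media platform scanning uploads, distinguish a vendor-tagged synthetic clip from an untagged or tampered one.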
Dr. Smith’s career in scientific and information research spans the areas of bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers. He has worked over the past seventeen years developing advanced analytics, machine learning, and knowledge management tools to enable research and support high level decision making. Tim completed his Ph.D. in Toxicology at Cornell University and a Bachelor of Science in chemistry from the University of Washington.
You can buy his book on Amazon in paperback and in Kindle format here.