Deepfake, the Disappearing Truth
Photo Source: MaxPixel
(This post was originally published on August 29th, 2018.)
The pervasive availability of continually improving computer-based tools for the alteration of photos, video, and sound provides unprecedented potential for the manipulation of the truth. Almost from the beginning of photography, which Nicéphore Niépce pioneered in the 1820s, photographers learned to manipulate their images through techniques such as retouching, piecing together several pictures, double exposure, and airbrushing. Famously, an iconic photograph of Abraham Lincoln from the 1860s that formed the basis of his image on the first five-dollar bill consisted of not one but two photographs put together: Lincoln’s head placed on the body of another politician of the era, John C. Calhoun (Hany Farid, “Photo Tampering Throughout History”). History contains many examples of images manipulated to change the message or influence the viewer’s impression. Stalin routinely had his enemies airbrushed out of photographs. In another case, the cover of The Economist from June 2010, depicting a lone President Obama inspecting the damage from the BP oil spill in the Gulf of Mexico with an oil platform looming in the background, turned out to have been manipulated for dramatic effect. It was later revealed that the original photograph showed two other officials standing next to the president behind police tape near the shore, some steps from the water.
Advances in artificial intelligence and improved computing power today make much more than photographic manipulation possible. Such advances spawned a term that fuses “deep learning” and “fake” into one word: deepfake. Deepfake refers to a technique whereby a computer learns from video all the details of a person’s face, body, and voice and can transpose those details onto a video of someone else. In essence, deepfake computing can produce an artificial video of a person saying and doing things they never did. The author Nick Gillespie details on Reason.com an application available to the public that transposes one person’s face onto another’s in a video. The article describes how the software allows a user to produce a modified counterfeit video without having to be a computer expert. Such counterfeiting and manipulation still does not look pixel perfect; however, in another computing breakthrough, researchers at the University of California, Berkeley have taken video manipulation beyond transposing just a face. In their article titled “Everybody Dance Now,” pre-published on arXiv, Caroline Chan and others describe an algorithm they developed that transposes dance moves from a source video of an expert onto the body of someone else in another video (see the video example below from the Berkeley Engineering school). Essentially, the program, described as “do as I do” motion transfer, can make an amateur dancer look like a pro. Of course, the implications go far beyond dancing: the same approach could manipulate other types of video to create artificial footage of celebrities, officials, or anyone else.
The manipulation of photographic images to improve or alter the picture dates back to the very beginning of photography, with early examples such as the transposition of Abraham Lincoln’s head onto the body of another politician striking a good pose. Such manipulation may smooth over facial or body flaws for fashion or advertising campaigns, or make an image more impactful, such as the cover photo of the lone president contemplating the BP disaster. Recent advances in artificial intelligence have significantly extended our ability to manipulate not only static images but also video, a technique termed deepfake. Deepfake allows the transposition of the face and actions of a person in one video onto a person in a different video. Such manipulations can make videos that appear to show someone performing acts that they never did. At the lightest end of the spectrum, one can imagine funny videos of friends doing something they could never do, such as dancing like a professional. However, the darker implications involve manipulating videos of politicians or celebrities to show them performing counterfeit acts or saying things they never said, with negative consequences for their careers or public perception. Such manipulation can have lasting effects even if later proven false. In an impressive set of studies by researchers at Victoria University in New Zealand, detailed in “When Photographs Create False Memories,” the authors describe how, after being shown false photos depicting a hot air balloon ride in which they never participated, fifty percent of the test subjects came to recall the hot air balloon ride as if it had happened. The power of images and our ability to manipulate them will drive more skepticism of the media, but it should also drive a much stronger emphasis on ethics in every profession and learning institution to push back against the temptation to manipulate images and video for unscrupulous ends.
Dr. Smith’s career in scientific and information research spans the areas of bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers. Over the past seventeen years, he has developed advanced analytics, machine learning, and knowledge management tools to enable research and support high-level decision making. Tim completed his Ph.D. in toxicology at Cornell University and a Bachelor of Science in chemistry at the University of Washington.
You can buy his book on Amazon in paperback and in Kindle format here.