Don't Trust Her Face—AI Crime
Photo Source: Pixabay
Artificial intelligence has powerfully enhanced law enforcement's ability to combat crime, providing new tools to identify criminals through facial recognition, thwart human trafficking, and detect credit card fraud in near real time. At the other end of the spectrum, criminals have also found many innovative applications of artificial intelligence to further their unlawful ends. Recognizing the variety and danger of illegal activities made possible by AI, a group of concerned people, including law enforcement, government, military, academic, and private sector representatives, set out to identify actual and potential illicit uses of AI. They published their work in the open-access journal Crime Science under the title "AI-Enabled Future Crime" in August 2020.
The authors of "AI-Enabled Future Crime," Caldwell, Andrews, Tanay, and Griffin, identified twenty different categories of AI-enabled crime and ranked each by its risk of harm, defined by the following criteria: "expected victim harm, criminal profit, criminal achievability and difficulty of defeat." The categories ranged from low-risk threats such as forgery and burglar robots to medium-risk threats, including tricking facial recognition systems and autonomous attack drones. The authors found that the three highest-risk threats are as follows:
1. Audio/Video impersonation
2. Driverless vehicles as weapons
3. Tailored phishing
These categories scored highly for victim harm, criminal profit, achievability, and difficulty to defeat. Each type represents a threat to society and a new challenge to law enforcement.
Audio/video impersonation has reached higher levels of believability in recent years while also becoming cheaper and more accessible. The high fidelity and low cost of AI-driven impersonation mean that criminals or terrorists can now release false content that, at first glance, can fool an unsuspecting audience. For example, an AI-driven audio or video impersonation of a politician saying something inflammatory just before an election could irrevocably influence the outcome. Even if the fake statement is quickly refuted, people find it hard to unsee or un-hear disturbing statements. More concerning still, audio/video impersonation poses the greatest threat to society because it could undermine people's trust in media and seriously damage the way we communicate.
The researchers suggest that weaponized autonomous vehicles pose the second-highest risk to the public. Much press has followed the remarkable advances in driverless cars. Artificial intelligence, advanced sensors, and engineering now allow for the hands-free driving available today in Tesla electric automobiles, and the development of fully driverless cars by Waymo (an Alphabet subsidiary), Honda, and Argo AI (Ford and VW) continues at a rapid pace (analyticsinsight.net). The safety and convenience benefits of driverless cars appear self-evident, but the long, grim history of using cars to deliver bombs, traffic drugs, and move weapons has a potential new chapter with autonomous vehicles. The authors of "AI-Enabled Future Crime" suggest that autonomous vehicles make it possible for even solo actors to carry out multiple attacks without the overhead of recruiting multiple drivers. Moreover, criminal activity such as trafficking with autonomous vehicles creates distance between the criminal and the crime. Autonomous vehicle makers need to consider such scenarios and devise safety systems that would make these applications very difficult.
Finally, the last high-risk category for the illegal use of artificial intelligence is an activity called tailored phishing. Tailored phishing refers to the con game whereby criminals impersonate a trusted authority, such as a bank, government agency, healthcare provider, or other business, to trick people into giving away sensitive information. Such information includes passwords, social security numbers, or health records that a criminal can use to steal people's money or identity. Just as Netflix uses customer data and artificial intelligence to build powerful tools for recommending movies and TV shows, artificial intelligence can use a person's online activity and search history to build more personal and convincing scams. Now more than ever, people should look with great skepticism at any email, phone call, or text that asks for personal information.
The benefits of artificial intelligence increasingly touch every dimension of our lives, from greater convenience and security to safer cars and cheaper shipping. Law enforcement has benefited from artificial intelligence through better facial recognition to identify criminals, predict criminal activity, and thwart human trafficking. However, artificial intelligence has also opened up new possibilities for villainous actors to disrupt, terrorize, and steal from the public. In an attempt to understand these new threats, a group of concerned individuals from many sectors, including security, government, and private industry, performed a multi-stage analysis of the new crimes criminals can commit with the help of artificial intelligence. The highest-risk threats described above endanger our safety and wellbeing, but the use of technology to fabricate statements and video poses an even more serious threat to our relationship with the truth. If we cannot trust what gets broadcast as news, we will lose trust in our authorities and even in each other. Be skeptical of outlandish statements and videos, and double-check their sources before forwarding them to others.
Dr. Smith’s career in scientific and information research spans bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers and has spent the past seventeen years developing advanced analytics, machine learning, and knowledge management tools to enable research and support high-level decision making. Tim completed his Ph.D. in Toxicology at Cornell University and earned a Bachelor of Science in Chemistry from the University of Washington.
You can buy his book on Amazon in paperback and in kindle format here.