How AI Becomes Biased
Bias hurts when it negatively affects your life. Negative bias in hiring, lending, school admissions, medical diagnosis, and the like robs people of their dignity and of a feeling of equity in the world. Research into human bias over the past forty years has revealed a variety of biases that people exhibit without even thinking about it.

One type, called availability bias, was described by Amos Tversky and the Nobel Prize-winning economist Daniel Kahneman in research first published in 1973. It names a mental shortcut people use when faced with a decision: they judge a situation by how easily they can recall a similar one. One example in their publication involves predicting whether a couple will divorce. If the couple in question looks or acts like another couple you remember who divorced, you will judge a divorce to be more likely; if no such memory comes easily to mind, you will not predict divorce. In other words, we tend to judge based on how easily we remember a similar situation, which may not be genuinely relevant at all.

Another type, confirmation bias, refers to people's tendency to select evidence that supports the beliefs they already hold. For this reason, the discussion about fake news remains such a vital topic. The tendency of people to believe and share information that supports their beliefs, even in the face of conflicting information, sits at the heart of social media's influence on public opinion.
Hunt Allcott and Matthew Gentzkow, professors of economics at New York University and Stanford University respectively, published an article titled "Social Media and Fake News in the 2016 Election." In their report, they note how groups will share stories that support their beliefs, creating an information bubble, or echo chamber. Social media reinforces the echo chamber by surfacing similar or supporting information among friends and like-minded people.
Computers often have a reputation for being cold calculators, devoid of bias. In many respects, the reputation fits: when a computer performs a calculation, it produces the same result every time, without variation. However, computers used for artificial intelligence can still produce biased results. Anuranjita Tewary, the Chief Data Officer at Mint and the founder of the information technology company Level Up Analytics, found when she worked at the social networking company LinkedIn that its algorithm displayed high-salary jobs much more frequently to men than to women. She determined that the developers of the algorithm and most of the initial users were men, which trained the program to favor men.

A key element of artificial intelligence is that the computer needs to learn, and it learns much like people do: by repetition. For example, to train a facial recognition program, the computer needs to analyze tens of thousands, even millions, of faces to learn how to recognize a face. Here is where bias can creep into an artificial intelligence program. If the faces used for training do not represent the population, the algorithm will show a bias in which faces it can recognize. Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology Media Lab, along with Timnit Gebru of Microsoft, analyzed several facial recognition programs from companies such as IBM, Microsoft, and Face++ to determine whether the systems showed bias. She found that all of the programs showed bias in race and gender classification. Moreover, the algorithms failed most often when attempting to classify darker-skinned females (up to a 34% failure rate). They performed best on light-skinned males, which correlates with the high proportion of light-skinned male face images in the training sets.
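The mechanism by which a skewed training set produces group-dependent failures can be shown with a deliberately contrived toy, not any real face recognition system. Here a simple nearest-centroid classifier learns from made-up two-dimensional "features" dominated by group A; for the underrepresented group B, the same label happens to sit in a different region of feature space, so the learned boundaries fail for that group:

```python
# Toy sketch (hypothetical data, not a real face recognition system):
# a nearest-centroid classifier trained on data dominated by group A.

def centroid(points):
    """Average of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Made-up 2-D "face features", keyed by label. Group A dominates.
train = {0: [], 1: []}
train[0] += [(0.0, 0.0)] * 98   # group A, label 0
train[1] += [(1.0, 0.0)] * 98   # group A, label 1
train[0] += [(1.0, 2.0)] * 2    # group B, label 0 (underrepresented)
train[1] += [(0.0, 2.0)] * 2    # group B, label 1 (underrepresented)

# "Training" = computing one centroid per label from the pooled data.
centroids = {label: centroid(pts) for label, pts in train.items()}

def predict(x):
    """Assign the label whose centroid lies closest to x."""
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Evaluate one held-out point per (group, label) pair.
tests = {"A": [((0.0, 0.0), 0), ((1.0, 0.0), 1)],
         "B": [((1.0, 2.0), 0), ((0.0, 2.0), 1)]}
for group, cases in tests.items():
    correct = sum(predict(x) == y for x, y in cases)
    print(f"group {group}: {correct}/{len(cases)} correct")
# → group A: 2/2 correct
# → group B: 0/2 correct
```

Because group A contributes 98% of the training examples, the centroids land almost exactly on group A's clusters, and every group B test point is misclassified even though group B's examples appeared in the training set.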
Such results should make any group, from businesses to government and academic institutions, take a very hard look both at the data selected to train an artificial intelligence system and at the results it produces, in order to combat bias.
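That kind of hard look at the results can be as simple as breaking error rates out by subgroup instead of reporting one overall accuracy number, which is essentially what the MIT Media Lab study did for skin type and gender. A minimal sketch, with entirely made-up predictions:

```python
# Minimal disaggregated audit: error rates per subgroup rather than
# one overall accuracy figure. All records below are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented predictions from an imaginary gender classifier.
records = [
    ("lighter-skinned male",  "male",   "male"),
    ("lighter-skinned male",  "male",   "male"),
    ("darker-skinned female", "male",   "female"),  # misclassified
    ("darker-skinned female", "female", "female"),
]
for group, rate in error_rates_by_group(records).items():
    print(f"{group}: {rate:.0%} error rate")
# → lighter-skinned male: 0% error rate
# → darker-skinned female: 50% error rate
```

An overall accuracy of 75% would hide the fact that all of the errors fall on one subgroup; the disaggregated view makes the disparity visible.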
Research into human thinking has uncovered a number of biases, including availability bias and confirmation bias. Such biases can lead to inappropriate conclusions and inequitable decisions. Although computers often hold a reputation for being cold and unbiased, research demonstrates that artificial intelligence systems, too, can become biased unless great care goes into selecting the data that trains the system and into analyzing the results to determine whether bias has worked its way in. Microsoft learned the hard way that uneven human input can push an algorithm in a very bad direction when it released its Twitter-based chatbot, Tay. Less than twenty-four hours after launch, Tay had to be taken down because it had learned from targeted tweets to become racist and misogynistic. Properly designing an AI and feeding it the right information remains critical to reducing bias and making AI that works for everyone.
Dr. Smith’s career in scientific and information research spans bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers, and for the past seventeen years he has developed advanced analytics, machine learning, and knowledge management tools to enable research and support high-level decision making. Tim completed his Ph.D. in Toxicology at Cornell University and a Bachelor of Science in Chemistry at the University of Washington.
You can buy his book on Amazon in paperback here and in Kindle format here.