AI risks bias, unconscious or otherwise…
Imagine being a native English speaker with two degrees in history and politics, both obtained in English, only to be told by a machine that you cannot speak English well enough to stay in Australia.
That is exactly what happened to Louise Kennedy, a native English speaker who has been working in Australia as an equine vet on a skilled worker visa for the past two years. While I am not qualified to judge the case itself, I do know this: if we think AI is objective, we could not be further from the truth.
The reality of any system that learns is that its output is inevitably shaped by the data it receives and the individuals who programme it. Artificial-intelligence-based machines are therefore inherently learning human biases.
The problem is that, as the technology spreads to critical areas of our lives such as medicine, law and finance, the bias is not only transferred from humans to machines; the machine can amplify the bias to an even greater degree than was present in the dataset it was fed in the first instance.
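This amplification effect can be made concrete with a toy sketch (the data and the "model" here are entirely hypothetical). A classifier trained to maximise accuracy on mildly skewed historical data can learn to predict each group's majority outcome every time, turning a statistical lean in the data into an absolute rule in the decisions:

```python
# Hypothetical training data: (group, outcome) pairs. The historical
# skew is mild: group "A" was approved 60% of the time, group "B" 40%.
data = (
    [("A", 1)] * 60 + [("A", 0)] * 40
    + [("B", 1)] * 40 + [("B", 0)] * 60
)

def train(rows):
    """A naive accuracy-maximising 'model': predict each group's
    majority outcome. Real models are subtler, but the dynamic is similar."""
    approvals, counts = {}, {}
    for group, outcome in rows:
        approvals[group] = approvals.get(group, 0) + outcome
        counts[group] = counts.get(group, 0) + 1
    return {g: int(approvals[g] / counts[g] >= 0.5) for g in counts}

model = train(data)
print(model)  # {'A': 1, 'B': 0} - a 60/40 skew becomes a 100/0 rule
```

The 60/40 split in the data becomes a 100/0 split in the model's decisions: the bias in the output is stronger than the bias in the input.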
This is particularly true of interaction bias. A good example is Microsoft's Tay, a Twitter-based chatbot designed to learn from its interactions with users. Unfortunately, Tay was targeted by a particular user community that taught it to be an aggressive, racist misogynist.
Luckily, Tay only lasted 24 hours, but it is a lesson for everyone: systems learn from the biases around them and reflect the biases of the people who train them.
Another example, from only a couple of years ago, is Google's photo app, which applies automatic labels to pictures in digital photo albums and mistakenly labelled images of black people as gorillas. Google made a public apology; the error was unintentional, but so is unconscious bias, and that does not detract from the damage it can cause.
If a system is trained by a largely white, male team, it will have a harder time recognising non-white faces, albeit unintentionally.
Can we neutralize this phenomenon?
Can you correct software where bias is essentially baked into the original data, or are we set for a future in which AI makes biased decisions in many aspects of our lives, just faster and more efficiently than humans do?
In the US, the Partnership on Artificial Intelligence to Benefit People and Society attempts to address issues of fairness, ethics and inclusivity within AI. But ethics are also culturally driven and complicated to unravel.
Some people are calling for an AI watchdog to be established because of the complexity that bias can create, in the same way that watchdogs currently exist for people who are discriminated against.
What we do know is that AI needs to work in conjunction with human beings who understand bias, even if bias itself is structured, ingrained and baked into our world.
In addition, many of the most powerful emerging machine-learning techniques are so complex and opaque in their workings that careful examination may defy us. It is no secret that most people in the sector are white men, and unconscious bias is one of the primary drivers of this disparity, which has led many of Silicon Valley's leading tech companies to introduce unconscious-bias training for their employees. But just as tech companies are educating employees about their own unconscious biases, they also need to educate them about the biases in the models they are building.
Companies should be required to explicitly test machine-learning models for discriminatory biases and, moreover, to publish their results. Any useful methods and datasets for performing such tests need to be shared and reviewed before we become immune to bias in AI and accept it as a 'reality'.
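One widely used test of this kind is the disparate impact ratio: compare the rate at which a model gives favourable outcomes to each protected group. The sketch below uses invented predictions and implements the comparison directly; the 0.8 threshold is the "four-fifths rule" used by the US EEOC as a rough flag for adverse impact:

```python
# Hypothetical model predictions (1 = favourable outcome), keyed by group.
predictions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% positive rate
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # 30% positive rate
}

def disparate_impact(preds):
    """Return (ratio, rates): the lowest group's positive rate divided
    by the highest group's, plus the per-group rates themselves."""
    rates = {g: sum(p) / len(p) for g, p in preds.items()}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact(predictions)
print(f"positive rates: {rates}, disparate impact ratio: {ratio:.2f}")
print("FLAG for review" if ratio < 0.8 else "PASS")  # flags at 0.38
```

A single number like this is crude (it says nothing about why the rates differ), but publishing even crude metrics would make biased models visible and comparable.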
Diversity should be incorporated at every stage of the AI process, from design and deployment to questioning its impact on decision-making. Research has shown that more diverse teams are more efficient at problem-solving, regardless of cumulative IQ. Artificial intelligence is at an inflection point: its development and application could bring unprecedented benefits for global challenges such as climate change and food insecurity, but if not carefully managed it could lead to an even more discriminatory society. It is no surprise that it was a black female student who set up the 'Algorithmic Justice League' to promote inclusive coding.