Artificial Intelligence On Trial

AI is used in legal trials... should it be given the same treatment?

Legal trials historically rely on a long sequence of steps before a conclusion is reached. But for many cases heard in the US, it’s not just juries, judges and magistrates who contribute to the final decision. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an artificially intelligent tool used as part of the sentencing process. It uses 137 different features to predict whether a defendant will reoffend, and therefore influences the type and length of punishment. It would make sense to assume that this software had undergone strict trials itself. Worryingly, it wasn’t until this year that a paper published in Science Advances revealed that humans were better than the algorithm at predicting recidivism. Why has so much trust been placed in Artificial Intelligence?

A misguided COMPAS

Admittedly, it took a pool of 400 humans to match up to COMPAS. Even so, those humans used seven features as opposed to 137, and were two per cent more accurate. A 2016 ProPublica investigation also found that COMPAS was twice as likely to wrongly predict that a black defendant would reoffend as it was for a white defendant. These combined findings raise the question of why we have placed so much trust in smart software, and why it’s so easy to fall into the trap of accepting its outcomes.
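COMPAS’s actual model and feature set are proprietary, so the sketch below is only a rough illustration of how a feature-based risk score works in general: a classifier trained on defendant features outputs a probability of reoffending, which is then bucketed into a risk level. Every feature, number and dataset here is invented purely for illustration.

```python
# Hypothetical sketch of a feature-based recidivism risk model.
# COMPAS's real model and features are proprietary; everything below is invented
# to illustrate the general idea of scoring defendants from tabular features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: 1,000 "defendants", each described by seven numeric features
# (real tools like COMPAS reportedly use 137).
n = 1000
X = rng.normal(size=(n, 7))
# Synthetic "reoffended" labels, loosely correlated with the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model outputs a probability of reoffending, which a court system might
# turn into a "risk score" -- the step where opacity and bias concerns arise.
risk = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("example risk scores:", np.round(risk[:5], 2))
```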

Another factor is perhaps laziness dressed up as efficiency. AI can sift through reams of data in a fraction of the time that humans can, so what does it matter if the method is unclear? That opacity is harmless enough when AI is picking a record out of a data set, but it becomes far more problematic when life-changing decisions are at stake.

Unfortunately for us, these decisions are already being made. Before the technology becomes ubiquitous in everyday life, should AI be put on trial itself? Can it really be trusted to make accurate, informed decisions?

In AI we trust

It’s certainly difficult for those outside the field to know exactly how AI and algorithms reach their conclusions, and that gives them an air of untrustworthiness. AI is gradually becoming part of more administrative processes, including university admissions and financial decisions. How can a bank, for example, grant or deny a loan using a piece of software that isn’t fully transparent? Of course, we don’t always have to know how things work before we use them: medicines and smartphones are obvious examples. But those things go through rigorous testing, and it’s this that could help to build both trust in and better understanding of AI.

In 2017, researchers at Google’s DeepMind used two games to test whether neural networks tend towards competition or cooperation. In the first game, the two opposing algorithms quickly became more competitive in order to win. In the second, though, the software worked out that cooperation would lead to success. More tests like this will be needed to support the use of AI in sensitive matters like finance, the law and healthcare. The obvious obstacles are the time, effort and costs involved in recalling and testing every AI in use today... but if organisations were as diligent about understanding AI as they are becoming about protecting data, the task would be much less daunting.
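DeepMind’s actual experiments used deep reinforcement learning in gridworld games; the far simpler, hypothetical sketch below only illustrates the underlying idea that the reward structure determines whether learning agents end up cooperating or competing. The payoffs and learning settings are illustrative assumptions, not DeepMind’s setup.

```python
# Two epsilon-greedy learners repeatedly play a two-action matrix game.
# With prisoner's-dilemma-style payoffs, both learn to defect (compete);
# change the rewards so mutual cooperation pays best and the outcome flips.
import random

# payoff[(my_action, their_action)] -> my reward; 0 = cooperate, 1 = defect.
payoff = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

q = [[0.0, 0.0], [0.0, 0.0]]   # q[agent][action]: running value estimates
alpha, epsilon = 0.1, 0.1

def choose(agent):
    if random.random() < epsilon:
        return random.randrange(2)          # explore
    return max(range(2), key=lambda a: q[agent][a])  # exploit

for step in range(10_000):
    a0, a1 = choose(0), choose(1)
    r0, r1 = payoff[(a0, a1)], payoff[(a1, a0)]
    q[0][a0] += alpha * (r0 - q[0][a0])
    q[1][a1] += alpha * (r1 - q[1][a1])

print("agent 0 values (cooperate, defect):", [round(v, 2) for v in q[0]])
print("agent 1 values (cooperate, defect):", [round(v, 2) for v in q[1]])
```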

Another way to improve trust in AI’s conclusions could be to equip it with human values, taking decisions beyond black-and-white logic. Technologists like Ray Kurzweil would perhaps argue that creating emotionally intelligent AI is a slippery slope, edging us closer to the singularity. Perhaps the most promising answer to the enduring issue of trust could be Swarm AI.

The Swarm AI technique, developed by San Francisco-based company Unanimous AI, augments human knowledge with artificially intelligent software. Because the decision-making process is neatly tracked in stages, controlled trials would be unnecessary. Instead of using AI as an additional tool, then, law courts could integrate it within human thought processes.
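Unanimous AI’s actual platform has human participants steer a shared decision in real time, and its details are proprietary; the following is only a loose, hypothetical sketch of the general principle of aggregating human inputs stage by stage while keeping an auditable record of how the group arrived at its answer. The function and parameters are invented for illustration.

```python
# Loose sketch of swarm-style group estimation with an audit trail.
# Not Unanimous AI's API or method; every name and number here is hypothetical.
from statistics import mean

def swarm_estimate(initial_opinions, rounds=5, pull=0.3):
    """Each round, every participant moves partway toward the group consensus."""
    opinions = list(initial_opinions)
    audit_log = [("start", list(opinions))]
    for r in range(1, rounds + 1):
        consensus = mean(opinions)
        opinions = [o + pull * (consensus - o) for o in opinions]
        audit_log.append((f"round {r}", [round(o, 2) for o in opinions]))
    return mean(opinions), audit_log

# Example: five participants each give an initial probability that a defendant reoffends.
final, log = swarm_estimate([0.2, 0.4, 0.35, 0.7, 0.5])
for stage, values in log:
    print(stage, values)
print("final group estimate:", round(final, 2))
```

The point of the logged rounds is accountability: because each stage of the group’s convergence is recorded, the path from individual judgements to the final decision can be inspected after the fact.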

As much as we try to be neutral and fair, human decisions are laced with bias and prejudice. That’s partly why tools like COMPAS can be so useful: they can apply logic without being swayed by emotion. But if we’re going to use AI to inform decisions that change people’s lives, it needs to face up to scrutiny first. Sometimes, emotional considerations are also integral to important decisions. Merging AI with human intelligence via Swarm AI appears to be a useful way of matching human values with logic, while tracking each step in the process for accountability. As well as testing algorithms to find out whether we can trust them, it’s also worth asking whether we want to. If decision makers can view AI as a suggestion tool rather than an all-knowing entity, it can enhance their verdicts rather than dominate them.

Should AI and algorithms be subject to controlled trials before their practical application? Is Swarm AI the safest technique for AI decision making? Should AI be prohibited from influencing certain life-changing decisions? Share your thoughts and opinions.