Interview: James Duez, Rainbird – Making AI Accountable

If you know why and how a decision is made, you can trust it… Can’t you?

If artificial intelligence makes a decision that affects us, we need to know how it was made. The rush to adopt AI has certainly improved process efficiency, but enthusiasm for the technology has led businesses to implement intelligent systems that they don’t fully understand. Not being able to find out how or why a decision was made is unhelpful at best; at worst, it has serious implications for consumer-facing businesses. Thankfully, there is now a distinct push to make AI more accessible and accountable. Rainbird, a rising AI-powered automation platform, aims to do exactly that. D/SRUPTION spoke to James Duez, co-founder and executive chairman, to find out how.

Accountable AI

Through Rainbird, businesses can automate complex decision making. The AI-powered platform replicates the process that human experts across industries go through when making decisions.

So how does it work?

“If you want to make decisions that impact customers, you really need to understand how those decisions are reached. Whereas simple rules engines can be audited, they only make simple decisions, and machine learning can make very complex decisions but it’s black box and it can’t be audited. Rainbird is that holy grail that sits in the middle,” says Duez.

Rainbird, he explains, is a piece of technology made up of two halves. One half is an editor that enables business people to encode their own knowledge into a system, letting them build models called knowledge maps. These maps are more powerful than traditional decision trees or rules engines. The second half is the ‘runtime’, an engine that takes knowledge maps and connects them to various data systems. Because of this, the judgements that Rainbird makes are explainable.
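Rainbird doesn’t publish its internals, so the sketch below is purely illustrative: a minimal Python rule engine (every class, rule, and fact name here is hypothetical, not part of Rainbird’s API) showing the general principle behind explainable, rule-driven inference – each conclusion is logged alongside the conditions that produced it, so the final judgement carries its own audit trail.

```python
# Hypothetical sketch of a rule-driven "knowledge map" with an audit trail.
# All names are illustrative; nothing here reflects Rainbird's actual API.

class KnowledgeMap:
    def __init__(self):
        self.rules = []   # (conclusion, conditions) pairs encoded by an expert
        self.trace = []   # human-readable record of every inference step

    def add_rule(self, conclusion, conditions):
        """Encode an expert's rule: the conclusion holds if all conditions hold."""
        self.rules.append((conclusion, set(conditions)))

    def infer(self, facts):
        """Forward-chain over the rules, logging each step for later audit."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conclusion, conditions in self.rules:
                if conclusion not in facts and conditions <= facts:
                    facts.add(conclusion)
                    self.trace.append(
                        f"Concluded '{conclusion}' because {sorted(conditions)} all hold"
                    )
                    changed = True
        return facts

# A toy credit-referral judgement an expert might encode.
km = KnowledgeMap()
km.add_rule("refer_to_underwriter", {"high_loan_amount", "thin_credit_file"})
km.add_rule("thin_credit_file", {"credit_history_under_2_years"})

km.infer({"high_loan_amount", "credit_history_under_2_years"})
for step in km.trace:
    print(step)   # the decision explains itself, step by step
```

A real knowledge map also handles uncertainty, weighting, and questioning of users, but even this toy version shows why the approach is auditable where a trained statistical model is not: the reasoning chain is an explicit data structure, not a set of opaque weights.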

Unlike the majority of advanced, artificially intelligent solutions, Rainbird can be used by any business person – not just software engineers – to make complex decisions at scale. Through the Rainbird University, clients can learn to capture what they know and build systems that automatically make similar judgements. This ultimately means that they can delegate even the most challenging decisions to a system that is both trackable and transparent. In other words, it’s technology you can trust.

What’s AI anyway?

Any conversation about AI has to begin with the inevitable and seemingly unanswerable question: What is AI?

“If we’re going to use the term AI, it needs to describe a computer performing something that a human struggles to do, or that we thought only a human can do,” says Duez. “AI is everything that’s on the fringe of innovation, but that hasn’t quite proven itself yet. When it does start to work, it becomes the next generation of computer science.”

It seems that most people are still understandably uncertain about what AI is and what it really means. As a result, the term has been used excessively and often inaccurately. In some ways this could be seen as damaging, because it further muddies an already complicated concept. Duez, however, points out that the hype surrounding AI has been largely beneficial, encouraging research and development. Even so, he believes that AI as a phrase will once again go out of fashion.

“Five years ago, McKinsey ranked the top 10 most disruptive technologies in the world,” says Duez. “Was AI mentioned? No. But five or six of the technologies mentioned the word automation. We didn’t use the term AI then – we focused on what would impact the economy, and that’s automation.”

So, AI might fall out of use, but words like automation, machine learning, deep learning, and neural networks will still be useful descriptors as the technology fuels the next generation of computer science. In such a wide field, it’s important to distinguish between narrow AI (the automation we are seeing today) and AGI (Artificial General Intelligence). In Duez’s mind, AGI may be interesting from a research point of view, but it would require a massive shift that may never happen.

“All AI that is effective tends to be narrow. The output tends to be tools that people use to make them more efficient, and frees up the individual to innovate, deliver better customer service and build a rapport with them.”

Automation isn’t all about robots

Regardless of how you define it, today’s narrow AI is augmenting the human workforce in more ways than one.

“We are absolute advocates of the principle that the winning model is the right balance between people and technology,” says Duez. “It’s about better efficiency, which is why we’re not seeing the wholesale job losses that McKinsey were predicting five years ago.”

That said, Duez doesn’t believe that all AI needs to work with or rely on humans. Instead, he sees it as a question of trust and reliability.

“If you want to build a system that automatically does a job, you need to ensure that you can trust it and understand it. Look at a simple form of automation – anti-lock braking systems (ABS). I remember having to pump the brakes to slow the car down when braking. Then some bright spark came up with a physical machine that does it for you. That is a piece of technology that people have learned to trust,” he says. “Now, project forward to autonomous vehicles – the poster child for this whole debate about ethics and liability in AI. There are huge issues that have to be resolved, but these vehicles will continue to augment us.”

If humans can trust an AI-powered system, they can use it to improve whatever it is they are doing, and they can do so without necessarily thinking about it. Apply this to a corporation and the benefits are clear, taking away stress and reducing mental workloads.

The regulatory challenge

Regulating any advanced technology is not easy. Regulations are made to drive down risk, but the risks of a new technology are hard to predict. Alongside the support for ethical AI, however, has come a backlash against black box systems – AI-powered software that cannot explain how it makes decisions. Undoubtedly nudged by a certain data protection initiative, regulatory authorities are cracking down on opaque AI.

“After GDPR came into force back in May, some of the systems that businesses relied upon have had to be turned off. Many credit control systems are black box, and banks have had to turn them off because they can’t explain how they work,” says Duez. “There are lots of businesses trying to work out how to make machine learning explainable so they can switch those systems back on again.”

The more that organisations recognise the need to protect consumers, the more important it is to be explainable and accountable. Should it be a requirement that all of these systems, whatever we call them, can show how they make decisions?

“It’s critical that systems explain themselves where somebody else has been affected by the output. Any AI that makes decisions about our health, our finances, or access to services absolutely has to be explainable, otherwise it’s not responsible or ethical. The whole thinking around standards and regulations has to advance as quickly as the technology does.”

At the moment, a handful of organisations (such as EthicsNet) aim to promote discussion and positive practice around AI. However, for many reasons, there is no single authority to police its application. Instead, each industry is beholden to sector-specific bodies.

“There are different regulations for different environments,” says Duez. “An example would be the Financial Conduct Authority (FCA) because they regulate financial services. They have engaged with AI through a sandbox process. For example, financial businesses who want to develop tools for chatbots can do that through a regulatory sandbox, where the implications are limited and constrained.”

Across organisations, the tide continues to move towards protecting consumers from black box machines. The motivation is clear – without accountability, AI becomes damaging and deceptive. This is something that has been duly recognised by regulators and AI enthusiasts alike, forcing a shift in the adoption of the technology. Now, companies are expected to abandon black box systems in favour of explainability. Any company that acts on artificially intelligent decisions without a firm understanding of why those decisions were made is running a high risk of losing its credibility. Interestingly, when it comes to trust, accountable AI isn’t just on par with human decision making…

“If you ask Rainbird how it came up with a particular judgement, it will bare its soul – that’s something you’ll never get a stockbroker or a banker or even a doctor to do. That’s why there’s a future in these technologies, because they are transparent, and people aren’t.”