Artificial Intelligence, Lies & Trust

In Discussion with Dr Louis Rosenberg & Calum Chace

According to Louis Rosenberg, the founder and CEO of Unanimous AI, one of the defining turning points in the evolution of AI was the famous moment when DeepMind's AlphaGo beat the reigning Go champion. Whilst its predecessors had been good at playing games, what AlphaGo was good at was playing people. This ability to predict how people will act makes AI an incredibly useful tool, so it's no wonder that it is gradually creeping into our lives, often without us even realising – crunching data, providing insights, and turning them into useful information. On the one hand, this is massively beneficial to humanity. But on the other, it opens up a recurrent and incredibly important question – can we trust AI?

Artificial Intelligence, artificial morality
Worryingly, AI can lie, and it's better at it than we are. But what if we could instil a sense of morality in these systems? Calum Chace, AI expert and author of the science fiction novel Pandora's Brain, says that machines can be given a set of rules. Having morality, however, requires consciousness –

“We currently have no idea of how to generate consciousness, and it is quite possible that consciousness will not even be present when we create a superintelligence. Maybe consciousness is an inevitable by-product of a certain level of intelligence, maybe not. So, we can’t answer this question yet.”

Even so, as autonomous machines become more common, developers will face this dilemma time and time again. Louis Rosenberg says that the developers of self-driving cars already grapple with these considerations –

“The developers of self-driving cars have this exact issue, which is that a self-driving car has to make a moral decision. If a pedestrian walks into the road, should the car swerve and put the driver at risk, or should it not swerve and put the pedestrian at risk? These are moral decisions, and someone’s going to have to programme them. There are people working for self-driving car companies who are basically coming up with morality.”
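To make the dilemma concrete, here is a deliberately simplified sketch in Python of what programming such a decision might look like. The scenario, the risk numbers, and the weighting policy are all invented for illustration – no real autonomous-vehicle system works this way – but it shows how a handful of programmer-chosen values amount, in effect, to a moral position.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action and the (hypothetical) risk it places on each party."""
    action: str
    risk_to_occupants: float   # estimated probability of serious harm, 0.0 to 1.0
    risk_to_pedestrian: float

def choose_action(outcomes, pedestrian_weight=1.0, occupant_weight=1.0):
    """A toy 'moral policy': pick the action with the lowest weighted total risk.

    The weights are the moral judgement someone has to programme – valuing
    the pedestrian more or less than the occupants is exactly the kind of
    decision Rosenberg describes.
    """
    def weighted_risk(o):
        return (occupant_weight * o.risk_to_occupants
                + pedestrian_weight * o.risk_to_pedestrian)
    return min(outcomes, key=weighted_risk)

# A pedestrian steps into the road: swerving endangers the occupants,
# braking in lane endangers the pedestrian. All numbers are invented.
scenario = [
    Outcome("swerve", risk_to_occupants=0.3, risk_to_pedestrian=0.05),
    Outcome("brake_in_lane", risk_to_occupants=0.02, risk_to_pedestrian=0.4),
]

print(choose_action(scenario).action)                         # -> swerve
print(choose_action(scenario, pedestrian_weight=0.1).action)  # -> brake_in_lane
```

Change a single weight and the car's choice flips with it – which is exactly why the engineers setting those values are, as Rosenberg puts it, basically coming up with morality.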

It might be a while before developers can agree on how to programme these moral decisions, but in the meantime, regulation could help to control the development of AI. If artificially intelligent systems were policed, corporations and governing bodies would be discouraged from using them in underhand or questionable ways. The more transparent software is, the more trustworthy it becomes. Rosenberg says that a set of rules would be helpful in commercial markets –

“For consumer products, I think you can have standards. Companies will comply because it’s part of normal commerce. And if they don’t, there will be lawsuits. In normal commerce, standards will help keep AI safe.”

Organisations like the Partnership on AI and OpenAI could be vital in helping to ensure benevolent AI. But how effective will these organisations really be? Chace seems sure that their involvement is positive –

“In 2015, we had the ‘three wise men moment’ when Stephen Hawking, Elon Musk and Bill Gates all said the same thing, namely that strong AI is coming and it will either be the best or the worst thing ever to happen to humanity. Sub-editors couldn’t resist the temptation to attach endless pictures of the Terminator – and who can blame them?” he says. “The tech giants which are developing AI were a little annoyed by this, and stopped talking about AI in public. The Partnership on AI signals their return to the debate.”

According to Chace, the development of AI provides enormous benefit to humanity despite the challenges it will raise: “We need all our best brains engaged in the process of meeting these challenges, and that certainly includes the people developing cutting-edge AI.”

The killer question
And so, even with safeguards, standards and the support of Silicon Valley, can we ever create trustworthy AI?

“We have to!” says Chace. “An artificial general intelligence (AGI) which goes on to become a superintelligence that we cannot trust would be terrifying. Making sure that the first superintelligence really likes humanity, and understands us better than we understand ourselves, is the most important project for humanity this century. We already have a small number of very smart people working on this project. I am hopeful that they can succeed, but I confess to a case of temperamental optimism.”

Louis Rosenberg is decidedly less optimistic. “The answer is to assume you can’t trust an AI until it proves that you can,” he says. “There will always be bad actors who will take any technology and use it in negative ways. Nobody can control that. It’s just like a terrorist can turn a car into a weapon – the same thing will happen with AI.”

So, in contrast to law courts worldwide, it looks like we should assume AI is guilty until proven innocent. Not necessarily because the technology itself is inherently bad – but because anything is dangerous when put in the wrong hands. Both Chace and Rosenberg are certain that we need more people working towards ensuring benevolent, standardised AI. This includes encouraging the use of transparent and accountable systems that can trace decision making processes. With increased consumer pressure and the expansion of regulatory bodies, perhaps corporations will become more open about the systems that they use. As AI becomes part of our everyday lives, ensuring integrity will be a difficult but necessary task.
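What might such a traceable decision-making process look like in practice? Here is a minimal sketch, assuming a hypothetical decision function and log format: every input, output, and model version is recorded so that a decision can be reviewed after the fact. None of this corresponds to a specific regulation or real library – it simply illustrates the idea of an audit trail.

```python
import json
import time
import uuid

def audited(model_version, log_path="decisions.log"):
    """Decorator that appends a JSON record of every decision to an audit log.

    A toy illustration of 'traceability': the fields and file format are
    assumptions made for this sketch, not an established standard.
    """
    def wrap(decide):
        def inner(**inputs):
            decision = decide(**inputs)
            record = {
                "id": str(uuid.uuid4()),
                "timestamp": time.time(),
                "model_version": model_version,
                "inputs": inputs,
                "decision": decision,
            }
            with open(log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return decision
        return inner
    return wrap

# A hypothetical scoring model, wrapped so every call leaves a reviewable trace.
@audited(model_version="credit-model-0.1")
def approve_loan(income, amount):
    return "approve" if income * 0.5 >= amount else "decline"

print(approve_loan(income=40000, amount=15000))  # -> approve, and logged
```

Logging like this doesn't make a system moral, but it does make its decisions contestable – a precondition for the kind of accountability both interviewees call for.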

Would you trust intelligent software? Is it possible to ‘programme’ morality? Will whoever controls AI control the world? Comment below with your thoughts.

– Dr Louis Rosenberg & Calum Chace both took part in our Disruption Summit Europe – you can take a look at their videos here.