Artificial Intelligence, Artificial Ethics?

Building ethical AI is the next developmental stage for morally conscious companies

Question the ethics of AI, and you quickly find yourself down a rabbit hole of complicated considerations. Making and maintaining an artificially intelligent system that can uphold a moral code is a multilayered issue – for starters, how do you define morality itself? The late Professor Stephen Hawking famously predicted that the development of superintelligent AI could lead to the downfall of humanity. Avoiding this fate could hinge on how we build even the most rudimentary AI systems. According to Harry Armstrong, Head of Technology Futures at Nesta, the innovation foundation, organisations first need to understand the ethical impact of ‘narrow AI’. But what does ethical AI look like?

Making AI moral

Nesta was set up at the end of the 1990s to support innovation for social good. Armstrong’s role within the charity is to examine the social impact of emerging technologies like AI, and to assess their potential benefits, opportunities and threats. While he believes that discussing superintelligence has value, he expects the focus over the next couple of decades to be on the use of narrow AI. Establishing the ethics of narrow AI is very different from questioning the tenuous moral compass of an all-powerful, sentient machine.

“If we’re talking about narrow AI, then we can’t talk about it being ethical itself. When we talk about ethical AI, we’re really talking about the ethical design and use of the technology. It’s about what people do with it,” says Armstrong.

In other words, if the AI that currently powers many products, services, supply chains and organisational operations is to be ethical, then humans are entirely responsible for making it so. Encouragingly, businesses and governments seem to be working towards establishing codes of conduct. Nesta, for example, has worked with the British Standards Institution (BSI) to explore how standards can be used to encourage best practice.

“We need to actively shape the discussion of what is good and what is bad,” says Armstrong. “Much wider consideration is needed in terms of regulations for machine learning systems. The Centre for Data Ethics and Innovation that the government is setting up will hopefully start to do some of this work.”

AI is not always the answer

While discussions about the ethical reliability of narrow AI continue, should the technology be prevented from making certain life-changing decisions? Despite debate over the appropriate application of artificial intelligence, there are various case studies where AI clearly augments human decision making. Armstrong believes that some of the best examples of ethically designed systems come in the form of predictive analytics for child services, pioneered by Rhema Vaithianathan at the Centre for Social Data Analytics.

“This is about predicting when a child might be more at risk within a situation and whether social services should step in,” he says. “When we create these systems, we need to make sure they don’t cause an automation bias, which is a bias towards what the machine is saying. They need to support rather than detract from existing services. Transparency is a key issue: being clear about what model you’re using, what data you’re using, and how decisions are made.”

Unfortunately, by their very nature, the complexity of some machine learning tools makes them very difficult to interrogate. While work is being done to improve the interpretability of deep learning systems, companies still use black-box variants where decision making can’t always be reliably tracked. Another difficult question, but one that needs to be asked, is how far businesses can be trusted to create AI that works for the many instead of lining the pockets of the few.

“There is definitely a willingness and a keenness by big companies to use AI ethically. But you still have to remember that private incentives are not always aligned with broader public groups, which is partly why we need regulation. While there is a willingness there, there is a certain lack of knowledge about ethics, and about how to measure impact. I would say that we should never ever solely rely on private companies to self regulate, or trust them to always consider the wider impacts of what they do.”

Nevertheless, Armstrong believes that there are ways to overcome these issues, including a commitment to diversity when it comes to shareholders and board members. Businesses also need to think very carefully about what they want to gain by using artificially intelligent tools.

“If you don’t need to use the technology, then you shouldn’t,” he says. “For example, when it comes to driverless cars, we’re definitely not at the point where we can rely on the technology and I don’t think we will be able to for the next 10 to 15 years at least. There are a lot of situations where even very simple statistical models are more than adequate to solve a certain issue.”

Rage against the intelligent machine

While AI might not be ready (or even appropriate) for certain tasks, experimentation is an important part of creating functional systems. Developers need to carry out trials, and this means gradually integrating the technology into real-world scenarios. However, it’s not difficult to imagine people treating intelligent machines badly. As strange as it sounds, humans arguably have a moral responsibility to treat technology well. This includes protecting it from the influence of malicious cybercriminals – imagine, for instance, if poor software security meant that a self-driving car was hacked and reprogrammed to recognise stop signs as ‘go’. On another level entirely, it can also involve using simple manners.

“If you start being abusive to Alexa and treat it as a servant, the bigger issue is how that then builds negative interactions into human relationships,” says Armstrong. “I think we should be respectful of machines because they are human-like, and we shouldn’t dehumanise something that has human characteristics.”

Morality is a central issue when it comes to AI, but not just for the super-sentient software that could one day take over the human race. Today, countless organisations are in the process of adopting and understanding narrow AI. Armstrong suggests that this initial application is what discussions should focus on, and with good reason. Not only will it help to create technology that serves a social purpose, but if ethical development fails at this vital first hurdle, then the singularity could be much closer than we think. Perhaps the next time you ask Alexa for a recommendation, you might want to say please.

How can businesses follow ethical procedure when using AI? Can we trust big tech companies to pursue ethical policies? Is it possible to have a universal moral code for AI? Please share your thoughts.

For more insights about the progression of AI, sign up to our free newsletter.