Machine intelligence is transforming our world, but it can’t be trusted on its own…
The unprecedented expansion of machine intelligence, otherwise known as artificial intelligence, has had an immeasurable impact on business and society. In the future, however, the difficulty will lie in ensuring that intelligent systems act in the right way. Defining ‘the right way’ is only the first step, but what needs to happen to turn principles into practicality?
Ahead of her keynote at Disruption Summit Europe 2018, D/SRUPTION spoke to Nell Watson, educator and technology philosopher, to find out.
Earlier this year we asked whether it was possible to have a universal code of ethics for AI. The answer appeared to be no, given that global societies have wildly different ideas about what it means to be moral. However, Nell Watson, entrepreneur and machine ethics ambassador, begs to differ. She believes that ethics and morality are measurable, definable and, most importantly, computable. By combining machine intelligence, machine economics, and machine ethics, Watson hopes that humanity can thrive in an artificially intelligent world. But what do these terms mean, and what role do they have to play?
“Machine intelligence is a way of getting machines to express some kind of agency or ability to choose or influence or predict. Machine intelligence has been a tremendous advantage for those typical Silicon Valley companies that have lots of data and a lot of people working on algorithms. Similarly, machine economics is an incredible way of aligning incentives through tokens, reinforcements, and giving rights,” she explains.
“Machine ethics is going to give a bit of heart and soul to all kinds of business systems. With machine ethics, we can begin to interpret the things that are truly valuable to people to provide them with experiences that are beneficial and transformative within their lives.”
Machine ethics, says Watson, can help to create business structures that are fair, equitable, and kind. But how do global organisations achieve machine ethics? One approach has been to create a set of principles to guide the application and use of AI, but in Watson’s mind these are more of a reassurance than an actionable strategy.
“There are a lot of organisations out there that have constructed principles on how we ought to interact with machines, or how machines ought to behave, in a very general and abstract sense,” she says. “But principles are not implementable. We tend to reinforce children’s praiseworthy behaviour, and we need to give similar sets of examples to machines as well.”
The trouble with trust
As much as it has improved our lives, Watson feels that machine intelligence can’t be trusted on its own. This is a concern that runs throughout the technological community, especially given AI’s upward trajectory. So, in 2015, Watson co-founded EthicsNet, a non-profit organisation that works to create socially aware machines. Intelligent platforms are trained on data, so EthicsNet aims to build a dataset of examples of prosocial behaviour that can be used to socialise software. This has become especially important given that machines already act as our agents in a variety of roles. They carry out administrative tasks, represent our interests, and can even make phone calls for us. These roles will increasingly be carried out in social environments, which means that machines need to act in a socially acceptable way. Watson’s goal is not just to look at artificially intelligent applications, but to take a more general approach that socialises smart systems to react appropriately.
“If we are to trust machine intelligence, and use it in appropriate ways, we need things like the triple-entry ledger from blockchain to illustrate who has ownership of data, or to rescind access to that data if the system starts doing something erratic. With machine ethics, we can model agents. We can spot, using automated processes, whether a given agent is acting in a way that seems suspicious, or untrustworthy, and we can teach systems how to act.”
Instead of thinking about machines as machines, perhaps it’s time to take a new approach that regards technology in the same way as an animal, or even a young child.
“It’s the same as not wanting to keep a dog on a chain. We’d rather have that dog follow us around happily,” says Watson. “This is an important component so we can work with machines not by keeping them on a chain, but rather by socialising them, taming them, and getting them to be, in a sense, our friends.”
Getting to the heart of it… Literally
Making sure that systems are well socialised will become all the more important when they are no longer just our agents. In 25 years, Watson envisages the development of AI co-pilots that exist within our bodies.
“There have been experiments done where MIT scientists have put a liquid computer into the bloodstream of a cockroach, giving it an onboard computer. There’s no reason why, with a bit of optimisation, we couldn’t carry our smartphones not within our pockets but within the very fibre of our being,” says Watson. “Pretty soon we’ll have AI co-pilots running inside our bodies, powered by our blood sugar. They will have access to our entire nervous system and understand our emotions from inside us. There’s a chance the co-pilot may not even realise that it isn’t you.”
Fortunately, there is still time to establish and apply ethical datasets to machine intelligence before it begins to throw tantrums in our bloodstreams… But it’s worth being aware of just how powerful the technology will become. By combining machine intelligence, machine economics, and machine ethics, Watson is confident that humans can coexist with artificially intelligent systems. While various principles have been put in place to guide the application of machine intelligence, machine ethics is the key to technology that benefits rather than bludgeons humanity.
To hear Nell Watson speak at Disruption Summit Europe 2018, register here.