Should, and can, AI be held to universal standards?
You would hope intelligent human beings could agree on core moral standpoints for technologies, and for some this certainly seems achievable. Discussing the ethics of AI, though, is far more complex than simply agreeing not to fly your drone into a person. To work within the real world, AI has to be aware of the nuances and particulars of specific societies. An AI system in a high-surveillance country might differ from its equivalents in other parts of the world. Then, of course, there are ethical divides within societies themselves. Despite the scope for possible points of contention, is there a chance that AI development could be united under one ethical banner? Andrew Burgess, AI strategist and consultant, Lee Howells, AI and automation expert at PA Consulting, and Louis Rosenberg, CEO of Unanimous AI, think not.
“I think that, whilst having a global code of ethics for AI would be a laudable ambition, it would be impossible to enact,” says Burgess. “There are, in my mind, two big challenges. The first is the diverse nature of AI technology: AI is a broad, catch-all term for many loosely related, but different, technologies, so trying to find a common code would either have to be too compromised and diluted or so complex as to be incomprehensible.”
The other main challenge, he explains, is the different cultural approaches to privacy and ethics across the globe. Rosenberg agrees.
“Personally, I believe it’s an impossible goal to have a universal ethical code for artificial intelligence. The reason is simple – we don’t have a universal ethical code for human intelligence. Our ethical norms vary by country, by generation, by socioeconomic status, by upbringing, and of course, by political leaning.”
Rosenberg sees today’s highly targeted media as encouraging ethical divide. The scope for ethical disagreement is vast, and sensationalist headlines often exacerbate tensions. Another important point to remember, he says, is that ethics change over time.
“If we went back only 50 years in time, our ethical sensibilities would be very different. Imagine if the ‘universal ethical code’ for Artificial Intelligence was created in 1950, before views changed about civil rights, women’s rights and gay rights. Well, the same thing works going forward. If we build a universal ethical code for AI right now, what are the chances that people 25 years from now will think that it represents their sensibilities?”
Unanimous AI’s approach, therefore, is to build superintelligent systems that embed humans at their core. If these systems are to reflect human values, humans must be central actors within them. Even then, though, there is the worry that humans aren’t going to get it right. Fortunately, Unanimous AI’s track record suggests otherwise.
Creating a code for code
There is clear scepticism on the part of experts. That being said, others have taken a more optimistic view. Lee Howells of PA Consulting agrees that a truly universal code is unlikely, but also points out that perhaps we are aiming too big, too soon, and should start with a small set of simple principles. Howells says moves by the U.S., the European Union and the UK to do this are strong starts and, with work underway to align these and other efforts such as the Asilomar AI Principles, we are starting to see consensus. “The next steps need to be wider international involvement and agreement,” says Howells. “And then these principles need to be incorporated into regulation so that manufacturers are clear on the ground rules their AI products are to meet.” He added that “this is difficult because in looking at the ethics of AI we are really turning a spotlight on humanity’s ethics.”
The Partnership on AI could help to form a basic set of guidelines, if not rules, for the ethics of AI. The global consortium of benevolent parties aims to work out the best practice for AI use. So, for argument’s sake, let’s say that coming up with a code of ethics is not an impossible task. What would need to happen to make it relevant and worthwhile?
“To create a global code would require sections on each AI technology and each cultural region. And this is without even considering the differences in industries, particularly contrasting regulated and non-regulated businesses,” says Burgess.
The complexity of such a task goes without saying. There is, however, a difference between having ethical AI and ensuring that AI is used in an ethical way. Although an all-encompassing ethical code for AI itself might be unachievable, it’s possible to police the use of AI in concentrated spaces. A company or confederation, for example, could build protocols into their artificially intelligent systems so that certain actions or decisions were blocked. Organisations may have to accept that what they consider to be ethical AI is unlikely to be the universal consensus. They may also need to consider the potential impact of unethical AI, or the unethical use of AI. Setting up safeguards and establishing internal standards could help to protect against unethical practice. If these internal standards are compatible with those of other organisations, then cooperative networks will gradually grow. What happens when these standards are broken is another matter entirely. Perhaps sanctions, along with the scorn of the world’s technological community, could be enough to discourage certain unethical practices.
While there is scope for the development of global guidelines, it looks as if they could never be truly universal. As is so often the case, the issue seems to be a lack of understanding. If organisations across the world understood the implications of failing to meet a given standard, then perhaps groups like the Partnership on AI would be better placed to make an impact. That said, we can’t even agree on a universal ethical code for humans. Perhaps we should be asking ourselves a different question: if creating an ethical code for AI is impossible, should we be creating AI at all?
Is it possible to create universal ethical codes for technology? Can the Partnership on AI deliver suitable guidelines for AI use? What safeguards could be put in place to protect organisations from ‘unethical’ AI? Share your thoughts.
You can read D/SRUPTION’s report on Artificial Intelligence here