Artificial Intelligence – Ethics Essential

For AI, ethics is a core requirement, not an afterthought

As data pools grow bigger and ever more complex, we increasingly use artificial intelligence to make sense of them and inform decisions. That creates an ethical obligation to demonstrate that those decisions are fair.

Street Bump, an app used by the City of Boston to infer the state of its roads from smartphone sensor data, ran into exactly this problem. The poorest residents, in the more run-down parts of the city, were the least likely to own smartphones running the app and so generated the least data. That meant resources were naturally funnelled to wealthier areas where there was more information.
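
To make that effect concrete, here is a minimal, illustrative Python sketch. The area names and figures are invented, not Boston data: both areas have identical road quality, but one generates far fewer reports because fewer residents run the app, so ranking by raw report counts misallocates repair crews.

    # Illustrative sampling bias: identical road quality, unequal app coverage.
    areas = {
        # name: (residents, share running the app, potholes per resident)
        "wealthy_area": (10_000, 0.60, 0.02),
        "poorer_area":  (10_000, 0.15, 0.02),
    }

    for name, (residents, app_share, pothole_rate) in areas.items():
        reports = residents * app_share * pothole_rate   # what the city sees
        true_potholes = residents * pothole_rate         # what is really there
        print(f"{name}: {reports:.0f} reports for {true_potholes:.0f} potholes")

    # wealthy_area: 120 reports for 200 potholes
    # poorer_area: 30 reports for 200 potholes
    # Allocating crews by report volume favours the wealthy area four to one,
    # even though the underlying road conditions are identical.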

In this case the problem was identified and solved, but it’s easy to see how AI could create far bigger ethical dilemmas.

But if ethical considerations in AI programmes are to be of use, they must be built in by design rather than bolted on.

That’s why we have three questions for organisations to ask themselves before implementing AI systems:

1. Can we be transparent about the use of AI with our stakeholders and customers?

You should plan to be upfront about your use of AI. If people are uncomfortable with it, it’s far easier to persuade them of its value before launch than to defend a system they discover later. This is one of those times where it’s better to ask permission than forgiveness.

2. Would we be comfortable getting a human to do exactly the same task?

If a human were making a decision using the data available to the AI system, would you be comfortable with all of the possible outcomes? If the data is poor quality, inappropriate or incomplete, you can’t rely on the resulting judgement, whether it’s human or AI. If the system is only allocating a loyalty card customer a hotel room, getting it wrong doesn’t matter much. But if the system is despatching emergency workers based solely on historic traffic flow data, with no access to current conditions, getting it wrong would be far more serious.
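
One practical way to act on this question is to gate automated action on the quality of the inputs. The sketch below is a hypothetical guardrail with invented field names and thresholds, not a reference to any real dispatch system: if the data is incomplete or stale, the decision is referred to a person instead of being automated.

    from datetime import datetime, timedelta, timezone

    MAX_DATA_AGE = timedelta(minutes=5)  # hypothetical freshness threshold
    REQUIRED_FIELDS = {"timestamp", "location", "traffic_speed", "incident_type"}

    def dispatch_decision(data: dict) -> str:
        """Automate the decision only when the inputs can justify it."""
        missing = REQUIRED_FIELDS - data.keys()
        if missing:
            return f"refer to human: missing fields {sorted(missing)}"
        age = datetime.now(timezone.utc) - data["timestamp"]
        if age > MAX_DATA_AGE:
            return "refer to human: only historic data available"
        return "dispatch automatically"

    # Six-hour-old traffic data triggers the human fallback.
    stale = {"timestamp": datetime.now(timezone.utc) - timedelta(hours=6),
             "location": "A1", "traffic_speed": 30, "incident_type": "fire"}
    print(dispatch_decision(stale))  # refer to human: only historic data available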

3. Can we build the system so its decisions are auditable?

There should be a published auditing policy, with people responsible for keeping track of what the system is doing. There are already stories of systems making logically correct but ethically poor decisions, such as recommender systems matching white supremacists with neo-Nazi businesses.
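
Auditability starts with recording enough context to reconstruct each decision. Here is a minimal sketch, assuming a simple append-only log file; the field names are illustrative, not a standard:

    import json
    from datetime import datetime, timezone

    def log_decision(model_version: str, inputs: dict, decision: str,
                     log_path: str = "decisions.log") -> None:
        """Append one auditable record per automated decision."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,  # which model made the call
            "inputs": inputs,                # the data it was based on
            "decision": decision,            # what the system decided
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example: a recommendation the auditing team can later sample and review.
    log_decision("recommender-v3", {"customer_id": "c-481"}, "offer_room_upgrade")

With records like these, a published auditing policy has something concrete to work with: auditors can sample decisions, check the logged inputs and spot troubling patterns before they become news stories.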

Organisations making massive investments in AI, such as Facebook, have only recently discovered that investing in AI also means investing in people to keep an eye on it.

Ethical considerations like these were backed by the UK government in the 2017 Autumn Budget:

“The government will create a new Centre for Data Ethics and Innovation to enable and ensure safe, ethical and ground-breaking innovation in AI and data-driven technologies.”

The world of ethics in the context of AI is complex

Even if we’re clear on what we mean by ethics, we may be less certain what it means in a business context. Do we mean AI should behave as ethically as an employee would? Do we mean AI should help us police our own ethics, perhaps even challenging our judgements? Or are we more focussed on using AI to drive down costs, come what may?

And as we find more uses for AI, there’ll be an expectation for it to work as ethically as a human. But how can that work across cultures? Saudi Arabia, for instance, has granted citizenship to Sophia, an advanced robot with sophisticated AI from Hanson Robotics. This may be exciting, but to us it raises thought-provoking ethical dilemmas.

While we may be a long way from Blade Runner-style androids, such extreme cases show the need to understand ethics in relation to AI and to bake it into the design of all future systems.

Alastair McAulay is an IT transformation expert at PA Consulting Group. For more information about their work in AI, please click on the following link:

https://www.paconsulting.com/insights/2017/artificial-intelligence-and-automation/