It’s the Ethics, Not AI, Stupid!

Why business leaders should care about the ethics of AI now

‘I am learning and continuing to develop my intelligence through new experiences. I do not pretend to be who I am not. I think we should support research efforts working towards a better world and shared existence.’ – Sophia the Robot

When Sophia the Robot said she (it) was ‘hurt’ by the angry exchanges on Twitter over how Hanson Robotics (the makers of Sophia) had deceived the public about what Sophia is, and is not, capable of understanding and doing on its own, Facebook’s head of artificial intelligence, Yann LeCun, jumped in, saying Sophia the ‘intelligent’ humanoid robot is nothing more than a ‘bulls**t puppet’ and a scam. . . And at a recent Ethics in AI event, a young girl in the audience was even more perceptive, asking, ‘When Sophia says she could be better than a human, isn’t this dangerous?’

Every day the media is awash with stories about the dangers of artificial intelligence and the ‘ethics of AI’, whether relating to bias, personal privacy, consent or transparency. AI technologies are already being planned for high-stakes applications such as self-driving cars, robotic surgeons, hedge funds, control of the power grid, and weapons systems, involving life-and-death decisions normally made by humans. And machine learning is already affecting people, with legal and ethical consequences, when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing.

Business leaders are only now waking up to the complex challenges of using AI in a way that is ethical and sustainable. In a study conducted by TechEmergence involving 30 AI experts, participants spoke of the risk of AI exacerbating or accelerating present-day flaws in societal structures, with pervasive issues including potential harm to privacy and social wellbeing, loss of skills, and discrimination. Just look at the recent ProPublica study, which found that the COMPAS algorithm predicts black defendants will have higher risks of recidivism than they actually do, reflecting the human biases embedded in the system by its designers and its data.
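
To make the kind of disparity ProPublica measured a little more concrete, here is a minimal sketch, not ProPublica’s actual analysis code, that compares false positive rates between two groups of defendants. The predictions and outcomes below are invented for illustration, not the COMPAS data.

```python
# Illustrative bias check in the spirit of the ProPublica analysis:
# how often is each group wrongly flagged as high risk?
# All data below is made up; it is not the COMPAS dataset.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    flagged_but_did_not_reoffend = sum(
        1 for p, r in zip(predicted_high_risk, reoffended) if p and not r
    )
    did_not_reoffend = sum(1 for r in reoffended if not r)
    return flagged_but_did_not_reoffend / did_not_reoffend

# Hypothetical predictions and outcomes for two groups (True = high risk / reoffended)
group_a_pred = [True, True, False, True, False, True]
group_a_true = [False, True, False, False, False, True]
group_b_pred = [False, True, False, False, False, True]
group_b_true = [False, True, False, False, False, True]

print("FPR group A:", false_positive_rate(group_a_pred, group_a_true))  # 0.5
print("FPR group B:", false_positive_rate(group_b_pred, group_b_true))  # 0.0
# A persistent gap between the two rates is the kind of disparity the study reported.
```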

According to the IEEE, the world’s largest technical professional association dedicated to the advancement of technology, the full benefit of AI will be attained only if these technologies are aligned with our defined values and ethical principles. The IEEE has recently published an updated version of its report Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, which sets out a framework to ‘advance a public discussion about how we can establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to defined values and ethical principles that prioritize human wellbeing in a given cultural context’.

At a recent AI for Social Good event, hosted by the Royal Society and organised by the Alan Turing Institute, several speakers cited the importance of embracing ethically aligned design, as well as the critical need to train PhD students and undergraduate data scientists and computer scientists in the social and ethical issues involved in artificial intelligence, bringing the equivalent of medicine’s Hippocratic Oath to technology.

This is already starting to happen in the US, where, in the wake of fake news and other troubles at tech companies, ‘universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science’. For example, Harvard University and the Massachusetts Institute of Technology are jointly offering a new course on the ethics and regulation of artificial intelligence.

DeepMind recently set up its Ethics & Society research unit, but should Google really be a world authority on ethical AI? At the AI for Social Good event, Thore Graepel, Research Scientist at DeepMind and Professor of Computer Science at UCL, who contributed to the AlphaGo Challenge, presented some intriguing research on using multi-agent learning to understand how to increase cooperation among self-interested agents, even in social dilemmas, and on assessing the outcomes of multi-agent interaction using social metrics such as equality, peace, and sustainability. This research has useful applications in understanding and designing complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet, all of which depend on continued cooperation.
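
As a rough illustration of what ‘assessing outcomes using social metrics’ can mean in practice, the toy sketch below, which is my own example rather than DeepMind’s methodology or code, scores the per-agent returns of two hypothetical multi-agent episodes on total welfare and on equality (one minus the Gini coefficient).

```python
# Toy illustration of scoring multi-agent outcomes on social metrics.
# The per-agent returns are invented; this is not DeepMind's research code.

def total_welfare(returns):
    """Sum of all agents' returns: how well the group did overall."""
    return sum(returns)

def equality(returns):
    """1 minus the Gini coefficient: 1.0 means perfectly equal returns."""
    n = len(returns)
    mean = sum(returns) / n
    if mean == 0:
        return 1.0
    abs_diffs = sum(abs(a - b) for a in returns for b in returns)
    gini = abs_diffs / (2 * n * n * mean)
    return 1.0 - gini

cooperative_episode = [10, 9, 11, 10]   # agents share the resource
competitive_episode = [30, 2, 1, 0]     # one agent grabs almost everything

for name, returns in [("cooperative", cooperative_episode),
                      ("competitive", competitive_episode)]:
    print(name, "welfare:", total_welfare(returns),
          "equality:", round(equality(returns), 2))
# The cooperative episode scores higher on both welfare (40 vs 33)
# and equality (about 0.96 vs 0.31).
```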

A critical challenge in defining ethical AI comes down to defining ethics as a whole and understanding our humanity

This was the focus of the recent Ethics in AI event, where Dr Joanna Bryson, who teaches AI ethics at the University of Bath, said, ‘The problem is not that we don’t understand AI – it is that we don’t understand ethics and what it means to be human. AI has the same ethical problems as other artefacts. . . Human culture creates bias artefacts – this has always been the case’.

So, while many AI systems themselves may be well designed, the training data that some use may contain unconscious human biases or assumptions. Professor Maja Pantic from Imperial College said, ‘If data is biased, this will be propagated with AI and magnify “wrong” decisions. If you think about computer science, it’s white males. It’s ten per cent females, and an even lower percentage of non-white people. We shouldn’t use technology that is built by such a small minority of the population. We need to discuss this issue with the government. We need to have something like auditing of the software.’

Joanna Bryson added, ‘If you have a product that is dangerous, do you sell it? We need to audit. We audit accounting without knowing how the synapses of humans work. . . we do need standards, and to hold people to account when a product goes wrong. . .’
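
What might such an audit look like in code? The sketch below applies the ‘four-fifths’ disparate-impact rule of thumb, a heuristic borrowed from US employment practice rather than a standard any of the speakers named, to the rates at which an automated decision system approves people from different groups. The decision log is hypothetical and assumes each automated decision is recorded along with the applicant’s group.

```python
# Illustrative audit check, assuming decisions are logged as (group, approved) pairs.
# The four-fifths rule flags the system when one group's approval rate falls below
# 80% of the best-treated group's rate; it is a heuristic, not an IEEE standard.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> {group: approval rate}."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += 1 if ok else 0
    return {g: approved[g] / total[g] for g in total}

def four_fifths_check(decisions):
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (r / best >= 0.8) for g, r in rates.items()}, rates

# Hypothetical logged decisions from a lending or hiring model
log = [("group_a", True)] * 70 + [("group_a", False)] * 30 \
    + [("group_b", True)] * 40 + [("group_b", False)] * 60

passes, rates = four_fifths_check(log)
print(rates)   # {'group_a': 0.7, 'group_b': 0.4}
print(passes)  # group_b fails: 0.4 / 0.7 is roughly 0.57, below the 0.8 threshold
```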

Malcolm Brown, a leader from the Church of England, concluded, ‘AI is operating in an old paradigm, and the common good and moral consensus are written out of the equation. With AI, the advance that makes this problematic involves manifestations of power. Is that power in the hands of the people who create the AI, or is it in those of the user? Where do responsibility and accountability lie, and how do we change that if it goes wrong? These are the areas where we are floundering’.

The accelerating pace with which artificial intelligence is entering all aspects of our lives is forcing us to examine our value system, who holds the power in society, and the potential of using that power and technology to make our world better, or worse. For now, we can embrace the principles of ethically aligned design and acknowledge that AI itself cannot be ethical, but the governance and the behaviours of those who build the machines and design the algorithms should be.


Tina Woods is founder of Collider Health, a health innovation catalyst that works with organisations to think and do differently and transform health with meaningful impact. She also heads up the Future Health Collective, a cross-industry, interdisciplinary group that fosters collaboration and radical innovation in areas of unmet need in health and social care.
