Apocalyptic Visions of AI Are Diverting Attention from Policy Issues

The artificial intelligence industry is talking about big red buttons, not necessary regulation

A guest post by Marijn van der Pas

After decades of unfulfilled promises, artificial intelligence (AI) has started to deliver. Unhindered by rules and regulations, intelligent machines now exceed humans in areas such as image recognition. For some of us, this raises worries about killer robots. Consciously or subconsciously, the tech industry is feeding these apocalyptic visions in order to divert attention from the real issues surrounding AI.

Science and technology luminaries such as Bill Gates and Elon Musk have warned repeatedly that self-aware, superintelligent artificial systems will advance to a point at which they could become a threat to human existence. Both Musk and Gates are closely linked to some of the largest players in the field of artificial intelligence.

In recent years, not a single large developer of artificial intelligence has gone to any length to debunk this kind of apocalyptic prophecy. Meanwhile, some technology leaders have gone so far as to predict precisely when the technological singularity, with all of its sci-fi horror, will occur. And just to be sure, Google has called for a big red button in case things go wrong with artificial intelligence.

These visions divert attention from the large impact, both positive and negative, that artificial intelligence already has on society. AI is making stock market investment decisions, guiding healthcare treatments, deciding who does and does not get a loan, and flagging which employees are underperforming.

Rules and regulations urgently needed

With machine intelligence developing at an ever-increasing pace, it is clear that rules and regulations to secure human welfare are urgently needed. The American Association for the Advancement of Science (AAAS), the world’s largest general scientific society, is aware of this need: “With appropriate policies in place, robots should become our ‘best friends,’ not our ‘worst nightmare,’” experts said at its Forum in April last year.

Despite this awareness, it is no surprise that technology companies are wary of policies and regulations, as these tend to affect their business and profitability. What is more worrying, though, is that prominent AI players have united in the Partnership on AI, an NGO-like organisation meant to build trust in AI and to convince the world that they are developing machine intelligence in a responsible way.

Governments and regulators should wonder why Apple, Amazon, Facebook, Google/DeepMind, IBM and Microsoft have set up this partnership merely to “provide guidance on emerging issues related to the impact of AI on society”, when these companies are themselves the main drivers of the technology and therefore best placed to safeguard its ethical development.

Instead of merely providing guidance, Apple, Amazon, Facebook, Google/DeepMind, IBM and Microsoft could work to draft intelligent rules and regulations that leave ample room for the further development of AI while at the same time making sure the technology serves human needs.

Keeping ethical AI promises

For a start, it would help if these companies kept their past promises on the subject of ethics. Unfortunately, to date Google has failed to do so. The technology company agreed to set up an ethics and safety board when it acquired AI company DeepMind three years ago; the board was to ensure the technology would not be abused.

Creating the AI ethics board was one of the acquisition conditions set by DeepMind’s founders. As part of the deal, the founders also stipulated that “no technology coming out of DeepMind will be used for military or intelligence purposes.”

In recent years, DeepMind has publicly confirmed setting up the ethics and safety board. Still, Google and its subsidiary refuse to reveal who sits on it. British newspaper the Guardian has reported that it “has asked DeepMind and Google multiple times since the acquisition on 26 January 2014 for transparency around the board”.

In January last year, the paper received its only answer from DeepMind: “There hasn’t really been anything major yet that would warrant announcing in any way. But in the future we may well talk about those things more publicly,” the Guardian quotes DeepMind chief executive Demis Hassabis as saying.

A call for regulations

While business decision-makers are calling for stringent ethical standards for AI, experts are calling for regulation. That necessity is identified, for instance, in a recent survey by the Pew Research Center and Elon University involving 1,302 technology experts, scholars, corporate practitioners and government leaders.

The study quotes technologist Anil Dash as saying: “The best parts of algorithmic influence will make life better for many people, but the worst excesses will truly harm the most marginalised in unpredictable ways. We’ll need both industry reform within the technology companies creating these systems and far more savvy regulatory regimes to handle the complex challenges that arise.”

Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University, notes in the study: “We already have had early indicators of the difficulties with algorithmic decision-making, namely credit scores. Their computation is opaque and they were then used for all kinds of purposes far removed from making loans, such as employment decisions or segmenting customers for different treatment.”

Such credit scores, he continues, “leak lots of private information and are disclosed, by intent or negligence, to entities that do not act in the best interest of the consumer. Correcting data is difficult and time-consuming, and thus unlikely to be available to individuals with limited resources. It is unclear how the proposed algorithms address these well-known problems, given that they are often subject to no regulations whatsoever.”

An FDA for Algorithms

In the study, researcher Andrew Tutt calls for an “FDA for Algorithms”, noting that “the rise of increasingly complex algorithms calls for critical thought about how to best prevent, deter and compensate for the harms that they cause … Algorithmic regulation will require federal uniformity, expert judgment, political independence and pre-market review to prevent – without stifling innovation – the introduction of unacceptably dangerous algorithms into the market.”

Governments are therefore wise to start regulatory initiatives on AI soon. In October, the US National Science and Technology Council released a report, Preparing for the Future of Artificial Intelligence, together with a National Artificial Intelligence Research and Development Strategic Plan. Earlier this year, the Committee on Legal Affairs of the European Parliament released a draft report with recommendations to the Commission on Civil Law Rules on Robotics.

Yet with self-driving cars already taking to the road in several countries, progress on drafting actual regulations is sluggish. Late last year, the science and technology committee of the British parliament stated that its government had been slow to recognise both the opportunities and the threats posed by artificial intelligence. According to the committee, the UK should establish a commission on AI to provide global leadership on the technology’s social, legal and ethical implications.

At this moment, the AI industry leaders’ Partnership on AI looks more like a marketing tool to convince people that artificial intelligence really will be human-friendly than a sound attempt to ensure that the right policies and regulations for AI are in place. The best way to build acceptance of, and trust in, artificial intelligence is to do just that.

Marijn van der Pas works on human rights and sustainability issues. He is a board member of the Dutch grassroots nature festival Fête de la Nature and a campaigner for the ethical Artificial Intelligence NGO FullAI.