What’s Up With Google’s AI Policy?

Why Google has released a policy on Artificial Intelligence – and what it means for the future

“Don’t be evil.” That was, until recently, the unofficial motto of Google – a phrase which had featured in the company’s code of conduct since 2000. Back then, when Google was a trendy, up-and-coming tech company, the tongue-in-cheek statement served as a playful reflection of anti-corporate sentiment. Now that the company has grown into a world-dominating tech giant, however, it seems that this glib motto falls wide of the mark.

It makes sense, then, that Google removed the statement in April. However, there could be more at work here than the simple desire to appear more professional. In March, news spread that Google was working with the US Defense Department to develop artificial intelligence for the analysis of drone footage. Firm in the belief that the company has no place in warfare, several Google employees resigned, and an internal petition was created calling for the CEO to cancel the project immediately. Their protest was a success, and the contract was eventually terminated by Google.

Now, the company has complied with another request from the petition, to ‘draft, publicise, and enforce a clear policy’ to never build warfare technology. This came in the form of a wider statement on Google’s development and use of AI.

The seven principles of AI

On 7th June, Google CEO Sundar Pichai released a blog post laying out seven principles which will guide the company’s use of AI in the future. These objectives state that AI should:

1) Be socially beneficial
2) Avoid creating or reinforcing unfair bias
3) Be built and tested for safety
4) Be accountable to people
5) Incorporate privacy design principles
6) Uphold high standards of scientific excellence
7) Be made available for uses that accord with these principles

In addition, Google promised that it would not pursue ‘technologies that cause or are likely to cause overall harm,’ weapons, or technologies whose purpose infringes internationally accepted norms, laws and principles on human rights.

This blog post clearly does a lot more to set out Google’s position on AI than a brash claim to reject evil. The company’s direct acknowledgement of the dangers and ethical implications of this powerful technology is a step in the right direction towards a regulated, responsible AI industry. In fact, Pichai states in his post that the principles ‘are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.’ That makes everything clear, doesn’t it?

Castles in the AI sky

Unfortunately, things aren’t quite that simple. Whilst Google might claim that these principles are comprehensive and binding, it is questionable whether a concrete code of ethics for AI can really be drawn up at all. Statements and guiding principles like this will always be open to interpretation, and it would be easy enough for the tech giant to revoke these assertions in future with few or no repercussions – as often seems to be the way in the technology world.

Google’s blog post and AI principles therefore shouldn’t be taken at face value. We asked our D/SRUPTION AI experts for their thoughts on this development, with George Zarkadakis, Calum Chace, Marijn van der Pas, Terence Mauri and Andrew Burgess casting a critical eye over Google’s new AI statement.

What do the experts think?

For George Zarkadakis and Calum Chace, the move from Google is a positive step forward – a clear reaction against public outcry, and a resolution to make changes for the future.

As George Zarkadakis states, “The principles are a result of the recent incident with Google pulling out of the Pentagon contract due to employee discontent, as well as the heating up of public debate around the use and abuse of AI more generally. These events have been strong indicators that Google, as a global AI leader, ought to take a clear position. I personally think that these principles are a good way forward.”

“The post recognises that for AI to become socially acceptable it must abide by social norms and aspirations,” he continues. “Google’s mission statement aims to resolve the AI vs. humans debate in favour of humans remaining in control. There are many voices that call for an international treaty of AI ethics. However, I believe that the best route is via industry leaders self-regulating, at least in the areas of ethical use and particularly in accountability and avoidance of bias.”

Author and speaker on AI, Calum Chace, agrees: “No doubt part of the reason for this statement is the flak that the tech industry is getting over privacy and fake news,” he notes. “It’s great that Google acknowledges the serious risks in the technologies that it is developing. There is something missing though – the acknowledgement that technological unemployment is a real possibility – not in the next 5 to 10 years, probably, but in the next 15 to 30. It is irresponsible of them to deny that it could happen, as that prevents policy makers from drawing up contingency plans.”

Andrew Burgess, a strategic advisor on AI, is not impressed, on the other hand. “I have to take a very cynical view of this blog post,” he states. “It says very little of substance and can easily be interpreted in many different ways. The actual AI principles are, in my opinion, vacuous and meaningless. The exclusion list is equally empty of real commitments.”

Is it possible that we are looking at Google in the wrong light with respect to this AI statement? For Burgess, it is Google’s status as a seller of adverts which should inform the way we analyse its AI operations.

“The first thing to remember is that Google’s customers are not people like you and me,” he says. “Google’s customers are the advertisers. Most of the AI efforts that Google works on will be focused on presenting the most appropriate adverts to people using its various platforms, and maximising click-throughs. When Google claims that they ‘use AI to make products more useful’ – what they actually mean is that they use AI to make their products more addictive and to be able to target ads better.”

If Google’s AI principles conflict with its corporate interests in advertising, then, this could spell trouble ahead.

Are Google’s AI objectives enforceable?

It’s all very well for Google to set out certain intentions around AI, but if these principles aren’t enforceable then the value of such a statement is questionable. Do our experts think that Google’s blog post actually means anything, and that it will effect concrete change?

For Global Disruption Expert Terence Mauri, Google’s AI objectives do matter, as they give the company a roadmap against which to hold itself accountable. They are also important for Google’s corporate image:

“The statement sends out a positive message of intent to Google employees that the company is doing the right thing, and to the media that they are a ‘good’ company that will not develop AI for use in weapons,” he remarks. “However, the new AI ethics policy doesn’t spell an end to Google’s involvement with the Pentagon, and it’s not legally enforceable. Whether Google can prevent its technologies from being weaponised (or even if that’s wrong) is still up for debate.”

Marijn van der Pas, a campaigner at Full AI, notes that Google’s track record on AI ethics isn’t exactly sparkling.

“After four years, Google still hasn’t disclosed the identities of the outside experts on the ethical review board of its DeepMind unit,” he says. “Though there is room for improvement of Google’s new ‘concrete standards’ as set out in the blog post, we first and foremost would like to hear which body will oversee whether Google upholds its own standards and whether it will be transparent on such a body’s conclusions.”

“What’s more,” he adds, “Google’s AI services should not only be accountable to people (as set out in objective 4), its AI should in all cases be able to explain why it made a certain decision.”

Google has a questionable history in AI ethics, then, and it is not clear that the new AI policy will set this straight. Much like van der Pas, Andrew Burgess remains highly unconvinced of Google’s intentions:

“In the blog post,” he says, “Google talk about the ‘overall likely benefits’ of AI. But who defines these? This could simply mean profit. Further, when they design their AI systems to be ‘appropriately cautious,’ how cautious is this? Who defines this? (Spoiler alert: Google does). Lastly, they are also only going to test their AI products in ‘appropriate cases.’ Surely all AI systems should be tested thoroughly?”

It seems that the new AI policy might not really put significant limitations on Google’s AI projects after all.

Google: “we are not developing AI for use in weapons”

It must be said that Google’s decision not to pursue certain kinds of AI applications does provide an added dimension to the discussion, as it is easier for the company to hold itself to this kind of promise. However, although it won’t actively develop AI for use in weapons, Google has said that it will continue to work with the military in ‘many other areas.’

Furthermore, whilst Google might never actively develop weapons itself, its technology could arguably find its way into such applications in the future. These are the unintended or secondary consequences of technology. For Andrew Burgess, this is cause for another serious critique of the blog post:

“According to their statement Google will still be able to create AIs whose secondary purpose or implementation is to cause or indirectly facilitate injury to people. But consider this: does the software on a self-guided missile have a primary purpose to facilitate injury? Or is it just guiding the missile, and it is the hardware whose primary purpose is to facilitate harm?”

After the combined scrutiny of D/SRUPTION’s experts, Google’s firm assertions have ended up looking decidedly vaguer.

A closing statement on AI

So, what does Google’s AI statement really mean? In spite of its grandiose status as a new, ethical corporate policy, the release of these AI objectives was clearly designed primarily to appease furious Google employees. This, in itself, is interesting. For as much as we talk of all-powerful technology companies such as Amazon, Facebook and Google, these events serve as a stark reminder that it is technology workers who hold the future of our world in their hands.

As for Google’s AI principles more specifically? Whilst they might be well intentioned, they are highly open to interpretation and probably won’t make much of a difference to Google’s operations.

Significantly, they might also be insufficient in a completely different way. The last word goes to Andrew Burgess.

“It frustrates me that Google think they can get away with such meaningless twaddle,” he says. “Before Google start even thinking about creating some clever voice AI that makes restaurant bookings they should make sure that there is zero hate speech and abusive images across all of their platforms. Saying that is too big a task is simply not an excuse.”

Given the reach and influence of Google and its services, this is a very good point.
