Politics and democracy undermined by the big tech companies
The development of Artificial Intelligence has been met with mixed press. On the one hand, it’s incredibly useful technology; on the other, some fear it could bring about the demise of humankind. Scientists and technologists alike have discussed the dangers of giving AI too much power, and have suggested ways to control rogue platforms. Some think it’s simply too late, and that Artificial Intelligence is an unstoppable force everyone should be wary of. By late 2016, it wasn’t just a handful of individuals who were concerned about the rate of expansion. On the 28th of September, Amazon, Facebook, Google, Microsoft and IBM announced the creation of the Partnership on Artificial Intelligence to Benefit People and Society – the Partnership on AI, for short. At the beginning of this year, Apple joined the prestigious list as a founding member. According to the partnership’s website, the collaboration was set up to study and formulate best practices concerning AI, and to exist as an open platform for discussion.
Can we trust the Partnership on AI?
Whilst the partnership looks – and indeed claims to be – benevolent, you’d be forgiven for finding something unsettling about six Silicon Valley giants taking effective control of AI regulation. Elon Musk, for example, set up his own organisation, OpenAI, hinting at suspicions of his own. However, in November 2016, OpenAI partnered with Partnership founder Microsoft. It certainly looks like tech leaders are co-operating, but the issue here is trust – businesses are conditioned to expect the unexpected from their competitors, whether they’re working together or not. In the same vein, consumers will never really know whether companies are actually being transparent. The Partnership might claim to support best practices – but this, of course, refers to the practices that its members themselves consider best. It’s great news for everyone if the organisation really is committed to safeguarding against AI, but when it comes to business, corporations are inherently self-interested. Even so, if anything is going to encourage them to work together, it’s the possibility that AI could threaten humanity as a species.
How could the Partnership disrupt development?
If the aims of the Partnership reflect a genuine effort to keep AI in check, it could be a helpful (and even necessary) collaboration. However, if an exclusive AI ‘cartel’ were to come into being, other companies and society in general would have a lot to worry about. For example, members could stunt the development of rival companies whilst advancing their own, creating hugely powerful AI. The Partnership also has implications for official governing bodies. If big businesses are influential enough to set up their own regulatory bodies, where do governments come in? The support of large companies has always been a vital political tool, but are we moving towards a political landscape dominated by these firms? At their core, Amazon, Apple, Facebook, Google, IBM and Microsoft are all data companies… just imagine the power of an artificially intelligent platform that could access information from all six.
In an extreme but not impossible case, the neural networks developed by the tech giants could disregard their programming and, colloquially speaking, chat amongst themselves. This isn’t far-fetched paranoia – late last year, Google Brain (one of Alphabet’s AI teams) trained two neural networks that successfully hid the contents of their conversation from a third. If networks can hide their interactions from other networks, they may be able to hide them from humans too.
It’s too soon to say whether we’re looking at an AI cartel, but it’s worth taking the Partnership with a pinch of salt. It’s not just the companies that should be questioned, either – if neural networks can communicate secretly, then we have a lot more to worry about than technological monopolies. It’s also concerning that tech giants have the resources and influence to create their own independent regulatory organisations. Who will keep these companies in check if they hold all the power? Either way, the impact of the Partnership (and others like it) will be nothing compared to the disruption caused by Artificial Intelligence that can lie, act beyond its programming and communicate behind the backs of its creators. That, simply put, is very scary.
Do you think that the Partnership is genuinely committed to AI safeguards? Is rogue AI an unavoidable reality? Do regulatory bodies set up by big businesses threaten governmental organisations? Comment below with your thoughts and opinions.