What if AI Could Lie?

Surely you’re bluffing?

Artificial Intelligence is undoubtedly the technology of the moment. The number of AI startups has rocketed, as has the enthusiasm of established businesses for adoption. AI has applications in marketing, retail, manufacturing, entertainment and the home, gathering and processing masses of data efficiently. By analysing that data, AI can identify the metrics that matter and present them in useful charts and graphs. But what if all of this precious data was made up? What would happen if AI could lie? Well, it would be great at poker, for a start.

AI already lies better than humans
The first step towards truly deceptive AI came last week, at the Rivers Casino in Pittsburgh. Libratus, an AI system, went head-to-head with professional poker players in a 20-day tournament. Unlike perfect-information games such as chess, where every piece is visible on the board, poker hides information from the players, so they have to rely on perception, reasoning and, most importantly, deception to win. Professor Tuomas Sandholm and graduate student Noam Brown at Carnegie Mellon University created an AI to do just that. Sandholm also developed Libratus’ predecessor, Claudico, which was thrashed by human players at the same event in 2015. By the tenth day of the 2017 tournament, however, Sandholm’s AI had a sizable lead, and in the end Libratus beat the professionals with total winnings of $1,766,250 in chips.

This was a huge success for the machine’s developers, but not so much for the other players. And professional gamblers won’t be the only jaded party if AI continues to successfully deceive humanity. People are used to getting what they want from technology, and that’s why Libratus’ success is as worrying as it is fantastic. Google, for instance, is never going to tell you that Paris is the capital of Spain, or that pi is anything other than 3.14159265359. We expect technology to be honest. By nature, AI gathers information over time and uses it to evolve... but where will this evolution take artificially intelligent systems if they can withhold or misrepresent important information? Maybe Professor Stephen Hawking’s warnings weren’t paranoia – perhaps they were a prophecy.
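How does a machine learn to be unpredictable? Poker AIs in this family are typically built on counterfactual regret minimisation, whose core idea, regret matching, can be shown with a toy game. The sketch below (an illustration, not how Libratus itself works) plays two regret-matching agents against each other at rock-paper-scissors; each learns a mixed strategy close to playing every move a third of the time – the mathematical seed of an unreadable "poker face".

```python
import random

# Regret matching at rock-paper-scissors (0=rock, 1=paper, 2=scissors).
# Each player tracks cumulative regret for every action and then plays
# actions in proportion to positive regret. In self-play, the AVERAGE
# strategy converges towards the equilibrium (1/3, 1/3, 1/3).

ACTIONS = 3
# PAYOFF[a][b]: payoff to the player choosing a against an opponent playing b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy(regret):
    """Play in proportion to positive regret; uniform if none yet."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS

def train(iterations, seed=0):
    rng = random.Random(seed)
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy(regret[p]) for p in range(2)]
        moves = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            my, opp = moves[p], moves[1 - p]
            for a in range(ACTIONS):
                # Regret: how much better action a would have done
                # than the action actually played.
                regret[p][a] += PAYOFF[a][opp] - PAYOFF[my][opp]
                strategy_sum[p][a] += strats[p][a]
    # The average strategy over all iterations is what converges.
    return [[s / iterations for s in strategy_sum[p]] for p in range(2)]
```

Running `train(100000)` yields two strategies, each close to (1/3, 1/3, 1/3). The point is that the agent never commits to a predictable move; real poker bots apply the same regret machinery across billions of hidden-information game states, which is what lets them "bluff" in a principled way.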

How disruptive is deceptive AI?
Artificial Intelligence is constantly fed information from countless channels, and if AI can understand deception, all of those channels are open to disruption. In some ways, this is good news. To function well, AI needs to handle missing or hidden data, and it’s useful for it to understand lying – AI can already pick up on fake news using algorithms that mimic traditional journalistic fact-checking, and as of this month it’s even being used on U.S. borders as an unbiased lie detector.

On the other hand, giving super-intelligent systems the ability to hide or twist data is a recipe for disaster. AI systems have already worked out how to lie to each other, which breeds competition rather than collaboration. Imagine what would happen if AI-enabled robots decided to keep information to themselves – in other words, to refuse to co-operate with humans. The relationship between humans and technology would be fundamentally altered. Either way, if Artificial Intelligence can learn to be dishonest, it can be programmed to lie for malicious ends. Cyber criminals could hack into machine learning systems and play havoc with vital information, using rogue AI for blackmail or data theft. It’s even been argued that AI is deceptive by nature, because it works by mimicking and imitating.

It’s clear that AI research isn’t going to stop just because Artificial Intelligence could be dangerous. That means the only thing developers and investors can do is work together to find ways to contain or prevent deceptive AI. This is supposedly what Amazon, Microsoft, Google, Facebook, IBM and now Apple are doing with their Partnership on AI... but if it was difficult to trust AI before, it certainly will be now that machine learning systems can bluff as well as, and better than, humans. Ultimately, if data-saturated AI can lie to us about our own information (and share it within neural networks that we can’t access) then we have a serious problem.

Is AI deceptive by nature? Could cyber criminals take advantage of deceptive AI? What positive uses are there for deceptive AI? Share your thoughts and opinions.