Artificial Intelligence – A Hacker’s New Best Friend?

From machine learning to machine lying

Biometric recognition technology is shaping up to be an important security tool. One day, you could pay for your groceries, unlock your house and access your most sensitive personal information using nothing but your physical appearance. Retail bank Barclays has already rolled out voice recognition to customers, allowing them to verify their identity simply by speaking. But while machine learning has added a personalised dimension to security protocols, the same techniques can also be used to compromise them. Cybercriminals have yet to fully exploit the vulnerabilities of AI, but it’s a possibility that both organisations and individuals should prepare for.

How could hackers use artificial intelligence to their advantage, and can we safeguard against it?

Proving once again that technology is not infallible, various research teams have shown that artificially intelligent software can be tricked into recognising – or rather, failing to recognise – certain information. Inputs doctored in this way are known as adversarial examples, and they can be shockingly simple. Last year, researchers at Carnegie Mellon University demonstrated that they could render themselves invisible to facial recognition tech by wearing a pair of oversized, gaudy glasses. What’s more, by altering the patterns printed on the specs, the team even managed to convince the software that they were famous celebrities. While this initially comes across as amusing, it’s a clear indicator that machine learning systems can be easily manipulated.
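
To make the idea concrete, here is a minimal sketch of the best-known recipe for crafting adversarial examples, the fast gradient sign method (FGSM). It is illustrative only: it assumes a generic PyTorch image classifier supplied by the caller, and the function name and epsilon value are hypothetical. The Carnegie Mellon glasses attack used a more elaborate, physically realisable optimisation, but the underlying principle – nudge the input in the direction that most confuses the model – is the same.

```python
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    model   -- any PyTorch image classifier (assumed, supplied by the caller)
    image   -- input tensor with pixel values in [0, 1], shape (N, C, H, W)
    label   -- true class labels, shape (N,)
    epsilon -- maximum per-pixel change; small enough to be invisible to the eye
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range and detach from the graph.
    return adversarial.clamp(0, 1).detach()
```

Because the perturbation is capped at epsilon per pixel, the altered image looks identical to a human while the classifier’s prediction can flip entirely.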

Adversarial examples are most often discussed in relation to images, but voice recognition could also be at risk. A team of researchers at MWR Labs has demonstrated exactly how to exploit vulnerabilities in Amazon Alexa, tricking the device into streaming audio data to an external server. Fortunately, that loophole was closed this year. Hacking into Alexa was decidedly complicated, but altering an image is dangerously easy. In fact, adversarial examples can be printed out onto standard paper, photographed with a run-of-the-mill smartphone and still cause algorithms to misclassify information. To the naked eye, the edit is undetectable. Imagine the havoc that would be wreaked if autonomous vehicles began ignoring the true meaning of road signs because someone had subtly altered them. Although no conclusive defence has been found as yet, developers and other technology stakeholders can analyse their systems’ vulnerability to adversarial examples with the open-source CleverHans library at cleverhans.io.
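
CleverHans ships ready-made implementations of attacks like the one sketched above. As a rough, library-agnostic illustration of what such an audit might look like, the following sketch reuses the hypothetical fgsm_perturb helper from the earlier example to compare a model’s accuracy on clean and perturbed inputs; the function name and epsilon are assumptions, not a prescribed API.

```python
import torch

def robustness_check(model, loader, epsilon=0.03):
    """Compare accuracy on clean images with accuracy on FGSM-perturbed ones."""
    model.eval()
    clean_hits = adv_hits = total = 0
    for images, labels in loader:
        with torch.no_grad():
            clean_hits += (model(images).argmax(dim=1) == labels).sum().item()
        # fgsm_perturb (from the sketch above) needs gradients, so no torch.no_grad() here.
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        with torch.no_grad():
            adv_hits += (model(adv_images).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    # A large gap between the two numbers signals vulnerability to adversarial examples.
    return clean_hits / total, adv_hits / total
```

A model that scores well on clean data but collapses on perturbed data is exactly the kind of system an attacker with a printer and a smartphone could exploit.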

How threatening are adversarial examples?
Adversarial examples have clear disruptive implications for pattern recognition technology. If people can fool facial recognition software simply by wearing a pair of novelty glasses, this doesn’t bode well for the integration of AI into security systems. By tweaking the patterns that machine learning systems rely on, criminals could make themselves effectively invisible or impersonate others. As the Amazon Alexa hack shows, voice recognition tech has weaknesses of its own. In the consumer market, vulnerabilities in pattern recognition could stall the adoption of face and voice authentication. Given the potential for hackers to get hold of your personal possessions and data, perhaps this is a good thing.

The real spoils, though, are likely to come from big businesses and governing bodies. Adversarial examples could become a serious problem for organisations like the US government, which maintains a vast facial recognition database. Find a way to tamper with facial recognition, and you render that information useless. In short, adversarial examples are a powerful security threat, stretching from selfie pay at a coffee shop to access controls on high-security buildings. The ability to so thoroughly mislead AI software points to the continuing evolution of cybercrime, the unpredictability of machine learning systems and, of course, the importance of not relying entirely on technology.

It looks as though machine learning could become a cybercriminal’s best friend. The more developers do to ensure security, the more opportunities there are for hackers to find loopholes. This doesn’t mean that biometric recognition technology won’t still become a key security tool in future. It’s unlikely that organisations will abandon pattern recognition tech, but they will need to make a concerted, co-operative effort to ensure that it is safe. So far, attempts to teach AI to recognise adversarial examples have been far from successful. However, there is a silver lining: if machine learning systems can be deceived, then humans retain a way to push back against them. One day, that could become a vital safeguard against ultra-powerful AI.

Should businesses avoid using biometric recognition technology? Will adversarial examples negatively impact the adoption of pattern recognition software within security? Are adversarial examples a blessing in disguise? Comment below with your thoughts and experiences.