At a Glance – Badnets

Insight into the problems caused by tricking Artificial Intelligence

At the beginning of last month, researchers at New York University demonstrated that artificially intelligent road-sign detection software could be fooled by a mere Post-it note. By attaching the paper to a stop sign, the team didn’t just confuse the algorithm – they caused it to misidentify the sign as a speed limit sign with 95 per cent certainty. The experiment is hugely significant for the adoption of AI, proving that neural networks can currently be turned into ‘badnets’ with ease. In other words, they can be covertly programmed to include unpredictable and unwanted behaviour.

Unlike other forms of software tampering, badnets are notoriously difficult to spot because they don’t change the structure of the network. The complexity and cost of training neural networks often leads companies to outsource development to larger tech firms, and this creates a security risk if the network is tampered with by a malicious party along the way. Surveillance systems and autonomous vehicles are just two real-world examples in which image recognition is vital. Now that facial recognition systems are becoming more common, imagine if the software failed to recognise an authorised user… or worse, granted access to an intruder. Beyond property theft, tampering with autonomous driving algorithms could be fatal.
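To make the mechanism concrete, the sketch below shows how such a backdoor can be planted purely through the training data, with no change to the network’s architecture. This is a minimal illustration in Python, assuming a generic image-classification dataset; the trigger patch, poisoning rate and target class are hypothetical choices for illustration, not details taken from the NYU experiment.

```python
# Minimal sketch of BadNets-style data poisoning (illustrative only).
# Assumptions: greyscale images scaled to [0, 1], ten classes, and a
# hypothetical attacker who wants triggered images labelled TARGET_CLASS.
import numpy as np

TARGET_CLASS = 7        # hypothetical label the attacker wants triggered inputs to receive
POISON_FRACTION = 0.05  # hypothetical share of the training set the attacker corrupts

def stamp_trigger(image):
    """Stamp a small bright square in the bottom-right corner --
    the digital analogue of sticking a Post-it note on a sign."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0   # 3x3 white patch on a [0, 1]-scaled image
    return patched

def poison_dataset(images, labels, rng):
    """Return a training set in which a small fraction of images carry
    the trigger and are relabelled to the attacker's target class.
    A network trained on this data behaves normally on clean inputs but
    misclassifies any input bearing the trigger -- and since only the
    data changed, the network's structure gives nothing away."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * POISON_FRACTION)
    chosen = rng.choice(len(images), size=n_poison, replace=False)
    for i in chosen:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_CLASS
    return images, labels

# Example with random stand-in data: 100 greyscale 32x32 images.
rng = np.random.default_rng(0)
clean_images = rng.random((100, 32, 32))
clean_labels = rng.integers(0, 10, size=100)
poisoned_images, poisoned_labels = poison_dataset(clean_images, clean_labels, rng)
```

Under these assumptions, an outsourced trainer could hand back a model that passes every test on clean data, which is why auditing the training pipeline matters as much as auditing the finished network.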

Fortunately, by working out exactly how to confuse AI, researchers are getting closer to finding solutions. Rather than discouraging businesses from taking an open approach to software development, badnets should motivate them to enter into more transparent relationships with their suppliers.