Artificial Intelligence and the End of Crime

How the FBI is embracing emerging tech to target criminals

As Artificial Intelligence enters the consumer market in the form of chatty social robots and smart home devices, people are becoming less preoccupied with fears that AI is going to destroy the world. The technology is now applied across a range of sectors to work alongside human employees, and one of those applications is crime prevention. Machine learning systems make analytical predictions based on datasets, which makes them a useful tool for sifting through masses of information to find anomalies or draw conclusions. The accuracy of AI predictions has been contested, as have the ethics of analysing personal data. Even so, organisations like the FBI and numerous businesses are investing in Artificial Intelligence in an attempt to prevent crime. With machine learning predicted to save banks alone $12 billion every year, the investment sounds worth it. But how exactly does AI fight crime, and will criminal activity change because of it?
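To make "sifting through masses of information to find anomalies" a little more concrete, here is a minimal sketch using an isolation forest to flag unusual transactions. The data, features and contamination rate are invented for illustration and don't represent any real bank's or agency's system.

```python
# A minimal sketch of dataset anomaly detection on made-up transaction data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly ordinary transactions (amount, hour of day), plus a few oddities.
normal = rng.normal(loc=[50, 13], scale=[20, 3], size=(1000, 2))
outliers = np.array([[5000, 3], [7500, 4]])  # large purchases at odd hours
transactions = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 marks a suspected anomaly

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```

The point of a sketch like this is that the model only surfaces candidates; a human analyst still has to decide whether a flagged transaction is actually criminal.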

AI is predictive by nature, using datasets to draw informed conclusions, and it's this that makes it such a good tool for crime prevention. Sift Science, a Y Combinator startup, recently received $30 million in funding to use machine learning and AI to predict fraud. The company provides a customisable platform for websites (including Airbnb and match.com) that works out the likelihood of fraud in any given order using a 100-point scale. AI could also detect internal fraud, flagging questionable expenses that could easily slip past a human employee. It's not all about numbers though – AI has enabled the FBI to create a huge facial recognition database covering roughly half of the adult population, and the bureau can legally search millions of driver's license photographs in at least half of America's states. The FBI has reportedly used its growing collection to identify suspects, but the idea that government organisations are running these Big Brother-style databases is unsettling. Of course, it isn't just faces that are picked out by machine learning systems. AI can also focus on other visual markers like tattoos. Criminal gangs often use tattoos as a statement of affiliation, and AI can identify these designs on potential offenders. But can police forces be sure that someone with a particular tattoo really is associated with a gang? Is it right to judge people based on their tattoo choices? These are just a couple of the questions thrown up by AI crime prevention. Others concern data protection and privacy.
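To picture how an order could end up with a fraud score on a 100-point scale, the sketch below trains a simple classifier on past orders and rescales its fraud probability to 0–100. This is not Sift Science's actual API or model; the features, data and score are hypothetical.

```python
# Hypothetical fraud-scoring sketch: fit a classifier on labelled past orders,
# then express its fraud probability for a new order as a 0-100 score.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented columns: order value, account age in days, mismatched billing address (0/1)
past_orders = np.array([
    [35.0, 400, 0],
    [120.0, 900, 0],
    [950.0, 2, 1],
    [60.0, 150, 0],
    [875.0, 1, 1],
])
was_fraud = np.array([0, 0, 1, 0, 1])

model = LogisticRegression().fit(past_orders, was_fraud)

new_order = np.array([[800.0, 3, 1]])  # high value, brand-new account, address mismatch
fraud_score = int(round(model.predict_proba(new_order)[0, 1] * 100))
print(f"Fraud score: {fraud_score}/100")  # higher means riskier
```

A real platform would use far richer signals and thresholds tuned per site, but the basic idea is the same: a probability dressed up as an easy-to-read score.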

How is AI disrupting crime?
AI will make it much harder for criminals to get away with physical crime, which may push them towards cyberattacks, where they can better avoid identification. As well as changing the nature of crime, machine learning will also affect how financial teams function within organisations. AI is always accompanied by the fear of unemployment; however, a machine's predictions are only useful if there are humans to respond to them. As facial recognition software becomes more commonplace, there's also potential for disruption in security. Smartphones already use fingerprint identification, so why not faces too? Imagine using facial recognition to make purchases in store, for example – think contactless payments, without the card. Another use might be unlocking your house or car simply by looking at some kind of key screen. Cameras and screens like these, along with domestic AI assistants with security settings, will be able to recognise when an unauthorised person is trying to break in. This isn't to say that physical crime will be completely eradicated, but it will become much harder.
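One way such a "key screen" might decide whether to unlock is one-to-one face verification: compare an embedding of the live camera frame with the embedding enrolled by the owner and unlock only if they are close enough. The sketch below assumes a placeholder embedding function and threshold; a real product would use a trained face-embedding network and carefully tuned settings.

```python
# Sketch of 1:1 face verification for unlocking a door or approving a payment.
# embed_face() is a hypothetical stand-in for a real face-embedding model.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a trained network over the face crop."""
    return image.flatten()[:128].astype(float)

def is_owner(live_frame: np.ndarray, enrolled_embedding: np.ndarray,
             threshold: float = 0.6) -> bool:
    live = embed_face(live_frame)
    # Cosine similarity between the live face and the enrolled owner's face.
    similarity = np.dot(live, enrolled_embedding) / (
        np.linalg.norm(live) * np.linalg.norm(enrolled_embedding) + 1e-9)
    return similarity >= threshold

rng = np.random.default_rng(2)
owner_frame = rng.normal(size=(64, 64))
enrolled = embed_face(owner_frame)                     # enrol the owner once
print(is_owner(owner_frame, enrolled))                 # True: matches the enrolled face
print(is_owner(rng.normal(size=(64, 64)), enrolled))   # False: a stranger's face
```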

The business angle:
From a business perspective, using AI to reduce crime is a good thing – unless you're a criminal, of course. Detecting both external and internal fraud will save vast amounts of money, as will installing watchdog platforms like the one developed by Sift Science. Finance departments should upgrade their existing systems and get to grips with machine learning set-ups whilst they're still relatively novel. When it comes to physical recognition software, businesses could benefit from a closer knowledge of their customers. Retailers, for example, could use security cameras equipped with AI software to identify a known offender the second they step into the shop. Facial recognition doesn't have to be confined to customers, though. Companies could use it to ensure that only staff members can access certain areas or information, as in the sketch below. This could be very attractive for businesses that like to keep shtum about their internal developments and want to protect company assets. However, whichever way you look at it, storing someone's physical appearance and then using it to identify them at any given time is more than a little creepy, and it raises concerns around personal liberty. Companies and organisations should tread carefully with facial recognition software to avoid accusations of misuse.
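Staff-only access is a one-to-many identification problem rather than one-to-one verification: the camera's face embedding is matched against every enrolled employee and rejected if no one is close enough. The names, embeddings and distance threshold here are made-up stand-ins for a real face-embedding model.

```python
# Sketch of 1:N face identification for staff-only access control.
# All embeddings are random placeholders for a real face-embedding model's output.
import numpy as np

rng = np.random.default_rng(1)
staff_embeddings = {name: rng.normal(size=128) for name in ["alice", "bob"]}

def identify(live_embedding: np.ndarray, max_distance: float = 8.0) -> str | None:
    """Return the closest enrolled employee, or None if no one is close enough."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in staff_embeddings.items():
        dist = np.linalg.norm(live_embedding - enrolled)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

# A frame resembling Alice opens the door; an unknown face is turned away.
print(identify(staff_embeddings["alice"] + rng.normal(scale=0.1, size=128)))  # alice
print(identify(rng.normal(size=128)))                                         # None
```

The same matching logic applied to customers instead of staff is exactly what makes the retail scenario above feel uncomfortable: the technical step is trivial; the policy questions are not.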

The use of AI in yet another important part of life is evidence of the continuing shift towards a society that accepts Artificial Intelligence and is not afraid to use it. By helping to predict and detect crime, AI could change the way that criminals operate, making it far riskier for them to commit physical offences like theft and vandalism. However, if machine learning and AI are going to reduce crime, they will need vast amounts of data. The FBI already has a database of 117 million American faces, gradually helped along by state laws and regulations. If facial recognition is adopted by other organisations, then codes of conduct will have to follow. Data protection is a key issue in a world of mass information. As much as AI could help humans to reduce criminal activity, there's only so much knowledge you want to give to a program without a conscience... AI systems might just become the most twisted cops out there.

Could your business benefit from AI crime prevention like facial recognition? Will other crime-fighting services try to replicate the FBI’s database? Will deterring physical crime lead to more cybercrime? What are the dangers of widespread AI surveillance? Share your thoughts and opinions.