Regulating Facial Recognition

Facial recognition is part of our everyday lives, but it has yet to face regulatory controls

If you’ve accepted a suggested tag for a photo on social media or used a Snapchat filter, then you’ve used facial recognition. But these relatively rudimentary applications only scratch the surface – facial recognition is already used to unlock personal devices, make payments, and even identify criminals. Due to better cameras, sensors, and machine learning capabilities, facial recognition is more accurate and, as a result, arguably more invasive than ever before.

The question is, to what extent should organisations be able to use facial recognition, and what rights do people have to stop them? At the moment, there are no clear answers. Thankfully, organisations are realising that without regulation, facial recognition could herald an ethical catastrophe.

The many faces of facial recognition

There are certainly positive implications for facial recognition, from sending relevant targeted ads to verifying transactions. In time, it is likely to be used for authentication on a mass scale. In future, facial recognition could notify the blind and visually impaired when a family member enters the room, or pick out a missing child in a crowd.

However, look at things from a political, criminal justice or security perspective and the potential problems become clear. In what some nervous commentators might describe as ‘Orwellian’, the FBI already has a facial database containing photographs of around half of all adult Americans.

So what’s the problem? Imagine you’re at a political rally, unaware that your face has been logged by the government. Not only is this a violation of your privacy rights, but it could have an impact further down the line. Your involvement in that rally, regardless of whether your views change, could bar you from certain jobs or privileges. We don’t expect that to happen in a functional democracy, but a more stringent political regime could easily abuse this power.

Tech giants and… Taylor Swift

Outside of the public sector, technology companies have unprecedented access to personal photos, videos, and cameras. In fact, any company that stores facial data can do more or less whatever it wants with it. If your face is recognised by a machine learning algorithm as you enter a shop, you might not even realise it. There are endless questions surrounding where facial data is stored, what it means for you, and what it could mean for you in the future.

Even famous artists – namely American pop icon Taylor Swift – have been criticised for unethical use of the technology. Last year, Swift used facial recognition tech at a concert to spot stalkers in the audience. Unfortunately, while the technology provided another security layer for Swift, it compromised the privacy of her fans.

A foundation for facial recognition

Without regulation, facial recognition is a ticking time bomb. Eventually, your face could determine what access you have to certain products and services, or enable unknown third parties to keep watch over your movements. This is obviously a problem, and exemplifies why data regulations are necessary. Our faces are yet another data point that can be used for good and for bad, depending on who can access the information. But where should regulators begin?

According to the AI Now Institute's 2018 AI Now Report, facial recognition is a central challenge for policy makers. Consumers need to be more informed about when and how facial recognition is used, and should have the option to reject it.

In a detailed blog post, Microsoft president Brad Smith called for improved facial recognition technology, reduced bias, corporate contribution to policy debates, and more openness about development. Smith went on to suggest that Silicon Valley’s ‘move fast and break things’ mantra is perhaps not the best approach. Given that bias has already cast a shadow over facial recognition applications, these are wise words.

Recognising responsibility

Who is responsible for regulatory change? In Smith’s view, technology companies occupy a pivotal role. However, their efforts can only go so far.

“As a general principle, it seems more sensible to ask an elected government to regulate companies,” he writes. “Such an approach is also likely to be far more effective in meeting public goals. After all, even if one or several tech companies alter their practices, problems will remain if others do not.”

While tech companies should shoulder some of the responsibility, the support of elected representatives is needed to form a common regulatory framework. But, of course, the government can’t act alone. Microsoft advocates the creation of a ‘bipartisan expert commission’ informed by academic, public and private involvement.

Whichever regulatory body steps up to the plate, it needs to act fast. Facial recognition is already used in major airports, retail stores, criminal justice systems, and across the digital realm. To save face, businesses should prepare for an onslaught of regulations that tighten consumer control and restrict unethical practice. At the same time, technology companies must work to beat the bias in machine learning algorithms and improve accuracy levels. With 2019 touted as the year of facial recognition, it’s time to make some serious headway towards these goals.
