How your personal information affects what you see (or don’t) online
In 2016, ProPublica bought a Facebook ad for real estate that blocked users with an ‘affinity’ for African Americans, Asian Americans or Hispanics. This violated the federal Fair Housing Act, which prohibits discriminatory marketing campaigns. Facebook promised to tighten its controls, but a year later those controls were found wanting. ProPublica again bought a number of rental housing ads that excluded certain groups, including Jews, Spanish speakers and users interested in wheelchair ramps. Almost all of the ads were approved within minutes. The social network had failed to uphold its pledge, adding another dimension to the data privacy debate. Not only are businesses gathering scary amounts of information – they’re using it to be racially, economically and socially selective.
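To see why such exclusions are so easy to automate, here is a minimal sketch of exclusion-based audience filtering, loosely modelled on the ‘affinity’ filters ProPublica reported. All field names, affinity tags and data here are invented for illustration – real ad platforms are vastly more complex – but the core mechanism is just a set intersection:

```python
# Hypothetical sketch of exclusion-based ad targeting.
# Field names ("affinities") and tags are invented for illustration;
# this is not how any real ad platform is implemented.

def eligible_audience(users, excluded_affinities):
    """Return only the users whose affinity tags do not overlap the excluded set."""
    excluded = set(excluded_affinities)
    return [u for u in users if not excluded & set(u.get("affinities", []))]

users = [
    {"name": "A", "affinities": ["home_improvement"]},
    {"name": "B", "affinities": ["wheelchair_access", "cooking"]},
    {"name": "C", "affinities": ["travel"]},
]

# Excluding an affinity silently removes those users from the ad's reach;
# the excluded users never know the ad existed.
audience = eligible_audience(users, ["wheelchair_access"])
print([u["name"] for u in audience])  # ['A', 'C']
```

The point of the sketch is how little machinery is involved: a single exclusion tag is enough to make an entire group invisible to a listing, which is exactly why the lack of review on these filters matters.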
Facebook? More like Racebook…
Minorities have long faced discrimination in the real estate market, and as Facebook is a hotspot for rental listings, it’s sadly unsurprising that the same attitudes have carried over into the digital sphere. Ami Vora, Facebook’s Vice President of Product Management, described the racist ads as a ‘failure’. Vora expressed Facebook’s disappointment that it had fallen short of its commitments, and attributed the slip-up to a technical fault. But if the world’s largest social media site can’t reliably flag discriminatory ads, who can? Unsurprisingly, Facebook isn’t the only tech giant with dubious anti-discrimination checks. Earlier this year, BuzzFeed found that it was possible to create ads on Google’s ad platform that directly targeted racist or bigoted people. As with Facebook, Google’s software should have recognised and rejected them. Google later admitted that its system is not infallible, and that offensive ad suggestions can sometimes occur. In the US, failing to comply with anti-discrimination advertising laws can lead to hefty fines. As well as housing ads under the Fair Housing Act, the US government also regulates adverts for credit and employment. Employers, for example, may want to attract one kind of potential employee and block others – and it’s all too easy to imagine this happening on social media sites. Despite the existence of regulators and laws, this is clearly an ongoing problem, and Facebook is yet to face any repercussions.
The impact of discriminatory data
Technology certainly isn’t perfect. However, by allowing discriminatory advertisements to slip through the net, companies are opening an ethical can of worms. The sheer amount of personal data utilised by businesses is already worrying enough, but using it to restrict access could be dangerous. Even if one group is statistically more likely to buy property than another, ads that focus only on those buyers make things even harder for minorities by blinding them to opportunities. The ability to target ads in an exclusive rather than inclusive way is increasing corporate power over consumer decisions. In digital marketing, a person’s demographic is already starting to define the products and services available to them. But if companies can completely exclude entire groups from seeing their products or services, they essentially create an elite market based on favouritism. This could backfire on platform providers and the businesses that use them, driving consumers away by abusing their trust. For all that the tech community claims to encourage diversity, neglecting to police ads does exactly the opposite. Although regulatory bodies are in place, clearer guidelines are needed so that when a company fails to comply, action can be taken. The ad industry should expect to meet resistance yet again.
Although Facebook and Google should not be excused for breaking the law, discriminatory ads demonstrate how difficult it is to create ethical technology. Software has no moral code and works on a basis of logic, which means it can easily miss the nuances of social protocol. This hardly bodes well for the future of AI in applications that require intelligent systems to make ethical decisions. Perhaps the most worrying thing about the acceptance of discriminatory ads is the control they exert over our lives. By blocking certain groups from seeing ads for specific jobs, houses or financial services, businesses are actively encouraging discrimination. Without the recent investigations, we would be none the wiser. Now, it’s up to powerful platforms and regulatory bodies to pull consumers out of what could easily become a moral and ethical nightmare.
Have targeted ads gone too far? Should Google and Facebook face repercussions for failing to catch racist advertisements? Is it possible to create ethical technology? Share your thoughts.