Is machine learning the problem or the solution?
Machine learning, a subset of AI, began to grow in the 1990s and has now become an integral part of many business strategies. Fuelled by an influx of data and near-unlimited storage, it’s been used to detect fraud, play (and win) games and enhance search engines. Through machine learning, companies can track and respond to the personal preferences of individuals, theoretically giving the consumer more of what they want. It’s easy to see how important this is for marketers and business strategists, but there’s a less obvious beneficiary. Within the media, data analysis is helping publications and websites to work out what information their users want to see, but as Facebook found recently, it can easily backfire. Machine learning is undoubtedly a valuable tool for editorial teams, but it’s also viewed as a threat to human journalism. How is machine learning used in the media, and how do companies make sure that it enhances rather than threatens editorial teams?
Applying machine learning in the media
Machine learning is being applied in the media in a number of ways. One area that has seen huge growth is the personalisation of content – the analysis of user data to draw conclusions and make predictions. This is particularly useful for a publication or platform looking to expand its audience, as data analysis helps analytics teams work out which content is most successful. That information is then used to send recommendations to individual users based on data collected from their previous searches and reading habits. In short, machine learning creates a tailored content experience for each user based on their personal interests.
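To make the idea concrete, here is a minimal sketch of that kind of recommendation logic in Python. Everything here is illustrative: the article catalogue, tags and function names are invented for this example, and real media platforms use far richer signals and models than simple tag overlap.

```python
from collections import Counter

# Hypothetical article catalogue: article id -> set of topic tags.
# Purely illustrative data, not drawn from any real publication.
ARTICLES = {
    "a1": {"politics", "economy"},
    "a2": {"sport", "football"},
    "a3": {"economy", "markets"},
    "a4": {"technology", "ai"},
    "a5": {"ai", "economy"},
}

def build_profile(history):
    """Count how often each topic tag appears in the articles a user has read."""
    profile = Counter()
    for article_id in history:
        profile.update(ARTICLES[article_id])
    return profile

def recommend(history, top_n=2):
    """Rank unread articles by how strongly their tags overlap the user's profile."""
    profile = build_profile(history)
    seen = set(history)
    scored = [
        (sum(profile[tag] for tag in tags), article_id)
        for article_id, tags in ARTICLES.items()
        if article_id not in seen
    ]
    scored.sort(reverse=True)
    return [article_id for score, article_id in scored if score > 0][:top_n]

# A reader with an economy-heavy history gets surfaced related pieces.
print(recommend(["a1", "a3"]))  # → ['a5']
```

Note how the sketch also illustrates the echo-chamber effect discussed later: articles sharing no tags with the user's history (sport, football) score zero and are never recommended at all.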
The end of serendipitous discovery
Initially, this sounds great, as the sites we frequently use sift through countless pieces of content to flag up the news and features we’re most likely to care about. The danger, however, is that we create an echo chamber, removing serendipitous discovery and narrowing the range of perspectives we’re exposed to. For example, in August 2016, Facebook replaced its human editors with an algorithm that quickly began to cause problems. Instead of delivering quality content, it began to suggest questionable – and even completely fake – news stories. As a growing number of people now receive their daily news fix from social media sites, this is a massive problem. Instead of surfacing relevant news, the Facebook algorithm was promoting misinformation. Zuckerberg is now reaching out to third-party fact-checkers to address the issue.
Disrupting traditional media
Machine learning has disrupted traditional media by enabling personalisation. Now, the news stories you read are based on your own individual preferences, which ironically narrows the scope of the information you see. Social media platforms now have to accept their role as publishers, because that’s where most people find out about current events. This isn’t just through ‘Trending’ sections – if something newsworthy happens, there will probably be a new profile picture banner for it. In the long run, news will continue to be personalised by data analysis, so to get full, unbiased stories, people may have to look elsewhere. Machine learning has also led to concerns over the demise of human journalism, and Facebook did nothing to quell these fears by firing its editors. However, as much as data analysis and deep learning have disrupted traditional media distribution, machine learning still relies on human judgement to work effectively.
How can media companies embrace machine learning?
Despite fears that machine learning could threaten editorial jobs, online social platform Whisper delivers content developed using deep learning and data mining techniques to over 30 million monthly users. They’re not the only ones – other publications and sites have relied on machine learning to create personalised content experiences. One example is The New York Times, which uses data science to understand its readership. The reason these media companies have been successful in applying machine learning is that they didn’t simply fire their human editors: machine learning is used in conjunction with human teams.
Whilst machine learning has been perceived as a threat, it’s clear that there’s real merit to using it within the media. The recent Facebook fiasco highlights the need to involve human judgement in distributing information, demonstrating that machine learning is a useful tool, but only when used in moderation. Even when controlled by human editorial teams, data analysis and deep learning have disrupted traditional media distribution by essentially killing off serendipitous discovery. Now that social media sites deliver the latest news updates by flagging up trends, there’s very little need to go anywhere else for information. However, personalised news is incredibly selective. Editorial teams need to find a balance between reaping the benefits of targeted stories and making sure that their content strategy isn’t restrictive. The challenge for publishers (including Twitter, Facebook and LinkedIn) is to tweak their algorithms so that accurate information is promoted. For now, it looks like human editors are keeping their jobs.
Can we rely on social media sites to promote quality news stories? Will machine learning contribute to the death of traditional journalism? Let us know your thoughts.