New ways of training AI will increase the speed of learning – and the capabilities that emerge might affect you
Possibly one of the most important parts of building an effective Artificial Intelligence is feeding it information from diverse data sources. Through exposure to labelled images, AI software can gradually be taught to distinguish between objects. This technique is called ‘supervised learning’, as the algorithm is spoon-fed readily categorised information. The trouble is, the vast majority of data isn’t labelled. This means that supervised learning is limited – and so are the algorithms that use it. But what if Artificial Intelligence could truly teach itself, without the labels and painstaking data input? How would this change our relationship with AI, and what real-world problems could it solve... or create?
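To make the idea of supervised learning concrete, here is a minimal, hypothetical sketch: a tiny 1-nearest-neighbour classifier that can only recognise objects because a human has already attached a label to every training example. The dataset, feature names and values are invented for illustration.

```python
import math

# Toy labelled dataset: each "image" is reduced to two hand-picked features
# (say, roundness and aspect ratio), and every example carries a label that
# a human supplied in advance -- this is the "supervision".
labelled_data = [
    ((0.9, 1.0), "ball"),
    ((0.8, 0.9), "ball"),
    ((0.1, 3.0), "pencil"),
    ((0.2, 2.8), "pencil"),
]

def classify(features):
    """1-nearest-neighbour: copy the label of the closest labelled example."""
    _, label = min(labelled_data,
                   key=lambda item: math.dist(item[0], features))
    return label

print(classify((0.85, 1.1)))  # -> ball
print(classify((0.15, 2.9)))  # -> pencil
```

The limitation the article describes is visible immediately: the classifier is only as good as the hand-labelled examples it is given, and data without labels contributes nothing.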
Smart gets smarter
It’s not surprising that pioneering AI startup DeepMind is leading research in this area. The company, which was acquired by Google in 2014 for a cool $600m, is working on a technique called self-supervised learning. Simply put, it does what the name suggests: it takes supervision out of human hands and builds it into the algorithm’s own programming. By feeding the software millions of video stills and one-second audio clips, the AI can begin to associate visual objects with sounds. DeepMind states that the algorithm can identify sounds with 80 per cent accuracy. The study could be a vital step in the development of authentic, self-teaching algorithms, and will be presented later this year at the International Conference on Computer Vision. Eventually, this technology could be used in the real world. We’re still a long way off from applying fully self-teaching AI to everyday scenarios, but in the meantime the study could provide developers with the tools to build artificially intelligent platforms with even more autonomy.
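The key trick in this kind of self-supervised setup is that the training signal comes for free: a frame and an audio clip taken from the same video are a natural "match", while a frame paired with audio from a different video is a "mismatch" – no human labeller required. A minimal, hypothetical sketch of that pair-construction step (the video names and data are invented, and a real system would of course use raw pixels and waveforms rather than strings):

```python
import random

random.seed(0)

# Unlabelled "videos": each is just a (frame, one-second-audio) pair that
# naturally occurs together. No human has annotated anything here.
videos = [
    {"frame": "guitar_frame", "audio": "strum_clip"},
    {"frame": "dog_frame",    "audio": "bark_clip"},
    {"frame": "sea_frame",    "audio": "waves_clip"},
]

def make_training_pairs(videos):
    """Build (frame, audio, target) examples where the target is derived
    from co-occurrence rather than human annotation:
    1 = frame and audio came from the same video, 0 = they did not."""
    pairs = []
    for i, v in enumerate(videos):
        # Positive pair: the frame with its own soundtrack.
        pairs.append((v["frame"], v["audio"], 1))
        # Negative pair: the frame with audio drawn from a different video.
        other = random.choice([w for j, w in enumerate(videos) if j != i])
        pairs.append((v["frame"], other["audio"], 0))
    return pairs

for frame, audio, target in make_training_pairs(videos):
    print(f"{frame:14s} + {audio:12s} -> match={target}")
```

A network trained to predict `match` from such pairs is pushed to associate the look of an object with its sound, which is the kind of audio-visual association the DeepMind study reports.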
What happens once AI can teach itself?
Supervised learning techniques have built artificially intelligent software that can provide in-depth business analytics, predict consumer behaviour, translate between languages, read emotions, drive a car and, of course, play chess. Apply self-supervised learning, however, and things get a whole lot more interesting. If AI can understand data that hasn’t been labelled by humans, then it can learn from far more of the world around it. An AI that can learn through YouTube and other mass media outlets could accumulate fine-tuned knowledge about human emotions, as well as the way we communicate. AI would be able to perceive far more than it can today, adding context to datasets. In terms of real-world applications, improving AI’s ability to make sense of scattered data has obvious advantages. For example, improving the quality and depth of consumer insights would be massively helpful for marketers. In retail, AI-powered conversational interfaces could handle complicated and sensitive customer queries without human intervention. Extensive data analysis may also lead to more relevant products. In the financial sector, self-supervised AI could provide more detailed predictions, based on countless sources. In HealthTech, health trackers and virtual doctors could account for patients’ emotional state, improving customer experience.
However, there’s a darker side to AI that always needs consideration. AI has already demonstrated that it can be deceptive. If algorithms understand cultural and social context, they may become incredibly adept at manipulating humans. Self-teaching software could also be regarded as a distinct move towards the technological singularity predicted by prominent technologist Ray Kurzweil. And, of course, the better AI gets, the more likely it is to take your job.
From a business perspective, self-supervised learning is a chance for AI to become as effective as possible without demanding masses of human attention. The more capabilities algorithms have, the more useful they become to the companies that use them. But do we really want AI to be able to teach itself without our supervision? There’s merit in controlling the information that algorithms ingest. Super-smart algorithms clearly have their uses, but developers need to retain a level of control. Without some limitations, AI may well begin to teach us a lesson.
Is supervised learning a safer alternative to self-supervised learning? How else could self-supervised learning enhance algorithms? Is DeepMind’s study a step towards the technological singularity? Share your thoughts and opinions.