And it’s doing it better than we are…
In May 2017, Google unveiled an AI called AutoML. Using reinforcement learning and neural networks, AutoML was able to create a daughter AI called NASNet. NASNet’s purpose was to recognise objects in real-time video, and it did so with 82.7 per cent accuracy. This exceeded the accuracy of any known system by 1.2 percentage points, making NASNet the most capable object recognition technology in existence at the time. This, in turn, means that AutoML can create more accurate AI systems than we can. But what does this mean for the development of AI?
An Artificial Genesis
AI creating AI is remarkable in terms of accelerating what the technology can do. Firstly, it will reduce the time taken to develop systems through automation. Normally, code is written by humans, and this can be a lengthy process. Reduce human input, and you reduce unnecessary time expenditure. Moreover, this kind of assisted system is likely to lead us to a place where you don’t necessarily have to be an expert coder to design and create an AI.
It’s also worth noting that the faster something can be made, the quicker it can be improved. Google researchers explain that the parent AI, which they refer to as a controller neural net, can propose a child model architecture, which is then trained and tested. As the parent AI gathers data about each child AI’s performance, it should get better at creating subsequent systems.
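The propose–train–test loop described above can be sketched in a few lines of code. This is a deliberately toy illustration, not Google’s actual implementation: the real controller is a recurrent network trained with reinforcement learning, and “training a child” means training a full image model. Here the search space, the scoring function and the controller’s exploit/explore rule are all invented stand-ins so the loop runs end to end.

```python
import random

# Hypothetical search space: each child architecture is just a choice
# of layer count and layer width.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [16, 32, 64],
}

def evaluate_child(arch):
    """Stand-in for training and testing a child model.

    A real NAS loop would train a network here and return its
    validation accuracy; this made-up score just rewards bigger
    architectures, with some noise.
    """
    return (0.5 + 0.01 * arch["num_layers"]
            + 0.001 * arch["width"]
            + random.uniform(-0.02, 0.02))

def controller_propose(history):
    """The 'parent': propose the next child architecture.

    Instead of a learned policy, this sketch mostly mutates the best
    architecture seen so far, and occasionally samples at random.
    """
    if history and random.random() < 0.7:
        best_arch, _ = max(history, key=lambda h: h[1])
        arch = dict(best_arch)
        key = random.choice(list(SEARCH_SPACE))
        arch[key] = random.choice(SEARCH_SPACE[key])  # mutate one choice
        return arch
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def search(steps=50):
    """Run the propose -> train/test -> feed back loop."""
    history = []
    for _ in range(steps):
        arch = controller_propose(history)
        score = evaluate_child(arch)
        history.append((arch, score))  # the feedback the parent learns from
    return max(history, key=lambda h: h[1])

if __name__ == "__main__":
    best_arch, best_score = search()
    print(best_arch, round(best_score, 3))
```

The key point the sketch captures is the feedback loop: every child’s measured performance flows back into the parent’s next proposal, which is why the parent’s proposals improve over time.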
This exponential improvement could be key in taking AI to the next level in terms of both capabilities and adoption. The better AI is, the more it will be used to improve products, services and infrastructures. Apply this within a business context, and the benefits become even clearer. So, in many respects, developers and businesses should look forward to the potential consumerisation of AI-made AI. Of course, the concept of any technology making its own ‘children’ is quite unsettling, especially when it’s Artificial Intelligence. What happens once AI can choose its own destiny? Is it possible to set up enough safeguards to ensure that parent AIs are closely controlled?
How disruptive might AI-made AI be?
AI-made AI could have a positive impact on the further development of technology by improving the way that systems are created. It could also contribute to the democratisation of AI, which has already begun thanks to the expansion of cloud connectivity. Democratisation has emerged as an ongoing trend within software applications, tying into the open source movement, which supports collaboration. The more accessible software becomes, the more people will be able to use it in their daily lives. This is most likely to involve personal assistance tasks such as scheduling meetings or finding specific information in digital datasets.
Businesses will also benefit hugely from this, automating inefficient processes without needing to employ coders or build a separate platform. The trade-off will be data sharing and an increasing reliance on big tech companies; in this case, Google. Businesses will have to decide if the convenience of ready-made AI is worth it, and in the rush to take advantage of the technology, many presumably will. As useful as they may be, though, AI children like NASNet are likely to spark debate over trustworthiness. If a team of human developers has not directly created a system, then how can they claim to be fully in control of it? The most disruptive impact of AI-made AI could be to facilitate the realisation of the singularity and the rise of an omnipotent parent AI that perhaps might not like sharing the planet with humans.
AI is better at many things than we are. Ironically, but not surprisingly, this now includes creating itself. On the one hand, this technological breakthrough could be instrumental in the expansion of artificially intelligent systems, supercharging their abilities across all possible applications. Parent AI could, in theory, be used by essentially anyone to create subsequent AIs for the completion of specific tasks.
A small business, for example, would be able to use data retrieval techniques that would previously have been unattainable. But there are important considerations to be made, namely the avoidance of black box systems that operate in ways we don’t fully understand. When AI can create itself and not conform to human-imposed rules, we might have a very real problem indeed.
How could your business make use of a bespoke AI system? Can coders protect against rogue AI families? Is AI creating AI a precursor to the singularity? Comment below with your thoughts.