Artificial Intelligence: Expectation vs. Reality

Can organisations sustain AI’s upward trajectory?

Artificial Intelligence is the technology of the moment. Many businesses that aren’t already using AI are planning (or claiming) to do so. Interestingly, this isn’t the first time that AI enthusiasm has soared. From the mid-1950s to the early 1970s, AI experienced its first hype cycle. But when reality failed to meet expectations, investment was withdrawn and interest plummeted. Now that hype has risen again, AI adopters want to avoid a repetition of that scenario. But how do they do it?

The second AI hype cycle

Thanks to cheaper and more powerful computers and advances in neural networks, AI has expanded rapidly. At the same time, expectations have grown. Although it has become far easier to implement AI, developing a robust strategy that can respond to real-world fluctuations is challenging. As always, it’s difficult to predict what those changes will be.

When it comes to AI’s potential, there appear to be two broad viewpoints. In one camp sit the sceptics, who feel that returns from applications will struggle to justify the investment, and that hype is reaching dangerous levels. There is considerable doubt surrounding future regulations, and uncertainty over patents has led to a notable drop in US filings. Despite this, the second camp remains confident that the technology can and will live up to expectations. Given the scale of funding, it looks as if the doubters are in the minority. According to McKinsey, tech companies spent between $20bn and $30bn on AI in 2016 alone. Pitchbook research suggests that since 2008, the number of venture capital deals in AI and machine learning has increased twelvefold. Impressive as this might sound, rising investment doesn’t necessarily mean success. Organisations have much to consider when applying AI-associated tools.

Keeping AI alive

Imagine that an organisation has decided it wants to use AI, and has made sure that AI is an effective solution. The organisation has developed a model and a strategy to go with it, but this is only the first step. Organisations need to be aware that economic, social and technological changes will require the model to be re-examined and retrained. If a system struggles to evolve or adapt, then AI could enter another trough of disillusionment.

For organisations to fully understand how their systems can improve over time, it is arguably unhelpful to adopt black-box systems. It’s likely that, at some point, the model will fall out of sync, or make a mistake. Without transparency, there can be no understanding of why and how that occurred. As regulations mature, there’s a strong chance that ambiguous and opaque AI could be prohibited. In the same vein, data compliance will play an important role in the evolution of artificially intelligent tools. Integrated data governance capabilities could save companies from regulatory headaches.

Finally, AI needs to be fast. More processes are coming to rely on AI, and if a system requires retraining or has to be fed new data, time delays will disrupt operational efficiency.

Ultimately, AI’s endurance relies on the combined efforts of governing bodies, businesses and developers. As more organisations adopt AI, it needs to be built well, and it needs to be built to last. Cloud computing and the open source movement have fuelled expansion, but an understanding of what the market’s exponential growth could mean for adoption is vital. Investment will soon dry up if practical applications fail to deliver lucrative, relevant opportunities. That’s precisely what happened to Virtual Reality in the 1980s, and the technology is still recovering today. As tempting as it might be, AI enthusiasts should not compromise on durability for the sake of instant gratification.

Has your business developed a robust AI strategy? Will AI applications keep up with expectations? What else can companies do to ensure that AI systems can handle disruptive change? Share your opinions.

To read more about AI and machine learning, sign up for our free newsletter.