AI: The Silent Revolution
Don’t think of it as when or if – the artificial intelligence revolution has already started
D/SRUPTION’s expert on AI and RPA, Andrew Burgess, explains why we all need to know more about the changes that have already affected our daily lives…
Most people still think of Artificial Intelligence as a distant threat, a baffling technology that will eventually take all of our jobs away and make all of our decisions for us. But the truth is that AI is already with us. It is used by each of us on a daily basis, usually without us realising.
In this ‘silent revolution,’ AI is already influencing our lives in ways we just don’t appreciate. Some of these have been very useful to us (satellite navigation, fraud detection, cancer detection), while others have been frivolous but benign (organising our social media feeds, Snapchat masks). There are also uses that can, in the wrong hands, actually be dangerous. By that, I mean things such as facial recognition, profiling and voice copying.
All this is before you even get to the greater fear of AI becoming cleverer than humans and eventually taking over the world, a fear that’s become the staple of movie plots such as Ex Machina or the Terminator films. I’m not even going to consider the distant threat of the Singularity here since it’s questionable whether it will ever be achieved. Instead, I’m going to focus on what is happening right now and why we need a better understanding of it if we are going to make AI a force for good in the world.
Of course, AI is inherently very complicated. It includes many different types of technologies, from image recognition and data clustering, to natural language processing and prediction. It requires very specific skills to design and build. Most data scientists have PhDs, which means that not only do they earn very good salaries, but there are very few of them around. By one estimate there are only 22,000 AI researchers with PhDs in the whole world.
So many of society’s biggest hopes and businesses’ biggest bets depend on a technology that’s currently in the overworked hands of just 0.0003% of the world’s population. This is where it starts to get worrying but I would like to propose that there is something we can do about it. Actually, I think there is something that we all have to do, and that’s for us, as daily users and consumers of AI, to at least understand what the technology is capable of. This doesn’t mean we need to understand how it works (that’s the data science-y bit), only that we need to know what it can do and, just as importantly, what it can’t do.
In my book, The Executive Guide to Artificial Intelligence, I define eight core capabilities for AI: image recognition, speech recognition, search, clustering, Natural Language Understanding (NLU), optimisation, prediction and understanding. In theory, any AI application can be associated with one or more of these capabilities.
The first four of these capabilities are all to do with capturing information – getting structured data out of unstructured, or big, data. These capture categories are the most mature AI capabilities that currently exist. There are many examples of each of these in use today: we encounter speech recognition when we call up automated response lines; we have image recognition automatically categorising our photographs; a search capability reads and categorises the emails we send; and we are categorised into like-minded groups every time we buy something from an online retailer. AI efficiently captures all this unstructured and big data that we give it and turns it into something useful (or intrusive, depending on your point of view).
The second group of capabilities, consisting of NLU, optimisation and prediction, is trying to work out – usually using that useful information that has just been captured – what is happening. These capabilities are slightly less mature but all still have applications in our daily lives. NLU turns that speech recognition data into something useful – what do all those individual words actually mean when they are put together in a sentence? The optimisation capability (which includes problem solving and planning as core elements) covers a wide range of uses, including working out what the best route is between your home and the shops. And then the prediction capability tries to work out what will happen next. For example, if we buy a book on early Japanese cinema, it will predict that we are likely to want a book on Akira Kurosawa.
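To make the prediction capability a little more concrete, here is a minimal sketch of the ‘people who bought this also bought…’ idea. Everything in it is invented for illustration – the purchase baskets, the book titles and the function name – and real recommendation systems are vastly more sophisticated, but the underlying logic is the same: count what tends to be bought together, then predict accordingly.

```python
from collections import Counter

# Hypothetical purchase histories: each set is one customer's basket.
purchases = [
    {"Early Japanese Cinema", "Akira Kurosawa"},
    {"Early Japanese Cinema", "Akira Kurosawa", "Yasujiro Ozu"},
    {"Early Japanese Cinema", "Gardening Basics"},
]

def predict_next(item):
    """Predict a likely next purchase: the most common co-purchase with `item`."""
    co_purchases = Counter(
        other
        for basket in purchases if item in basket
        for other in basket if other != item
    )
    return co_purchases.most_common(1)[0][0]

print(predict_next("Early Japanese Cinema"))  # "Akira Kurosawa"
```

The same counting idea, scaled up to millions of baskets and combined with the clustering capability, is what sits behind the recommendations we see every day.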
Once we get to understanding, it’s a different picture altogether. Understanding why something is happening really requires cognition. It requires many inputs – the ability to draw on many experiences and to conceptualise these into models that can be applied to different scenarios and uses.
While this is something that the human brain is extremely good at, current AI simply can’t do it. All of the previous examples of AI capabilities have been very specific (these are usually termed ‘narrow’ AI) but understanding requires general artificial intelligence, and outside of the human brain, this simply doesn’t exist yet. Artificial General Intelligence, as it is known, is the holy grail of AI researchers but remains an entirely theoretical concept.
You’ll have worked out by now that current uses of AI are typically implemented by stringing together several of these capabilities. Once individual capabilities are understood, they can be combined to create meaningful solutions to business problems and challenges. For example, if I ring up a bank to ask for a loan, I could end up speaking to a machine rather than a human. In this case, AI will first be turning my voice into individual words (speech recognition), working out what it is that I want (NLU), deciding whether I can get the loan (optimisation) and then asking me whether I want to know more about car insurance, because people like me tend to need loans to buy cars (clustering and prediction).
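The loan-call example above can be sketched as a pipeline of capabilities. To be clear, every function below is a hypothetical stand-in for a real model (the actual speech recognition, NLU and decision systems banks use are far more complex), but the shape of the solution – one capability feeding the next – is exactly the point being made.

```python
def transcribe(audio: bytes) -> str:
    """Speech recognition: turn the caller's audio into words (stubbed)."""
    return "i would like a loan please"

def extract_intent(text: str) -> str:
    """NLU: work out what the caller actually wants (stubbed)."""
    return "loan_request" if "loan" in text else "unknown"

def decide_loan(intent: str, credit_score: int) -> bool:
    """Optimisation: a toy approval rule standing in for a real decision model."""
    return intent == "loan_request" and credit_score >= 650

def suggest_product(intent: str) -> str:
    """Clustering + prediction: a toy cross-sell lookup."""
    cross_sell = {"loan_request": "car insurance"}
    return cross_sell.get(intent, "none")

def handle_call(audio: bytes, credit_score: int) -> dict:
    """String the capabilities together into one automated service."""
    intent = extract_intent(transcribe(audio))
    return {
        "intent": intent,
        "approved": decide_loan(intent, credit_score),
        "offer": suggest_product(intent),
    }

print(handle_call(b"<caller audio>", credit_score=700))
```

No single step here is remarkable; the value comes from the chain.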
This is a fairly involved process that draws on key AI capabilities, and one that doesn’t have to involve a human being at all. When it works, the customer gets great service since the service is available day and night, the phone is answered straight away and they get an immediate response to their query. The process is also efficient and effective for the business, as operating costs are low, staff costs eliminated and decision making consistent, while revenue is potentially increased due to the cross-selling of additional products. The combining of individual capabilities will be key to extracting the maximum value from AI.
Help or hindrance?
The AI framework therefore gives us a foundation to understand what AI can do, cutting through all the marketing hype and jargon, and letting us appreciate whether the technology is being used to help us, to influence us or to manipulate us. It also gives us the basis to appreciate and understand the different risks that AI can present.
Some of the risks associated with AI are inherent in the technology. Many readers will have heard of the ‘black box’ issue. This is where the AI decision-making process (for example, approving or rejecting a loan application) can be pretty opaque. In other words, it can be very difficult to know the reasons why the system has made a particular decision, which, if you work in a regulated industry, can be a deal-breaking constraint. While there are ways around this lack of transparency, often at the expense of decision accuracy, simply being aware of this tendency is a pretty good start.
Another inherent issue with the use of AI is around its naivety. Or, to be more precise, the issue is around our lack of appreciation of AI’s naivety. I’ve written in some detail about this in a previous D/SRUPTION article but the key point is that AI doesn’t actually understand what it is doing. If you ask an AI to recognise pictures of a dog, it doesn’t have any concept of what a dog is. It simply knows which patterns of pixels most closely match the model it has learned from numerous known photographs of dogs.
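That naivety can be illustrated with a deliberately tiny classifier. The ‘images’ below are made-up lists of four brightness values, and the classifier is a simple nearest-neighbour match rather than the deep networks used in real image recognition, but the lesson carries over: ‘dog’ is just a label attached to the numerically closest pattern, with no concept behind it.

```python
import math

# Hypothetical 4-pixel "images": brightness values between 0 and 1,
# each paired with a human-supplied label.
training = [
    ([0.9, 0.8, 0.9, 0.7], "dog"),
    ([0.1, 0.2, 0.1, 0.3], "cat"),
]

def classify(pixels):
    """Return the label of the nearest training image by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda example: dist(pixels, example[0]))[1]

print(classify([0.8, 0.9, 0.8, 0.6]))  # "dog" - but only as a pattern match
```

The classifier never knows what a dog is; it only knows which stored pattern of numbers is closest.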
Then there are risks that come about in the way that AI is developed and used. The most common of these is where there is inherent bias in the data that has been used to train an AI. If the training data is biased, the decisions the AI makes will also be biased. Training a facial recognition system mainly on white, male faces will mean, for example, that women of colour will be excluded from benefitting from the system. This is a real example, by the way: the data set of 13,000 faces that many big tech companies originally used to train their systems on consisted of 83% white people and nearly 78% men.
AI also raises an existential risk, both for businesses and the wider world. This is where we put too much faith in the ability of AI without understanding its workings well enough. When we depend too heavily on AI to solve our problems, or to run our businesses, we don’t pay enough attention to how its decisions are being made.
This brings us back to the ‘black box’ issue, which can be mitigated to a certain extent by choosing algorithms and development approaches carefully, and by making sure that the outcomes are being measured appropriately. It is not enough to simply measure the success of an AI-enabled business by the profit generated, for example, since the system may be making decisions that are disadvantaging many people or putting profit above normal business ethics. With AI at the core of a business, it’s much more difficult to spot these things happening, especially if it is through many small nudges rather than major strategy decisions.
Most of us will never be able to fully understand the technology of AI or develop an AI model, but I would suggest that we all have a duty to better understand what it can and cannot do, and what its associated risks might be.
As AI’s revolution silently embraces us and increasingly impacts our working and personal lives, it’s no longer good enough to hide behind the excuse that AI is just too complicated to understand. We have to think about AI in terms of its capabilities as well as its limitations, and be able to challenge firms and organisations when we see the technology being abused. We must also think about how AI can realise its enormous potential in a way that is safe and valuable for the good of us all. That’s the sort of revolution I’d like to see.