Disrupted Humanity – how cognitive computing works

Cognitive computing for the uninitiated 

By Dan Mortimer, CEO at Red Ant (sponsored post)

A couple of weeks ago, Channel 4 ran an ad which caused quite a stir. In a perfect combination of utility and personability, it shows how ‘the Sally’, a robotic home help which is ‘closer to humans than ever before’, can carry out all those dull household tasks while the family has fun, then simply be switched off and neatly put away with the rest of the cleaning equipment when her work is done. It is testament to the producers, directors and performers that a number of viewers were taken in – and alarmed – by the prospect of having synthetic people living and working with us in our homes. Of course, it was all just an elaborate way to promote a new TV programme – but for a while, a section of the population truly believed that Blade Runner-esque cyborgs were a reality.

Their willingness to believe in the existence of robots who can imitate and potentially dominate us is an excellent illustration of the current lack of understanding about where we are with artificial intelligence and cognitive (‘thinking’) computers.

Those who have had a glimpse of this new technology sense that it is exciting, important and just might have the power to change the way we look at computers in the future – but there is a degree of confusion over what it is, how it works and why it’s different. That’s entirely understandable – it’s truly cutting edge and requires a significant mind-shift to comprehend what it actually means in practical terms, without all the hyperbole surrounding it and the future of mankind.

Turing – setting the scene

Computing pioneer Alan Turing neatly summed up the theory of cognitive computing in his 1950 paper ‘Computing Machinery and Intelligence’:

‘Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.’

From a practical perspective, this means that cognitive computers get better the more we use them, ‘learning’ from our responses to each interaction and putting the information they have into a human context. Instead of being presented with a finished programme crafted to the last detail by developers, users take some of the responsibility themselves for teaching and refining it to meet their needs.
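
As a rough illustration of that idea – not any particular product’s implementation – here is a minimal Python sketch of a ‘child machine’ whose answers improve as users teach and correct it. The class name, the scoring scheme and the example question are all invented for the purposes of illustration.

```python
from collections import defaultdict

class LearningResponder:
    """Toy 'child machine': starts with no knowledge and is shaped by user feedback."""

    def __init__(self):
        # For each question, keep a running score per candidate answer.
        self.scores = defaultdict(lambda: defaultdict(float))

    def teach(self, question, answer, reward):
        """Reinforce (positive reward) or discourage (negative reward) an answer."""
        self.scores[question][answer] += reward

    def answer(self, question):
        candidates = self.scores[question]
        if not candidates:
            return "I don't know yet - please teach me."
        # Prefer the answer that users have rated most highly so far.
        return max(candidates, key=candidates.get)

bot = LearningResponder()
bot.teach("capital of France?", "Lyon", reward=-1.0)
bot.teach("capital of France?", "Paris", reward=2.0)
print(bot.answer("capital of France?"))  # -> Paris
```

The point is simply that the behaviour is shaped by the interactions rather than fixed in advance by the developer.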

Learning from the past

Historical cognitive projects, including Carnegie Mellon’s Speech Understanding Research programme, the LISP machines developed at MIT and IBM’s Deep Blue (which famously beat Garry Kasparov at chess), have paved the way for today’s ‘thinking’ computers.

More recent developments which have successfully built on these foundations include:

  • Wolfram Alpha, a ‘computational knowledge engine’ which takes externally-sourced ‘curated’ data to provide answers to users’ questions rather than lists of documents or web pages where the answer might be found
  • The Numenta Platform for Intelligent Computing (NuPIC) open source project, which encourages developers to use learning algorithms which replicate the way in which the brain’s neurons learn
  • IBM Watson, which represents a new era of computing with a billion-dollar investment from IBM. Watson processes information more like a human than a computer—by understanding natural language, generating hypotheses based on evidence and learning as it goes.

Time flies like an arrow; fruit flies like a banana – conversations with a computer

Non-cognitive computers do not have the capacity to understand natural human language with its huge variety of meanings derived from context. They can’t distinguish between a ‘bank’ which is a financial institution and one which is the edge of a river, for example. Nor can they extract meaning from ambiguous sentences such as ‘police help dog bite victim’. And there’s no way of telling them they’re wrong – you’ll get the same type of answer every time.

A good way to illustrate this is with one of the earliest attempts to simulate human conversation: a chatbot named Eliza. Created in 1966, Eliza was intended to provide a relatively realistic interface between human and machine, but the ability to learn anything meaningful was absent.

In any exchange it quickly becomes clear that Eliza is delivering stock responses and answers based on repetition of the phrases the human types in. There is no interpretation or ‘learning’ – the chatbot is simply following a program from which it cannot deviate. And, though developers still spend a considerable amount of time and effort building chatbots which might pass the Turing Test and convince us that they’re human, this is how most computers work – strictly within the parameters of the programs they have been built to execute. Though a little dated now, the term GIGO (garbage in, garbage out) still applies – non-cognitive computers will unquestioningly process any data they are given, however unintended or nonsensical, and produce equally unwanted or nonsensical output.
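
For a flavour of how rigid that behaviour is, here is a heavily simplified Eliza-style sketch in Python – a few hard-coded patterns and canned replies, nothing more. It illustrates the general technique, not Weizenbaum’s original script, and the rules are invented for the example.

```python
import re

# A tiny Eliza-style rule set: regex pattern -> canned response template.
# The {0} slot just echoes part of the user's own words back at them.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def eliza_reply(text: str) -> str:
    """Return a stock response by pattern-matching the input; no learning involved."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(eliza_reply("I am worried about my job."))
# -> "How long have you been worried about my job?"
# Note that the pronoun isn't even reflected ('my' -> 'your') - the program only repeats.
```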

The conversation with cognitive computers is rather different. Armed with a corpus of content from almost any kind of data – training manuals, documents, social content and so on – they can be trained to apply context to their responses because they:

Ask

  • Interrogate the user to get insight
  • Use natural language for their dialogue with users

Discover

  • Work out the reasons behind responses
  • Try to get better answers by prompting for more input

Decide

  • Analyse sources and information models
  • Choose answers based on evidence

Which means that, in relatively short order, they would be able to find out exactly what kind of ‘bank’ the user is talking about and adjust their responses to match.
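
A back-of-the-envelope sketch of that ask/discover/decide loop might look like the Python fragment below. The keyword lists and the clarifying question are invented for illustration; a real system would weigh evidence statistically across a large corpus rather than counting keywords.

```python
# Toy ask/discover/decide loop for disambiguating the word "bank".
SENSES = {
    "financial institution": {"money", "account", "loan", "deposit", "interest"},
    "river bank": {"river", "water", "fishing", "erosion", "muddy"},
}

def decide(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Decide: score each sense by how much contextual evidence supports it.
    scores = {sense: len(words & clues) for sense, clues in SENSES.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        # Ask: no evidence either way, so prompt the user for more input.
        return "Do you mean a bank you keep money in, or the bank of a river?"
    return f"Interpreting 'bank' as: {best}"

print(decide("I slipped down the muddy bank by the river"))  # -> river bank
print(decide("my bank charged interest on the loan"))        # -> financial institution
print(decide("meet me at the bank"))                         # -> asks a clarifying question
```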

What cognitive computing means for the workforce

Until recently, the major useful functions of computers – GUIs, spreadsheets, search engines and so on – have all required us to learn their highly specialised language; something that needs dedication and at least a passing interest in the technology behind it.

Computers that work with us, our documents and the context in which we place them open up the field for more widespread use outside the IT department. Making interaction and systems development more natural and intuitive means more people can use them, more often. And the more people involved in ‘educating’ cognitive systems, the faster they will evolve and the more accurate and useful they will become.

This will have tremendous advantages in the workplace – cognitive’s ability to access and use a multitude of data points is ideal for scaling knowledge and enhancing learning and development across all business areas – training, HR, operations, sales – anywhere there is a need for the intelligent application of information.

It can also play a vital part in preserving intellectual property. The insight, understanding and wisdom gained over years of employment by staff with perhaps 20 or 30 years’ service are too valuable to the business to be lost when they leave. Cognitive systems can store this irreplaceable information for the next generation of employees and keep all essential knowledge within the company.

Natural language processing – the future of cognitive

While the ability to analyse documents and other content to provide more ‘human’ responses to written questions is undoubtedly a terrific development, there’s a reason why every computer appearing in sci-fi from Star Trek to 2001: A Space Odyssey to Her uses speech as its primary method of communication. Natural language processing via the spoken word is the next logical step towards parity of intelligence – or at least a plausible imitation.

Intelligent personal assistants Siri (iOS) and Cortana (Windows) have capitalised on this idea, using voice recognition software to retrieve information and personal details when requested by the user. Though not strictly ‘cognitive’ (they have difficulty recognising/learning some regional accents, for example), they present a good-enough facsimile which is sufficiently futuristic to satisfy the market.
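
To give a sense of the speech-to-text half of that picture, here is a minimal sketch using the open-source SpeechRecognition package and Google’s free web recogniser. This is an assumption about tooling a hobbyist might reach for, not how Siri or Cortana are built, and the ‘cognitive’ interpretation of the transcribed text is a separate problem entirely.

```python
# Minimal spoken-word input using the third-party SpeechRecognition package
# (pip install SpeechRecognition pyaudio). This only transcribes speech to text.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # send audio to Google's free web API
    print("You said:", text)
except sr.UnknownValueError:
    print("Sorry, I couldn't understand that.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```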

The rise of the machines?

At its core, cognitive computing is the democratisation of development – ‘supercomputers’ are no longer the sole preserve of academia or computer scientists. Watson, for example, is available through the cloud and hundreds of partners are building apps with it – cognitive is out there and available to anyone who wants to put the next important phase of computing to the test.
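
In practice, ‘available through the cloud’ usually means an application simply posts text to a hosted service over HTTPS and gets an answer back. The sketch below shows the shape of such a call; the URL, credentials and JSON field names are entirely hypothetical placeholders, not Watson’s actual API.

```python
# Hypothetical example of calling a cloud-hosted cognitive service over HTTPS.
# The endpoint, credential and JSON fields below are placeholders, not a real API.
import requests

API_URL = "https://example-cognitive-service.com/v1/answer"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                     # placeholder credential

def ask(question: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"question": question, "corpus": "hr-policies"},  # invented field names
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("answer", "No answer returned")

if __name__ == "__main__":
    print(ask("How many days of annual leave do new starters get?"))
```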

Of course, as Channel 4 found out, there are some fears among the general population that this is the thin end of the wedge. But we can rest easy knowing that cognitive computing is set to make our lives easier – nothing more sinister than that.
