Both Elon Musk and Stephen Hawking had a go, but the techy press reacted with fury. If you have ever had the pleasure of dealing with a slow computer, you will understand the cynicism about the idea of artificial intelligence (AI) ever posing a threat to humankind. Yet Messrs Hawking and Musk have a point all the same. AI is developing fast. It may change what it is to be human. It may even disrupt humanity itself.
Elon Musk, demigod of Silicon Valley, has said that AI poses a bigger threat to humanity than nuclear weapons. Professor Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race.”
Critics of Musk and Hawking are many, and they come from the higher echelons of the scientific community. However, they are too complacent, and we need to face up to the threat. That does not mean trying to stop the advance of AI. We can no more do that than King Canute could stop the tide. However, we do at least need to start planning how to ensure the effects of AI are benign rather than cruel.
Besides, even if humanity can survive the rise of the machine, it seems doubtful that certain professions will. Accountants, lawyers and doctors will be among those facing enormous gales of creative disruption thanks to AI.
The first thing to bear in mind is that there is much that we don’t know. What is it that makes us able to think, to be self-aware, to be ourselves? Will computers ever be able to replace us? Who knows? The problem is that by the time we do know, it may be too late to do anything about it.
If you sign up to the school of thought that we are the products of evolution, you will probably agree that evolution is the most innovative force ever known. There is one snag: it is very, very slow. Actually, it is slower than that.
But that is because evolution is the product of chance: random copying errors hardwired into DNA. What we learn in our lifetime has no bearing on it. If the duplication process that gives us our DNA makes an error, and that error makes us more likely to survive and have children who also survive, then the error will persist, reproduce and may eventually give rise to a new species. Over three or four billion years, evolution gave us the rich tapestry of modern life. Incidentally, evolution often works in fits and starts. For millions of years it may create little change, settling into a kind of dead end or equilibrium, only for an external event, such as a meteorite wiping out the dinosaurs or the formation of the Rift Valley, to disrupt it and trigger rapid change.
Ever since a two-legged ape learned how to talk, we have seen a new form of evolution; one that builds upon what happens during our lifetime. We call this cultural evolution. This works an order of magnitude faster than Darwinian evolution.
There is another type too: technological evolution. This one is accelerating. It accelerated with the emergence of writing, then again with the printing press, and again with the internet. What makes it so formidable is that it is not the product of chance and errors; it is a deliberate process.
However, there is yet another version of evolution, which can create in seconds what natural evolution would have taken millions of years to produce. This is evolution in a digital environment. In this setting, evolution can be semi-planned, artificial parameters can drive change, and it can be programmed so that there are no dead ends and no periods of equilibrium. Digital evolution can change the world, virtually speaking.
Imagine the potential when algorithms evolve. If natural evolution created us, what can digital evolution do? One thing is certain: it will create in years, months or even weeks what nature takes an age to create.
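The idea of digital evolution can be made concrete with a toy genetic algorithm, the standard way programmers mimic mutation and selection. The sketch below is purely illustrative: the target, population size and mutation rate are invented for the example, not drawn from any real AI system.

```python
import random

TARGET = [1] * 20          # the "fittest" genome for this toy problem: all ones
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    """Count matching bits: natural selection's scorecard."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Copying errors: each bit may flip, just as DNA duplication errs."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200):
    """Run selection and mutation; return the generation at which the target emerged."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen  # a "species" matching the target has emerged
        # The fittest half survives and reproduces, with copying errors
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations

print(evolve())
```

Where nature waits for lucky errors over millennia, this loop manufactures, scores and selects thousands of them per second, which is the whole point of the speed comparison above.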
Of course, the power of computers limits AI. Moore’s Law is ending; silicon-based machines just cannot carry on getting faster for much longer.
Moore’s Law is dead; long live Moore’s Law.
Computers will carry on getting faster, but they won’t be made from silicon chips. Instead, we will see machines built from graphene chips, molecular electronics, spintronics and quantum computing.
Imagine the potential as computers a thousand times faster than today’s state-of-the-art models combine with the evolution of algorithms.
Even that is just part of the story.
The human brain, indeed the rat brain or cow brain for that matter, works differently from a computer. Computers work via logic and millions of switches that are either on or off. The brain comprises neurons, which form synapses with other neurons. Imagine a computer that has electronic neurons!
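The contrast can be sketched in a few lines. Below is a hypothetical artificial neuron of the simplest textbook kind: it sums weighted inputs arriving over its "synapses" and fires only if the total crosses a threshold. The weights and threshold are made up for illustration and bear no relation to any real chip.

```python
def neuron(inputs, weights, threshold=1.0):
    """A toy artificial neuron: fire (1) if the weighted input sum crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two synapses excite the neuron, one inhibits it
print(neuron([1, 1, 1], [0.7, 0.6, -0.2]))  # 0.7 + 0.6 - 0.2 = 1.1 >= 1.0, so it fires: 1
print(neuron([1, 0, 1], [0.7, 0.6, -0.2]))  # 0.7 - 0.2 = 0.5 < 1.0, so it stays silent: 0
```

Unlike a logic gate, whose output follows mechanically from its inputs, a neuron's behaviour depends on adjustable synaptic weights, and learning amounts to tuning them.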
IBM has done more than imagine; it has created a new kind of chip, which it calls the SyNapse chip. It has one million electronic neurons that form links – let’s call them synapses – with each other. What’s remarkable about this chip is that it requires orders of magnitude less power than a conventional chip with similar processing ability. To borrow a paragraph from the MIT Technology Review: “Although the new SyNapse chip has more transistors than most desktop processors, or any chip IBM has ever made, with over five billion, it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts of power. Server chips with similar numbers of transistors consume tens of watts of power—around 10,000 times more.” See: IBM Chip Processes Data Similar to the Way Your Brain Does
The thing about computers is that they find difficult the things we find easy, while they do with ease the things we find hard. Take sport, for example. You know that if you kick a football, or hit a squash ball, in a certain way, you will get a certain reaction. The processing power required to bend a ball like Beckham is enormous.
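To see why, consider what a computer has to do explicitly where our intuition works for free: integrate the ball's flight step by step. The sketch below is a deliberately crude two-dimensional model with invented drag and spin constants, not a faithful physics of a real free kick.

```python
import math

def ball_flight(vx, vy, drag=0.05, spin=0.0, dt=0.01, g=9.81):
    """Euler-integrate a toy 2-D ball flight with drag and a spin-induced
    sideways force; return the horizontal distance when it lands."""
    x = y = 0.0
    while True:
        speed = math.hypot(vx, vy)
        # Drag opposes motion; the spin term couples the velocity components,
        # crudely mimicking how spin "bends" a ball's path
        vx += (-drag * speed * vx + spin * vy) * dt
        vy += (-drag * speed * vy - spin * vx - g) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0:
            return x

print(round(ball_flight(20.0, 10.0), 1))           # distance without spin
print(round(ball_flight(20.0, 10.0, spin=0.3), 1)) # spin alters the flight
```

A brain performs the equivalent estimate unconsciously, in real time, while also running the rest of the body, which is precisely the kind of task conventional chips handle badly and neuromorphic designs target.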
However, a chip that uses neurons and synapses is a different matter.
Qualcomm has developed what it calls Neuromorphic chips, which it has modelled on biological brains. “They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways,” says the MIT Technology Review. Matthew Grob, Qualcomm’s chief technology officer, said: “We’re blurring the boundary between silicon and biological systems.”
IBM’s SyNapse chip has one million neurons. The human brain has around 86 billion neurons, so the chip has a way to go yet. Returning to the MIT Technology Review, it stated: “Even if neuromorphic chips are nowhere near as capable as the brain, they should be much faster than current computers at processing sensory data and learning from it.”
And the technology is evolving, as are algorithms. Combine the ideas behind neuromorphic or SyNapse chips with the greater processing power promised by graphene or quantum computers, and with evolving algorithms, and you can see that Musk and Hawking have a point.
At the very least, jobs that we have assumed machines will never be able to do may go the way of the dinosaurs.