Neuromorphic chips will help AI to think like the human brain
Wikipedia says: “Neuromorphic engineering is a new interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.”
Qualcomm is building smart chips that could be integrated into future computers. It’s all about pattern recognition, a task the human brain is particularly strong at and computers have traditionally been weak at. That strength is the key to the scalability of artificial intelligence.
See below for an example reported by MIT Technology Review.
A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer makes a beeline for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.
This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.
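Sorting unfamiliar objects “by their similarity to related objects” after being shown an example only once is, at its core, nearest-neighbor matching on feature vectors. The sketch below is purely illustrative and is not Qualcomm’s actual software: the object names and feature values are made up, and the features stand in for whatever a real vision pipeline would extract from camera frames.

```python
import math

# Hypothetical stored examples: one feature vector per object the robot
# has already been shown. The numbers are invented for illustration.
known_objects = {
    "action_figure": [0.9, 0.1, 0.3],
    "chessboard":    [0.1, 0.8, 0.2],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(features):
    # Pick whichever stored example the new observation most resembles.
    return max(known_objects,
               key=lambda name: cosine_similarity(features, known_objects[name]))

# A new toy whose (made-up) features resemble the stored action figure.
new_toy = [0.85, 0.15, 0.35]
print(classify(new_toy))  # → action_figure
```

With this scheme, “being shown once where an object should go” amounts to storing a single labeled feature vector; no retraining or laborious programming is needed, which is the point the demonstration makes.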
IBM is also experimenting with chips built around exactly this architecture, having reached the point of a one-million-neuron brain-inspired processor. See below for an infographic that explains it perfectly.
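The basic unit such brain-inspired processors emulate is the spiking neuron. The snippet below is a minimal sketch of the textbook leaky integrate-and-fire model, not the behavior of any real chip; the threshold and leak parameters are illustrative assumptions.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: each time step, the membrane
    potential leaks toward zero, integrates the incoming current, and
    emits a spike (then resets) when it crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak input must accumulate over several steps before the neuron fires.
print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # → [0, 0, 1, 0, 0, 1]
```

The appeal for hardware is that neurons like this only communicate via sparse spike events rather than continuous values, which is part of why such chips can use far less power than conventional processors for recognition tasks.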