
Dec 14, 2016

Narrow vs General AI - Is Moravec's Paradox still relevant?

Written By:

Sally Doherty

How close are we to general artificial intelligence - machines with human levels of cognition? While machine learning has brought us machines that can beat people at Go and sift through vast amounts of data much faster than an individual or team, we still don’t have a robot that can unpack the dishwasher, change the beds or put shopping away.

“Today’s AI is brilliant at very narrow competencies, whereas humans are good at pretty much everything”, explains Dr Sean Holden of Cambridge University’s Computer Laboratory, in the latest edition of CAM Magazine. “Most AI researchers don’t try to solve the whole problem because it’s too hard. They take some specific problem and do it better.”

Tractica comes to the same conclusion, forecasting that narrow AI techniques, used to solve specific problems, will dominate AI applications over the next 10 years, accounting for 99.5% of AI revenue between 2016 and 2025.

Both views reflect Moravec’s Paradox, the discovery by artificial intelligence and robotics researchers Hans Moravec, Rodney Brooks and Marvin Minsky in the 1980s that, contrary to traditional assumptions, high-level reasoning requires very little computation, whereas low-level sensorimotor skills require enormous computational resources.

Moravec wrote, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

As a society, we’ve always associated intelligence with things that educated humans find difficult, like playing chess, doing maths or critical analysis of Shakespeare. These are obvious applications of conscious reasoning, but thought takes many subtler forms, such as interpreting sensory input, guiding physical actions, and empathizing with others.

It’s these categories of intelligence that will take much longer to implement in AI. While research into multi-tasking machines and AI with transferable skills is heating up, the jury is still out on whether true human-level cognition is possible (or desirable) in machines. In the meantime, advances in narrow AI are accelerating.