Wikipedia: “Moravec’s paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor skills require enormous computational resources.”
Another source: “Moravec’s Paradox states that it is easy to train computers to do things that humans find hard, like mathematics and logic, but it is hard to train them to do things humans find easy, like walking and image recognition.”
Moravec articulated this “paradox” in the eighties, a time when computers were wholly incapable of reasoning at even the level of the youngest children. His “paradox” therefore offers no evidence that reasoning requires little computation, since in Moravec’s time no computer capable of complex reasoning existed. Computers are designed to be very fast at Boolean algebra, simple “logic”, and computation, but complex reasoning is a different matter altogether.
“AI reasoning” in the eighties was not much better than: Q: “Dogs have legs. Fido is a dog. Does Fido have legs?” A: “Yes, because Fido is a dog and dogs have legs.” Perhaps that is why “logic” was considered easy for computers. But even in the eighties that was an invalid argument, since reasoning of a halfway sophisticated level, such as what happens in a chess program when it finds the best move, easily required, and still requires, a trillion CPU cycles.
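To put rough numbers on that, here is a back-of-the-envelope sketch; the branching factor of ~35 is a standard textbook estimate for chess, not a figure from Moravec:

```python
# Back-of-the-envelope: the cost of "simple logic" at a useful depth.
# Chess has a branching factor of roughly 35 legal moves per position
# (a standard textbook estimate). Naive minimax to depth d visits on
# the order of 35**d positions.

BRANCHING_FACTOR = 35

for depth in (2, 4, 6, 8):
    positions = BRANCHING_FACTOR ** depth
    print(f"depth {depth:>2}: ~{positions:.2e} positions to examine")

# depth  8: ~2.25e+12 positions -- over two trillion, so even at one
# CPU cycle per position (absurdly optimistic) a full-width depth-8
# search costs trillions of cycles. Alpha-beta pruning lowers the
# effective branching factor, but the growth stays exponential.
```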
And then there is the misunderstanding that (relatively simple) Artificial Neural Networks do “computation”. They do no such thing. Yes, the SIMULATION of a large NN in software on a von Neumann machine requires massive computation by the computer, due to the need for matrix multiplication. But after a feed-forward ANN has been trained, its weights can be represented by resistors. If the ANN were “cast” in resistors, the feed-forward network would do no computation at all: mere resistive attenuation. When Moravec voiced his paradox, multi-layer perceptrons were in vogue for AI, and those can be realized without any computation by a resistor network. So if he was referring to ANNs running on computers, his “paradox” seems rather trite.
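A minimal sketch of that point, with made-up weights (the numbers and the single-layer shape are purely illustrative): digitally, the trained layer is a matrix multiplication; physically, the very same weighted sums fall out of Ohm’s law and Kirchhoff’s current law in a resistor crossbar, with each weight realized as a conductance.

```python
import numpy as np

# Hypothetical trained weights for one feed-forward layer (3 inputs -> 2 outputs).
W = np.array([[0.5, -0.2,  0.8],
              [0.1,  0.9, -0.4]])
x = np.array([1.0, 0.5, -1.0])    # input activations

# Digital simulation on a von Neumann machine: an explicit matrix multiply,
# costing one multiply-accumulate instruction per weight.
digital = W @ x

# Analog realization: cast each weight as a conductance G = 1/R and apply
# the inputs as voltages. Each output node's current is then
#     I_j = sum_i G[j,i] * V[i]   (Ohm's law + Kirchhoff's current law)
# The crossbar yields the same weighted sums with no instructions executed;
# the "@" below only models what the physics does for free. (In real
# hardware, signed weights need paired resistor branches.)
G, V = W, x
analog = G @ V

assert np.allclose(digital, analog)
print(digital)   # [-0.4   0.95]
```

A nonlinear activation would need an analog element of its own (something diode-like), but the heavy lifting in a multi-layer perceptron is exactly these weighted sums, and those are pure resistive physics.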
Then there is the fact that neural networks are innately bad at reasoning – so far, no network that reasons well has been conceived. Human brains are also very bad at reasoning, as we can observe daily.
The reason human brains are good at things like facial recognition and running up stairs, and at other things that computers were (until recently) bad at, is that human brains use utterly sophisticated configurations of billions of innate neuronal structures to achieve such feats.
But performing tasks such as walking and image recognition does not require intelligence, and neither does mathematics. “Skills” do not require intelligence either. Skills are applied knowledge, and are only sometimes acquired via applied intelligence.
I have a skill: perfect cappuccino foam without needing to look, count the seconds, or monitor the beaker temperature by hand: I close the valve two seconds after the bubbles start to sound more muted.
When I think of “reasoning”, I think of figuring out what is wrong with the contemporary theory of Quantum Mechanics and finding the correct Theory Of Everything via brute force. Or of inventing a totally novel linear actuator with unique, very useful properties (doing that made me a multi-millionaire and enabled me to buy a house in a Norwegian forest and work on AGI all day).
Such reasoning may, in the former example, require enormous computational resources, while in the latter it may be a “Eureka” moment. I will not disclose my AGI algorithms here, or even my definition of Intelligence, but I claim that Intelligence does something that always involves computation, and that the more useful its reasoning, the more computation is required, exponentially so.