Learning does not require Intelligence: how an AGI should learn

Have you ever wondered why there are so many unintelligent PhDs?

Perhaps the greatest impediment to anyone proposing a concrete algorithm for implementing AGI is the widespread confusion about what Intelligence is. Learning has nothing whatsoever to do with Intelligence (NLU does, but unambiguously worded text does not require NLU and can therefore teach an AGI without any use of Intelligence), so people with several advanced degrees and published work in STEM fields can be very unintelligent, and often are.

The fact that those PhDs consistently test high on IQ tests exacerbates the misconception that learned folks in highly complex fields must be intelligent. IQ tests do not test Intelligence at all. If they did, 99.9% of humanity would test as having "extremely low Intelligence". Instead, psychologists opted to devise a test of simple pattern recognition on which the median person scores smack in the middle (100). That avoided many a sour face! Because there is a loose correlation between dimwitted folks scoring low on IQ tests and PhDs scoring high, and because PhDs are considered intelligent, hey presto: the "Intelligence Test" was born and accepted as valid.

Learning is unrelated to Intelligence. It is neither a function of Intelligence, nor does it increase Intelligence. This means that many people with very low Intelligence are capable of becoming quantum physicists (as long as they have no learning disabilities: people with learning disabilities are almost never intelligent, while people without them are merely usually not intelligent), and that learning every non-fiction book by heart will not make anyone one iota cleverer.

The correlation between learning ability and Intelligence still has many exceptions, which explains why most highly educated people have high IQs yet seem unable to produce a patentable invention or argue a philosophical problem without committing a plethora of logical fallacies.

What is Learning? What does Learning do? Why does it not need Intelligence? Why can it not increase Intelligence? Learning cannot improve the Intelligence algorithm. It can, however, greatly increase the quality of that algorithm's output.
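To make that separation concrete, here is a minimal sketch, assuming a toy design in which the learning module only appends records and the reasoning code stays fixed. Every name in it (KnowledgeBase, learn, reason) is my own illustration, not an existing system:

```python
class KnowledgeBase:
    """Toy illustration: learning grows the data, never the algorithm."""

    def __init__(self):
        self.facts = []                 # filled by learning

    def learn(self, fact: str) -> None:
        # Recording only: no interpretation, no self-modification.
        self.facts.append(fact)

    def reason(self, query: str) -> list[str]:
        # A fixed algorithm: its code never changes, but its output
        # improves as the recorded facts accumulate.
        return [f for f in self.facts if query in f]

kb = KnowledgeBase()
kb.learn("ball hit racket -> ball changed direction")
kb.learn("glass hit floor -> glass shattered")
print(kb.reason("ball"))    # better output from more data, same algorithm
```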

Learning itself, on the other hand, does not require any Intelligence at all. Which is great, because that is one less headache for AGI programmers. One caveat: detecting internal inconsistencies in the learned data may require Intelligence. The great majority of human learners unquestioningly accept contradictory data, which is an indication that they are not intelligent.

I define Learning as "observing World-Objects interact and recording the consequences, as well as the apparent reasons for their actions". This activity is mere recording; Intelligence interprets those recordings. When I see someone hit a ball with a tennis racket and the ball changes direction at the moment of impact, I blindly make a mental note of that fact. I do not need to deduce why it happened. Perhaps I do deduce it, but that is done not by my learning module but by my Intelligence. I do not need to understand kinetic energy, elasticity, or anything of the sort. "Gruk sees ball, ball hits racket, ball goes other way. Gruk not know how. Gruk only know: ball hit racket, ball go different direction."
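A hedged sketch of what such a record could look like, in Python; the field names are my own illustrative choices, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Observation:
    cause: str                    # what was seen: "racket hits ball"
    effect: str                   # what followed: "ball changes direction"
    reason: Optional[str] = None  # apparent reason, only if one was observed

log: list[Observation] = []

# Gruk's tennis observation, recorded blindly: no kinetic energy,
# no elasticity, just the raw cause and effect.
log.append(Observation(cause="racket hits ball",
                       effect="ball changes direction"))
```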

Any other learning is not learning at all but something euphemistically called rote learning. Rote learning forgoes actual learning as defined above: instead of observing cause and effect and deriving rules from it, someone tells you to simply believe him, and you copy his learning. His learning could be faulty or incomplete; he could be lying to you to push an agenda; there could be an error in the course book; and last but not least, he could have faithfully reproduced what he learnt from someone who did the same, all the way back to a scientist who made a mistake in his experiment, or in its interpretation, and somehow got it past peer review. Rote learning is a "hack". It is like earning money by working versus earning money by stealing: the latter cannot be called earning any more, and it comes with so many potential problems that it is a terrible idea. The problem of incorrect axioms in contemporary Science is all-pervasive. The entirety of Quantum Mechanics, for example, is, for lack of a better word, nonsense. The same goes for a lot of accepted History.

So the only way an AGI should learn is to derive its own World-model from observed reality. Since that is impractical, the closest alternative is to examine original sources of knowledge: detailed descriptions of physics experiments written by their authors, unadulterated downloads from CERN or NASA, original copies of witness statements made to the authorities, or history books whose accounts the AGI compares across multiple sources and verifies for consistency, both in reported fact and against facts it has previously internalized.
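As a sketch of that verification step, assume (purely for illustration) that learned accounts can be reduced to (subject, predicate, value) claims; real sources are far messier, but the shape of the check looks like this:

```python
from collections import defaultdict

def find_conflicts(sources: dict[str, set[tuple[str, str, str]]]):
    """Return the (subject, predicate) pairs on which sources disagree.

    `sources` maps a source name to its set of (subject, predicate, value)
    claims. Facts the AGI has previously internalized can be passed in as
    just another source and checked the same way.
    """
    by_key = defaultdict(set)
    for name, claims in sources.items():
        for subject, predicate, value in claims:
            by_key[(subject, predicate)].add((value, name))
    # A conflict is any (subject, predicate) with more than one value.
    return {key: votes for key, votes in by_key.items()
            if len({value for value, _ in votes}) > 1}

sources = {
    "chronicle_A":  {("battle_X", "year", "1187")},
    "chronicle_B":  {("battle_X", "year", "1189")},   # disagrees
    "internalized": {("battle_X", "place", "hill")},  # prior knowledge
}
for (subject, predicate), votes in find_conflicts(sources).items():
    print(f"conflict on {subject}.{predicate}: {sorted(votes)}")
```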

After the learning is done, Intelligence can derive conclusions.

Fully understanding natural language is, contrary to popular belief, not AGI-complete either. Context-based understanding can be fully solved using statistical methods and correlation, even in highly artificial edge cases.
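A toy illustration of what such correlation-based disambiguation could look like; the words, senses, and counts below are invented for the example and stand in for real corpus statistics:

```python
# Co-occurrence counts between each sense of an ambiguous word and
# typical context words (invented numbers for illustration).
cooccurrence = {
    ("bank", "finance"): {"money": 9, "loan": 7, "account": 8},
    ("bank", "river"):   {"water": 9, "fishing": 6, "shore": 7},
}

def disambiguate(word: str, context: list[str]) -> str:
    """Pick the sense whose co-occurrence counts best match the context."""
    scores = {
        sense: sum(counts.get(w, 0) for w in context)
        for (headword, sense), counts in cooccurrence.items()
        if headword == word
    }
    return max(scores, key=scores.get)

print(disambiguate("bank", ["fishing", "water"]))  # -> "river"
print(disambiguate("bank", ["money", "loan"]))     # -> "finance"
```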

I publicly disclose only those AGI ideas that should be obvious to most serious AGI researchers. I have also developed trade secrets: major breakthroughs toward realizing AGI.
