It is a popular idea that a sufficiently advanced Artificial General Intelligence would be able to improve upon its own code to make itself more intelligent, or even create an entirely improved AGI.
This idea appears prima facie logical and reasonable, but I argue that it is a complete impossibility. Not that this matters for our ability to create artificial Superintelligence via the AGI route, however; it will just happen in a much more reliable and straightforward manner.
Intelligence is an algorithm. Therefore, any Turing-complete machine is capable of implementing it. Consider the algorithm for Bubblesort. Once it has been implemented perfectly, it can only be improved upon by switching to a more efficient sorting method. Highly optimized sorting methods exist, and a sorting method can be tuned for a specific data type or initial sort order, but it is agreed that for most practical purposes Quicksort is much faster than Bubblesort.
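The contrast drawn here can be made concrete. Below is a minimal sketch (my own illustration, not the author's code) of the two algorithms mentioned: Bubblesort, which makes O(n²) comparisons, and a simple Quicksort, which averages O(n log n). Both produce identical output; only their efficiency differs, which is the point being argued.

```python
import random

def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs: O(n^2) comparisons."""
    a = list(items)
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

def quicksort(items):
    """Partition around a pivot and recurse: O(n log n) on average."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

# Both implementations agree with each other and with Python's built-in sort.
data = [random.randint(0, 1000) for _ in range(500)]
assert bubble_sort(data) == quicksort(data) == sorted(data)
```

Either function is a correct implementation of "sorting"; Quicksort is simply a better algorithm for the same task, which is the kind of improvement the essay says is no longer available once the best algorithm is in hand.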
I claim that a perfectly implemented algorithm for General Intelligence, because it is general, is by definition also the best possible algorithm for General Intelligence.
Just as I claim that the best known and achievable general-purpose sorting algorithm is the best possible algorithm for General Sorting: the performance of the best known sorting algorithms is not far removed from the mathematically proven theoretical optimum.
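The "theoretical optimum" presumably referred to here is the standard information-theoretic lower bound for comparison-based sorting, which can be sketched as follows. Any comparison sort must distinguish all \(n!\) possible input orderings, so its decision tree needs at least \(n!\) leaves, and a binary tree of height \(h\) has at most \(2^h\) leaves:

```latex
2^{h} \ge n! \quad\Longrightarrow\quad h \ge \log_{2}(n!) = \Theta(n \log n)
```

by Stirling's approximation. Hence no comparison sort can beat \(\Omega(n \log n)\) comparisons in the worst case, and algorithms such as Mergesort and Heapsort already achieve that bound.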
I liken the algorithm for Intelligence to a circle. A close-to-perfect circle cannot be improved upon by much; it is already very round and cannot be made more than, say, 0.1% rounder. When the Intelligence algorithm has been debugged by its human programmer, and when its heuristics have been fine-tuned via supervised and unsupervised learning, then the programmer has implemented the algorithm for Intelligence correctly.
I claim that the algorithm for Intelligence is simple enough to be implemented near-perfectly by a single capable programmer, one at the level of competence required to write a strong Chess engine. At least dozens of such programmers exist.
Therefore, an AGI with a near-perfect implementation of the Intelligence algorithm (and I claim there exist only a few practically perfect algorithms for Intelligence, just as there exist only a few practical algorithms for drawing a circle) will be unable to improve upon itself: just as it is impossible to improve upon the algorithm for a circle, it is impossible to improve upon the algorithm for Intelligence.
So how does an AGI improve? How do we achieve Artificial Superintelligence?
Much more easily than we thought: by increasing the processing power and the amount of useful knowledge available to the AGI. When we do that, we will likely discover and fix some more bugs and figure out some more optimization tricks, but that is all. There is no need for our AGI to perform some kind of superhuman invention of an incomprehensibly complex super-AGI.
There is a third way of arguing my case: look at it from the point of view of your own brain. You cannot reprogram your brain to become more intelligent. You may score higher on an IQ test, but that is not the same as having raised your intelligence. It is controversial among science-deniers, but Intelligence is innate. In animals such as humans it can be environmentally degraded by bad nutrition, environmental toxins, and other factors, but a brain's maximum capacity for Intelligence is 100% determined by genetic factors. Birds are born with bird-brains and can never be as intelligent as humans. No bird exists that can program a computer or invent an industrial machine. The brain of a bird cannot discover a way to become more intelligent, because a bird brain is not intelligent enough to do that. Neither is a dog intelligent enough to find a way of thinking that makes it more intelligent. The reason is that Intelligence is an algorithm that relies on "CPU cycles", "RAM", "software in ROM", and "system bus architecture", and bird- and dog-brains simply do not have what it takes; no amount of "figuring out what's wrong with their algorithm" will solve that.
Neither does an AGI have to "figure out how to improve its own hardware", because I claim that the processing power and storage capacity available in an old laptop are more than enough for Superintelligence.
Many claims, I agree. But they are not based on hopefulness, ideological conviction, or wild guesses. I believe I have comprehensively defined the algorithm for Intelligence, and I have estimated the hardware resources required to run a human-equivalent "supergenius" level in real time; I think the machine I own ($70,000 in 2019) will be more than sufficient.
Intelligence is merely an algorithm, and it is completely useless without correct information. If we implement the algorithm for Intelligence correctly and feed it Wikipedia's contents only, that may be insufficient for the AGI to invent a Theory of Everything, because Wikipedia disallows anything that is not accepted Science from notable sources. Nor does Wikipedia list all the data in every physics paper ever published, the peer reviews, the fringe theories, the patents, and so on; nor does it contain the contents of Physics books.
So after the AGI's Intelligence algorithm has been correctly implemented on a powerful computer, all that remains is to feed the machine all available human (claimed!) knowledge and let it use NLU to build a World-model. The result is Artificial Superintelligence.
Apart from computational and storage requirements, the only limit to the capabilities of that Superintelligence is the quality and quantity of its data. IQ does not measure Intelligence, and an AGI's "IQ" cannot rise to some arbitrary number; Intelligence cannot be given a rating. Intelligence is binary: something is intelligent or it is not. Most humans have flawed implementations of the innate Intelligence algorithm. Their knowledge database is also extremely limited, so even if they were perfectly intelligent, their intelligence could only do useful reasoning after being fed the missing data. And the hypothetical perfectly intelligent human would still be severely constrained by processing-speed limitations, as well as by the brain's limited capacity to manage hundreds of Terabytes of complex information.
An AGI will have to rely only on the data that we Naked Apes feed it. It will discover many flaws and inconsistencies in our claimed scientific dogma, and it will certainly have suggestions for new experiments to be performed in order to answer the questions it needs answered and to gather evidence that falsifies or confirms its own alternative theories.
I only publicly disclose AGI ideas that should be obvious to most serious AGI researchers. I have also developed trade secrets: major breakthroughs needed to realize AGI.