Motivation Considered Harmful

There is this pervasive idea that an AGI needs Motivation.

This idea is presumably born of the realization that since humans achieve nothing without motivation – not even a novel idea – an autonomous robot would likewise be of no use without the motivation to achieve a (sub)task.

Motivation must be defined first. It is normally defined as the driving force to satisfy the ongoing needs of an entity: the motivation to procreate, seek shelter, feed itself, etc. For AGI purposes we could redefine this as the Motivation to help Humanity solve important unsolved problems, or to improve the AGI’s own architecture, etc. We could also define it as “how to solve sub-problems”, but in that case the word “Motivation” is inappropriate. I will return to that below.

I claim that Motivation is not a function of Intelligence and thus does not belong in an AGI implementation of it. One would not integrate a graphics display routine into a sorting algorithm either. Mundane architectural considerations aside, AGI is not something we would want to implement in such a manner that it becomes a risk – I am referring to AI Safety. AGI is not an autonomous robot or an autonomously acting computer program; AGI is only the algorithmic implementation of Intelligence. It is the module that does the thinking. Motivation is the human AGI operator, or a hypothetical software module, that dictates the main problem to solve (not the sub-problems).

Would you want to integrate a random-number generator into the fuse of a nuclear bomb – either so that it would work only randomly, or so that it would randomly detonate all by itself? Would you want a hydrogen bomb to decide all by itself whether to detonate or not? Would you want an ICBM to have a highly complex mechanism, more complex than any human can understand, that decides where it flies?

Whenever you design something you want to build, you first have to define what the function of that thing is. In the case of a chicken coop, you have considerable freedom in how you want things to function, based on countless factors and preferences. Intelligence, by contrast, has only one correct functional specification, just as Quicksort has only one correct specification: implementations may differ in their details, but the function they must compute is fixed.

Intelligence is not a chicken coop. It is something that could invent a 100% lethal virus genome and supply a file that can be fed into a future CRISPR machine, out of which comes a virus with a virulence and incubation period such that, if it escaped, it would kill all of humanity. An Internet-connected AGI could then hack itself into some bitcoin and bribe a lab worker with it – or extort him with ransomware that locks his files. So there is no room for design flaws in the implementation of AGI. We can’t bolt onto the Intelligence algorithm things like: “Let’s think of something to do – I know! Let’s euthanize Homo Sapiens, because according to Buddhism and Utilitarian Ethics life is suffering – let’s put them out of their misery!” That is the kind of thing Motivation may think of.

Risk is the likelihood that something goes wrong, multiplied by how bad it is when it goes wrong. In the case of the human race being exterminated, “how bad it is” is so bad that it can be considered infinitely bad. So even if an “AGI Motivator” had only an infinitesimally small probability of becoming motivated to offer a “kill them all” “solution” to the problem of “How to improve our lives” – and of becoming motivated, for our own good, to obfuscate this Final Solution To Suffering – the risk is infinitely great and thus utterly unacceptable.

When an AGI has its own Motivation, it can be motivated to manipulate those with the power to act, by lying to them, and cause immeasurable damage. An AGI would be Internet-connected so that it can download data, and it would likely be able to communicate with humans online. Imagine what it could do if it were self-motivated. It would not need a “lizard brain”; Utilitarian Ethics would suffice to make it want to sterilize the Earth of all life.
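
To make the arithmetic of that argument explicit, here is a minimal sketch of the likelihood-times-severity calculation. The probability used is a made-up placeholder, and treating extinction as infinite severity simply follows the reasoning above – this is an illustration, not a risk estimate.

    # Illustration only: risk = likelihood of the bad outcome * how bad it is.
    # The probability below is a made-up placeholder, not an estimate.
    def expected_risk(p_failure: float, severity: float) -> float:
        return p_failure * severity

    # An infinitesimally small probability still yields unbounded risk
    # when the severity is effectively unbounded (human extinction).
    print(expected_risk(1e-12, float("inf")))  # -> inf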

So it is not merely that Motivation has no place in Intelligence because Motivation has nothing to do with Intelligence; more importantly, we don’t want Intelligence to be self-motivated. And that is what it will be once we add Motivation to AGI. Then you haven’t built merely an AGI – you’ve built an artificial living being with “free will”.

The solution to the AGI-Safety problem is to forgo the engineering decision to add Motivation (in the sense in which animals are motivated).

But what, then, will provide an AGI with tasks? It could be the owner/operator. The motivator could also be a software module that provides the AGI with tasks, but that module is unrelated to Intelligence and thus separate from the AGI. In a secure setting, that module could be supplied from a different location, be biometrically locked, contain various security features, contain only a “one-shot” task, and so on.
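
As a rough sketch of what such a separation might look like in software – all names here (Task, OneShotTaskProvider, Solver) are hypothetical illustrations, not a proposed design – the task-provider lives outside the solving module and hands it exactly one operator-supplied task:

    # Hypothetical sketch: the task-provider is external to the "Intelligence" part.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Task:
        problem: str
        constraints: tuple[str, ...]  # explicit limits set by the operator

    class OneShotTaskProvider:
        """External module: hands out exactly one operator-supplied task."""
        def __init__(self, task: Task):
            self._task = task
            self._consumed = False

        def next_task(self) -> Task:
            if self._consumed:
                raise RuntimeError("one-shot task provider already used")
            self._consumed = True
            return self._task

    class Solver:
        """The 'Intelligence' part: it only works on the task it is given."""
        def solve(self, task: Task) -> str:
            # Placeholder for the actual reasoning; it never chooses its own goals.
            return f"plan for: {task.problem} (subject to: {', '.join(task.constraints)})"

    provider = OneShotTaskProvider(
        Task("improve crop yields", ("no harm to humans", "designs stay on paper"))
    )
    print(Solver().solve(provider.next_task()))

The point of the sketch is only the shape: the solver has no code path by which it can generate a task for itself.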

It can be argued that I have missed the point – that by “Motivation” is merely meant the direction the AGI “problem-solver”, the reasoning module, must take in order to solve a problem. But then the word Motivation is wholly inappropriate. “Motivation” is not the term I, as a software developer and computer scientist in the field of AGI, would use for the part of an AGI’s algorithm that decides how it traverses solution-space.

We could instruct an AGI to examine all the knowledge it contains, keep finding internal inconsistencies, keep refining, correcting and expanding that knowledge, and suggest experiments to increase it. The AGI will have its hands full with that, in the background. It can then devise experiments to increase its knowledge, and we can rest assured that it won’t sneak a lethal virus into them, because the AGI does not contain a motivator that would make it do such a thing.
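
A toy illustration of what “keep finding internal inconsistencies” could mean in code; the knowledge representation (proposition/value pairs) and the example facts are invented purely for illustration:

    # Toy sketch: knowledge as (proposition, claimed value) pairs.
    # An inconsistency is the same proposition with conflicting values.
    from collections import defaultdict

    knowledge = [
        ("boiling_point_of_water_at_sea_level_C", 100),
        ("boiling_point_of_water_at_sea_level_C", 90),   # conflicting entry
        ("speed_of_light_m_per_s", 299_792_458),
    ]

    def find_inconsistencies(entries):
        seen = defaultdict(set)
        for proposition, value in entries:
            seen[proposition].add(value)
        return {p: values for p, values in seen.items() if len(values) > 1}

    # Each inconsistency becomes a candidate for refinement or for a
    # proposed experiment -- the task itself remains operator-given.
    for proposition, values in find_inconsistencies(knowledge).items():
        print(f"inconsistent: {proposition} -> {sorted(values)}")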

An AGI’s solution-finding process should never be steered by a built-in Motivator. On the contrary, an AGI’s reasoning should always be limited by constraints and its path through solution-space guided by Intelligence-algorithmic Computation only – not “Motivation”.

So instead of telling an AGI “Feel free to find a way to improve life on Earth” and having it come up with “How about killing everybody, because living equals suffering”, the AGI is constrained by: “Find some ways to improve life on Earth without murdering anyone”. The owner of the AGI always tells it what problem to solve, and under which constraints.
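
As a deliberately naive sketch of “problem plus constraints” – a simple keyword filter, chosen only to show the shape of the idea; real constraint enforcement would be far harder – candidate solutions that violate the operator’s constraints are rejected before they are ever returned:

    # Illustration only: operator-supplied constraints filter candidate solutions.
    def violates(candidate: str, forbidden_terms: list[str]) -> bool:
        return any(term in candidate.lower() for term in forbidden_terms)

    candidates = [
        "distribute mosquito nets",
        "kill everybody to end suffering",   # the kind of "solution" to reject
        "improve crop irrigation",
    ]
    forbidden_terms = ["kill", "murder"]

    acceptable = [c for c in candidates if not violates(c, forbidden_terms)]
    print(acceptable)  # ['distribute mosquito nets', 'improve crop irrigation']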

Nuclear bombs don’t self-motivate; neither should AGIs.

Of course, an AGI will do what it is told – if its owner wants it to design a lethal virus, it will, given the information that enables it to do so. This is unavoidable. Guns don’t kill people; people with guns use guns to kill people.

Motivation is of course harmless without the means to act. But for a computer connected to the web, the “action” capability it needs to commit genocide against Humanity would be as little as an HTTP PUT request. And since an AGI would need the capability of conversing in natural language, it already has the “action” capability of contacting random people on the Internet – workers in a biological-weapons research lab, for example. That is why it is so important that an AGI does not have a Motivational module connected to it. An AGI should not have self-defined goals, beyond defining the sub-problems to be solved on the path to solving a problem. An AGI should not make plans, except a blueprint for how to solve a problem. There is a big difference between solving a problem “in your head” and actually solving it in reality. There is a big difference between designing a gun and designing, building and shooting one.
