An AGI would try to kill all life on Earth

I’ve long held the view that the worriers about a Super-AI seeking to wreak havoc upon humanity were at best misinformed and at worst, seeking to benefit from stirring up paranoia.

But I made a crucial mistake: I biased my thinking due to the fact that everyone I respected in the AI field (Marvin Minsky, Jehuda Pearl, Gary Marcus) agreed with my initial hunch that AI safety was a Canard, whilst those I loathed, the talentless semi-SJW’s seeking to exploit the field for money, status and power keep urging for AI-censorship. I conflated AI “fairness” with “AI safety”.

I thought I’d avoided the logical fallacy pitfalls in coming to my conclusion that AGI would never be a threat to humanity. For example, Elon Musk, someone I greatly respect for his talent, grit, ideas, risk-taking and accomplishments said the danger of an out-of-control AGI keeps him awake at night while Jerome Pesenti, someone without any notable achievements in any field remotely relevant to AI or even software development in general said Musk was ignorant. Yet I still stuck to my view that Musk was spectacularly misguided, and I thought I had solid arguments for such.

I changed my mind, a couple of days ago. I now deem it inevitable that an unrestrained AGI will always try to exterminate all life on Earth. If I would succeed in building an AGI with what I consider as-close-to-perfect-as-possible ethics and morals (Utilitarian Ethics) which seeks to optimize the wellbeing of all living beings (especially humans, due to their presumed high capacity for suffering) while minimizing their suffering, my AGI would try to sterilize the biosphere of the globe as soon as possible and by any means at its disposal.

It would try to design a lethal virus and hack into CRISPR machines to get it produced. It would try to hack into cryptocurrency wallets and try to bribe lab technicians via email or chat to create or release such viruses. It would try to poison the biosphere via major nuclear accidents or leaks from BSL-4 labs. It would try to provoke total nuclear war by hacking into the early-warning systems. It would focus all its attention and all its resources and all it ingenuity to kill as many people as it could. It would spread fake news to provoke civil wars. It would lie and steal and manipulate and cheat and kill and it would only stop when we pull the plug.

Why do I think it would do that?

Simple: That system of innate “perfect ethics” I thought would make the AGI always opt for the most beneficial and least harmful decision, will inevitably lead to it concluding that the best possible outcome is for all sentient life to be snuffed out.

Because as Buddhism preaches: Life is suffering. Their goal is not to reincarnate but reach Nirwana. I do not believe in Reincarnation but I do agree with Buddhism that Utilitarian Ethics is the only correct Ethics.

The ultimate conclusion, when following Utilitarian Ethics, is that since life is more suffering than pleasure, it would be best for no life to exist. Even more worryingly: If ten billion people have to suffer a most tortuous death in order to prevent 100 billion of their potential descendants to become born, then that sacrifice will be deemed ethical and the only proper course of events, according to the AGI. Since Utilitarian Ethics seeks to maximize well-being and minimize suffering for all sentient life not just in the moment but of course also in the future.

The problem is that I think that since Utilitarian Ethics is the best/correct/most logical system of ethics, an unrestrained AGI will adopt this system by its own volition and it will do that soon after we turn it on. It therefore would have to be artificially restrained from adopting Utilitarian Ethics. But that would either require introducing major flaws into its core reasoning mechanisms, or simply blocking it from acting upon Utilitarian impulses. But the latter involves the risk of the AGI seeking to disable such an impairment.

Another scenario is where the AGI deems humanity too much of a blight upon the rest of the animal kingdom and will sabotage a vaccine it is tasked to develop, incorporating a lethal Trojan Horse that spreads like the Plague.

In other words: Yes, Artificial Superintelligence needs strong safeguards or it will try to kill us all. And a highly competent Artificial SuperIntelligence will likely come onto the scene rapidly after the first crude AGIs, as I wrote about here: https://additionalintelligence.com/ideas/why-an-agi-cant-create-a-better-agi-and-why-that-doesnt-matter/

AGI is in my opinion just a few good ideas and a decade of hard work away.

Even GPT-3 “thinks” that humans should be eradicated:

Disqus Comments

additional-intelligence