Super-AGI and the Source Quality Problem

I talk about hiding R&D secrets from my “competitors”, but I’m not in this for the money. I wanted a challenge, something as useful as possible to do with my final years.

Artificial General Intelligence will come in different flavors. Initially it will be rather mundane, powering customer service chatbots. More advanced embodiments may be used to find flaws in contracts or mount a defense in complex lawsuits.

I’m not interested in such applications. Those merely abolish jobs or make them easier.

I’m only interested in Super-AGI, which I define as a system more intelligent than any human, or than humanity as a whole. Such a system would be useful for coming up with new theories and would suggest promising experiments: room-temperature superconductors for more efficient EVs and electricity transport, heat-resistant crops, and other things that allow us to continue to exist on this globe a little longer. Of course no one will listen to its commonsense advice (stop breeding, stop destroying the biosphere, etc.), but when there’s money to be made in STEM fields, the game is on, and that may suffice to avert our demise. Not that humanity is worthy of that, but hey – I just wanted a challenge, okay?

Super-AGI is, in theory, relatively easy to achieve once AGI has emerged on the scene, since the core algorithms remain the same: it’s mostly a matter of increasing the quality and quantity of the data, as well as the thoroughness, speed and completeness with which the algorithms are run. The enormous roadblock is source quality.

Another use for Super-AGI would be telling us the truth about things we have been mistaken about. Historical events, scientific theories, that sort of thing. This may help some people to calm down when they should, others to become angry when they should, and yet others to “see the light” and do something useful with it.

The quality of the “facts” fed into an AGI is important for its ability to arrive at super-human ideas or conclusions. Of course, a robust system will be able to detect flaws and assign probabilities of veracity, but this only goes so far when, for example, 1000 sources all claim falsehoods and no source claims the truth. Or when there are no factually accurate, readily available sources at all, even though the data does or did exist, somewhere.
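To make that failure mode concrete, here is a minimal sketch (all numbers and names are hypothetical, not any particular system’s method) of a naive Bayesian veracity estimator that treats sources as independent witnesses. If every available source repeats the same falsehood, the posterior converges on the falsehood no matter how modest each source’s assumed reliability is:

```python
# Hypothetical sketch: naive Bayesian "veracity" weighting over independent sources.
# If all sources affirm the same falsehood, the estimator becomes certain of it.

def posterior_true(prior: float, reports: list[bool], reliability: float = 0.7) -> float:
    """Probability a claim is true after seeing sources that affirm (True) or
    deny (False) it, assuming each source independently reports the truth
    with probability `reliability`."""
    p_true, p_false = prior, 1.0 - prior
    for affirms in reports:
        like_true = reliability if affirms else 1.0 - reliability
        like_false = (1.0 - reliability) if affirms else reliability
        p_true *= like_true
        p_false *= like_false
        total = p_true + p_false
        p_true, p_false = p_true / total, p_false / total  # renormalize each step
    return p_true

# 1000 sources all affirming a falsehood, none contradicting it:
print(posterior_true(prior=0.5, reports=[True] * 1000))  # ~1.0: certainty in the falsehood
```

The point of the sketch is only that source weighting of this kind cannot recover a truth that no source states; the coverage and quality of the underlying data set a hard ceiling on what any such system can conclude.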

Just as, in the past, nearly everything we thought to be true turned out to be false, many things we now deem true will eventually be proven incomplete or incorrect. Perhaps less so, but consider that half of the scientific research in many fields is irreproducible. There is also consensus that quantum mechanics is incomplete and self-contradictory.

Then there are scientific theories that we’ve been told are solid, but whose details are murky to the point of being absent or absurd. Evolution, for example. It’s clear that Evolution is a fact and that genes are involved, but no one has a clue as to what drives it. The claim that random mutations drive it is, in my view, trivially refuted with basic statistics. So a Super-AGI may help us discover how Evolution really works, and perhaps let us use that to our advantage.

The problem remains source quality. How will we ensure that a Super-AGI detects missing, falsified and erroneous data? And how will it obtain rare, missing, hard-to-obtain or even censored or restricted data? Not necessarily data censored for political, ideological or dogmatic reasons, but simply data deemed academically irrelevant. Wikipedia certainly is not going to help, since its sources have to be “notable”, which in practice means generally accepted as common knowledge or having appeared in a Wikipedia-approved medium. By definition, the data that would lead to debunking current dogma won’t appear in Wikipedia.

Wikipedia’s original sources are of no use either. What’s needed are, for example, certain patents, scientific papers, published ideas or witness statements that are not in the mainstream because they are not considered true or relevant. Even a genius needs a minimal amount of correct, key data to work with in order to arrive at a groundbreaking idea.

Facts have always been highly politicized, and this will never change, so a Super-AGI will be censored and restricted in what information it can access and in what it can think and opine, just as humans are, more or less, subject to these restrictions.

In spite of my pessimism about how much a Super-AGI will be at liberty to freely obtain the research it requires to arrive at the conclusions I think humanity needs, in spite of my pessimism about what it will be permitted to say or what will be published of what it says, and in spite of my pessimism about how much of what it says will be believed or acted upon, I will focus solely on solving the engineering aspects of building AGI.

