My first and second steps towards AI

My first computer, the Acorn Atom (1980)

I got an Acorn Atom for my sixteenth birthday in 1981: 1 MHz, 8 bits, and 512 bytes of free RAM for BASIC or assembly.

One of my first programs was an “AI”. I would enter:

>Fido is a dog
>Dogs have legs
>Does Fido have legs?

It would output:

>Yes because Fido is a dog and dogs have legs.

The machine was too limited to do much more. It understood singular and plural forms and could make these simple generalizations. I was fascinated by AI at the time, and had been since age seven, when I desperately tried to find more library books about computers and even built my own “computer” out of a painted box, adorned with lamps, switches, batteries and wires.
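For the curious, here is a minimal sketch, in Python rather than the Atom’s BASIC, of the kind of fact-and-rule lookup that produces an answer like the one above. The parsing and the naive trailing-“s” plural handling are my assumptions for illustration, not a reconstruction of the original program.

```python
# A toy reconstruction (in Python, not Atom BASIC) of the fact/rule lookup
# described above. The parsing and the trailing-"s" plural rule are
# simplifications for illustration only.

is_a = {}   # individual -> category, e.g. "fido" -> "dog"
has  = {}   # category   -> attribute, e.g. "dog"  -> "legs"

def singular(word):
    return word[:-1] if word.endswith("s") else word

def tell(sentence):
    w = sentence.lower().rstrip(".").split()
    if len(w) == 4 and w[1:3] == ["is", "a"]:   # "Fido is a dog"
        is_a[w[0]] = w[3]
    elif len(w) == 3 and w[1] == "have":        # "Dogs have legs"
        has[singular(w[0])] = w[2]

def ask(question):
    w = question.lower().rstrip("?").split()    # "Does Fido have legs?"
    name, attribute = w[1], w[3]
    category = is_a.get(name)
    if category and has.get(category) == attribute:
        return (f"Yes because {name.capitalize()} is a {category} "
                f"and {category}s have {attribute}.")
    return "I don't know."

tell("Fido is a dog")
tell("Dogs have legs")
print(ask("Does Fido have legs?"))
# -> Yes because Fido is a dog and dogs have legs.
```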

I’m 54 now, with decades of software development under my belt, and I bought the “biggest” machine that the fuse on its power socket and the capacity of its air conditioning could handle. It’s not that AI would need such a powerful machine; it’s just that being able to store and back up enormous amounts of data, and to process that data in a massively parallel fashion, speeds up development.

I do not think it a viable approach to feed an enormous amount of complex data into a black box and expect AI to ever emerge. I believe in baby steps, in bootstrapping an AI the way a toddler learns to use language and absorbs knowledge of the world. A certain amount of innateness needs to be implemented.

I also think that jumping in and starting to code all kinds of algorithms is a brazen, reckless approach. One needs to spend a long time pondering the issues and start at the beginning: what is intelligence, what are the things it does, and how can those processes be implemented in a computer?

But since I already have one foot in the grave, it didn’t make sense to sit on my ass until I finally knew what to start programming, so in the meantime I collected a gigantic amount of data that the system should eventually be able to learn from. I will also let the system learn language from a massive corpus of text from various sources. Data acquisition took a full year on a 200 Mb/s line.

As the files downloaded, I thought and thought and wrote several whitepapers on Natural Language Understanding, Knowledge Representation and what, concretely, an algorithm for General Intelligence would look like. I’m uninterested in the academic literature or what academics opine, since none of it is of much use, and whenever I googled a subject I noticed that my own ideas for solving sub-problems were on a par with the state of the art or above it. I do read about the many forms of reasoning and logic, about failed attempts to implement AI, and books on the challenges of AI. “AI” being True AI, of course, and not the Connectionist nonsense they call AI nowadays.

I’m now at the stage where I’ve refined some of these whitepapers to a level where I can start coding. I’m also writing modules that clean up the textual data I extracted from PDFs. I bought the source code of a PDF-to-TXT converter and fixed dozens of showstopper bugs in it, but there were still issues: multi-column text, ligatures that weren’t converted, abbreviated words that had to be restored to their original form, headers, footers and page numbers that had to be removed, and bad OCR that had to be corrected. And of course, anything that is not high-quality modern English should be discarded, at least for now. Since my raw data is dozens of TB across millions of large files and the cleanup process is CPU-bound, I’m glad to have 128 cores to let 100 processes run in parallel.
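As an illustration only, not the actual pipeline, a cleanup worker along these lines could normalise ligatures, drop page-number lines, undo end-of-line hyphenation and fan the files out over roughly 100 processes. The corpus path, output naming, patterns and heuristics below are placeholders.

```python
# A rough sketch of the kind of cleanup pass described above, not the actual
# pipeline: normalise ligatures, drop page-number-only lines, re-join words
# hyphenated across line breaks, and fan the work out over ~100 worker
# processes. Paths, patterns and the choice of heuristics are illustrative.
import re
import unicodedata
from multiprocessing import Pool
from pathlib import Path

PAGE_NUMBER = re.compile(r"^\s*\d{1,4}\s*$")        # lines that are only a page number
HYPHEN_BREAK = re.compile(r"(\w)-\n(\w)")           # "exam-\nple" -> "example"

def clean_text(raw: str) -> str:
    text = unicodedata.normalize("NFKC", raw)       # fold ligatures like ﬁ/ﬂ into fi/fl
    text = HYPHEN_BREAK.sub(r"\1\2", text)          # undo end-of-line hyphenation
    lines = [ln for ln in text.splitlines() if not PAGE_NUMBER.match(ln)]
    return "\n".join(lines)

def clean_file(path: Path) -> None:
    raw = path.read_text(encoding="utf-8", errors="replace")
    out = path.with_suffix(".clean.txt")            # hypothetical output naming
    out.write_text(clean_text(raw), encoding="utf-8")

if __name__ == "__main__":
    files = list(Path("corpus").rglob("*.txt"))     # hypothetical corpus location
    with Pool(processes=100) as pool:               # CPU-bound work, many workers
        pool.map(clean_file, files, chunksize=64)
```

Unicode NFKC normalisation happens to fold the common ﬁ/ﬂ compatibility ligatures back into plain letters, which is why it is used here instead of a hand-written mapping table.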

Someone familiar with my work in the early-to-mid noughties may notice that I do not consider my pioneering work in Machine Learning for Go to be a step towards AI, even though it was groundbreaking to “connectionists” and laid the foundations for AlphaGo.
