It’s 3am right now. I’ve been drifting off to sleep with Spotify mobile, only to wake up with a start, a weird thought rattling round my head.

There’s a well-known idea called “the technological singularity”, which is the point where machine intelligence outstrips human intelligence – where perhaps the machines become self-aware, too. Its treatment in popular fiction has basically been either Short Circuit or Terminator, but both have that core self-awareness as the theme (after all, a super-intelligent computer that doesn’t know it’s the smartest guy in the room doesn’t make for a great movie).

What prompted me to start thinking about this is the fact that earlier tonight I played through the developer commentary for Left 4 Dead, and they were talking about music being at the “pre-conscious” level. Some part of my brain stored this, and with consciousness drifting (well, pretty much drifted, actually), suddenly two and two hit together and made four. But to explain my thinking, I’m going to need to get a bit techy first.

I work most of my days in a language called C#. It allows you to put items together in classes, creating objects with properties and methods. Most of the reason I don’t like object orientation is the awful terminology, if I’m being honest, so let me explain a bit (professional programmers can – and probably should, if they value their blood pressure – skip the next paragraph, unless you’ve only ever used COBOL).

A class is kind of a blueprint for a machine. It has properties (dials, read-outs) and methods (buttons to press, slots to enter information). Plus it’s a specific shape, so it can’t be plugged into the wrong place – it only fits where its interface fits. Most importantly, as with the TV remote, I don’t need to know how it works unless it breaks or I need to make a better one with more read-outs or more buttons to press.
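If it helps to see it, here’s a minimal sketch of that remote-control idea in C# – the names are made up purely for illustration, nothing more:

```csharp
// Made-up names, purely to illustrate the blueprint idea.
public interface IRemoteControl
{
    int Volume { get; }       // a read-out (property)
    void PressVolumeUp();     // a button (method)
}

public class TvRemote : IRemoteControl
{
    // The hidden internals – nothing outside the class ever sees this.
    private int _volume;

    public int Volume => _volume;   // the dial you can read

    public void PressVolumeUp()     // the button you can press
    {
        if (_volume < 100)
        {
            _volume++;
        }
    }
}
```

Anything that wants a remote only ever asks for an IRemoteControl; how TvRemote keeps track of the volume internally is nobody else’s business.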

The important point, though, is that a class, like the “pre-conscious” part of the brain, hides what it does. So far, the job of building an AI, and getting to the point of “the singularity”, simply looks like a problem of not having enough processing power, and of not yet having written enough self-contained units (classes, class libraries or what have you) to process the raw “thoughts”.

And what occurred to me is that these programs are essentially describing ourselves. The methods of object-oriented programming – building ever higher, ever more abstract levels of comprehension – are simply mimicking the human thought process. Surely, the more time we spend on it, the closer we’ll get to describing how we think, just in pure computer code, right?
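To make that layering concrete, here’s a toy sketch – entirely hypothetical names, and wildly oversimplified – of what those stacked abstractions might look like in C#:

```csharp
// Hypothetical layers, wildly simplified, purely for illustration.
public class VisualSystem
{
    // Low-level grunt work: edges, colours, movement.
    public string Detect() => "edges, movement, something large and red";
}

public class PatternMatcher
{
    // One layer up: turns raw detections into a labelled thing.
    public string Recognise(string rawInput) =>
        rawInput.Contains("red") ? "a bus" : "nothing interesting";
}

public class Consciousness
{
    // The top layer deals only in the big picture; it never sees
    // the pixel-level work going on underneath.
    private readonly VisualSystem _eyes = new VisualSystem();
    private readonly PatternMatcher _recogniser = new PatternMatcher();

    public string Notice() => _recogniser.Recognise(_eyes.Detect());
}
```

The top layer never touches the low-level detection work; it just composes the layers beneath it – which is more or less the trick I’m suggesting our own consciousness pulls.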

Except…

Kurt Gödel has this little thing called “the incompleteness theorem”. Essentially, he proved mathematically that no system of mathematics rich enough to be useful can fully describe – or vouch for – itself.

What he’d actually set out to do was something like the opposite: to pin down a very simple, fully self-describing system – something he could mathematically prove every part of, and that could account for itself. And it just couldn’t be done. Whatever you bolted on to describe the system, the whole thing just got more and more complex – and still never managed to describe itself.

So his theorem boils down to this – you can never fully describe a system from within itself, only from within a more complex system sitting outside it.
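For the curious, the heart of the proof – sketched very roughly here, and skating over all the machinery of Gödel numbering – is a statement that talks about its own provability:

```latex
% G is built so that, read inside the formal system F, it says
% "G is not provable in F":
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% If F is consistent it can neither prove nor refute G, and (the second
% incompleteness theorem) F cannot even prove its own consistency:
F \nvdash \mathrm{Con}(F)
```

Either way, F needs something richer than itself to settle questions about F – which is the bit that matters here.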

And what woke me up at 3am was the realisation that we’re quite possibly doing exactly that with modern computer programming. Just as consciousness is an abstraction layer over the other layers of the brain – ignoring the in-depth work of the visual systems and focusing on the big picture – we’re creating layers in computer programming. We’re mimicking the way our own brains work in the way we write our programs.

And that means that although we might be able to fully mimic the brain of a creature less complex than ourselves, there isn’t a way – not yet, not ever – for us to deliberately set out to create a machine more intelligent than the human mind. The only mind that could do that would already be far more intelligent than a human anyway.

3 thoughts on “The Singularity vs Kurt Gödel”

  • Wow! This is from 2012! Coming here was basically a question of typing (almost word for word) the title of your post, because after learning about Gödel’s theorem I was immediately interested in the discussion (I imagined) there would be about its implications for the idea of the Singularity (as described by Kurzweil). Turns out, there isn’t much of a discussion. The Singularity has become more than a cultural idea or a metaphor. It’s a religious and ideological fixation, just like the Rapture is for some (a lot of) Evangelicals. Your text confirms (from a nice eureka-like moment) that human intuition can also (and still) be a source of insight. And this humbling idea that Gödel proved mathematically: that human reason has limits, that it’s not divine. (When we think of it as absolute, as platonic fixed laws to be discovered, that’s when, sadly, science starts resembling religion.)

    • simon

      It’s interesting that there isn’t much discussion – I think Gödel’s incompleteness theorem has profound implications for the limits of AI that nobody’s really thinking about.
