We are not prepared

Thoughts on AGI and the Fermi paradox

Almost anyone who has shared a few beers with me will have heard me go on about the Fermi paradox.

(For those who haven't, and/or don't know what it is: Fermi was at lunch with colleagues discussing the search for extra-terrestrial intelligence, and asked "So where is everybody?" - the paradox being that, given the age and size of our galaxy, if life can exist anywhere (and obviously it can) it ought by now to be evident everywhere (and it isn't). Obligatory Wikipedia here and a favourite book on possible solutions here).

Over the past year I've revisited the paradox and the solution I feel is most likely: that we're actually in a simulation, and in this simulation, we are on the only planet with life. You might object that a universe-sized simulation, even one with only a single planet of life, is ridiculously (perhaps impossibly) complex. But sometimes I go further and suppose that *I* am in a simulation, and everyone and everything else is part of that. I have no proof, after all, that you actually exist, dear reader, any more than you have proof that I actually exist - even if we were to meet. The simulation, then, only needs to simulate what I can see at any moment, and only needs a memory for the things I do and the "people" I interact with - which is very far from the entire universe! Such a technical feat is already eminently possible, to a reasonable degree, especially with a 6-to-7-hour maintenance window on a typical day...
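(If you want a sense of just how cheap that could be, here's a toy sketch - Python, entirely my own invention, nothing rigorous - of an observer-centric simulation: patches of world are generated only when I look at them, and only my interactions are remembered. The rest of the "universe" is never computed at all.)

```python
# A toy, observer-centric "simulation": patches of the world are generated only
# when the single observer looks at them, and only the observer's own
# interactions are remembered. Nothing else is ever computed or stored.
# (All names here are my own invention, purely for illustration.)

import random

class LazyUniverse:
    def __init__(self, seed=42):
        self.seed = seed       # one seed stands in for "the laws of physics"
        self.rendered = {}     # only the patches that have actually been observed
        self.memory = []       # only the things the observer has done

    def observe(self, location):
        """Generate a patch of world on demand, deterministically from the seed."""
        if location not in self.rendered:
            rng = random.Random(hash((self.seed, location)))
            self.rendered[location] = {
                "scenery": rng.choice(["pub", "hillside", "concert hall"]),
                "people": rng.randint(0, 5),   # "people" exist only while observed
            }
        return self.rendered[location]

    def interact(self, location, action):
        """Record an interaction so the world stays consistent next time I return."""
        patch = self.observe(location)
        self.memory.append((location, action))
        return patch

observer_world = LazyUniverse()
print(observer_world.observe("the pub"))             # rendered only now
observer_world.interact("the pub", "order a beer")   # remembered for consistency
print(len(observer_world.rendered), "patches ever computed - not a universe")
```

Scale that idea up to one human field of view and one lifetime of memories, and it's still a vanishingly small fraction of a universe.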

Given that I have no proof that we're in a simulation - and given that a sufficiently advanced simulation presumably wouldn't let proof of its existence become known to those inside it - I'll proceed on the basis that we - yes, we! you're back! - are not in a simulation, but in the Real World.

That brings us back to the Fermi paradox.

One common set of solutions proposes a "great filter": something that every intelligent civilisation will encounter sooner or later, and which stops that civilisation developing further. There are various problems with "great filters", the most important being that it would have to be an extremely strong filter indeed not to let a single civilisation, from any of our galaxy's ~80 billion Sun-like solar systems, bypass it in the past 13.7 billion years. Because one escapee is all it would take, given how long we know (from our sample size of one) it takes intelligent life to arise: if a single civilisation got past the "great filter", it would have had plenty of time to colonise the entire galaxy, and should be plain for us to detect.
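To put rough numbers on "plenty of time" (my own back-of-envelope arithmetic, not anything from the sources linked above): the Milky Way is about 100,000 light years across, so even a civilisation creeping outwards at a thousandth of the speed of light would cross it in roughly 100 million years - less than one percent of the time available.

```python
# Back-of-envelope: time to cross the galaxy vs. time available.
# (Rough illustrative figures of my own, not from the article or book linked above.)

galaxy_diameter_ly = 100_000      # Milky Way diameter, ~100,000 light years
expansion_speed_c = 0.001         # a sluggish 0.1% of the speed of light
time_available_yr = 13.7e9        # the 13.7 billion years mentioned above

# Distance in light years divided by speed as a fraction of c gives years.
crossing_time_yr = galaxy_diameter_ly / expansion_speed_c
print(f"Crossing time: {crossing_time_yr:.0e} years")                          # ~1e+08
print(f"Share of time available: {crossing_time_yr / time_available_yr:.1%}")  # ~0.7%
```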

Such a filter would have to be ahead of us - if we'd already got past it, so would plenty of other intelligent life-forms have, given what we know about how intelligent life arises. They've literally had all the time in the world!

What kind of great filter would be so strong as to prevent any intelligent life at all getting past it? A filter that would leave our entire galaxy barren, apart from the occasional spot in which a few RNA molecules combine and evolve for a short while, before they too get filtered? Even gamma ray bursts - for example - aren't really sufficient.

This brings me back to the past year or so. You know: GPT-3, GPT-4, LLMs, and so on. My beer-sharing buddies will know that I'm not exactly joyful about the fruits of these technologies. But I hadn't considered that they form a "great filter" - until I read what Eliezer Yudkowsky had to say about the latest iterations of AI, and the proposed six-month moratorium on their development.

Yudkowsky is one of the founders of the field of AI alignment research. He knows what he's talking about. And what he has to say - in this recent article in Time magazine - is incredibly bleak, and I agree with him, for what it's worth. This seems to be (read the article, no need to take my word for it) a far more threatening problem than climate change, nuclear war, or pandemics - and certainly than the many silly "problematic" issues we get worked up over as a society these days.

Here is our great filter. Every intelligent life form sooner or later invents AGI, and every AGI, by the nature of what it is, sooner or later takes apart its inferior creators. Paperclips ain't going to make themselves (not at first anyway). Yudkowsky demands that we stop, stop now, before it's too late, but I'm not sure that's possible. I'm reminded of King Canute.

Do read the full article (it's not the most cheerful thing you'll read today, sorry-not-sorry); here's a short quotation to whet your appetite.

Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.

We are not prepared.

Then again - if he's right and I'm right, why isn't the universe full of AGIs? There's a small crumb of comfort for you. Fancy a beer?