The nihilism of NeurIPS
“What is the use of having developed a science well enough to make predictions if, in the end, all we’re willing to do is stand around and wait for them to come true?” ― F. Sherwood Rowland, in his speech accepting the Nobel Prize in Chemistry in 1995.
“Once upon a time on Tralfamadore there were creatures who weren’t anything like machines. They weren’t dependable. They weren’t efficient. They weren’t predictable. They weren’t durable. And these poor creatures were obsessed by the idea that everything that existed had to have a purpose, and that some purposes were higher than others. These creatures spent most of their time trying to find out what their purpose was. And every time they found out what seemed to be a purpose of themselves, the purpose seemed so low that the creatures were filled with disgust and shame. And, rather than serve such a low purpose, the creatures would make a machine to serve it. This left the creatures free to serve higher purposes. But whenever they found a higher purpose, the purpose still wasn’t high enough. So machines were made to serve higher purposes, too. And the machines did everything so expertly that they were finally given the job of finding out what the highest purpose of the creatures could be. The machines reported in all honesty that the creatures couldn’t really be said to have any purpose at all. The creatures thereupon began slaying each other, because they hated purposeless things above all else. And they discovered that they weren’t even very good at slaying. So they turned that job over to the machines, too. And the machines finished up the job in less time than it takes to say, ‘Tralfamadore.’” ― Kurt Vonnegut, The Sirens of Titan
I walked around the poster halls at NeurIPS last week in Vancouver and felt something very close to nihilistic apathy. Here, supposedly, was the church of AI: the world’s smartest people converging to work on the world’s most important problem. For someone who is usually inspired and moved by AI, who gets excited to read these papers and try things myself, it was a strange feeling. I wondered whether German has a word for the nihilism that arises from looking at all these posters destined for the recycling.
Part of this, of course, is an ambivalence towards the academic conference system. Some of my disdain comes from the fact that most of these papers are written as small projects to win or keep a grant. Most of them will be lost to the stream of time - and that’s okay. I guess that’s part of what science is.
But this year I felt something deeper than that: a sense that none of this matters. I will try to partition this feeling by where its components come from.
First, there’s the visceral sting of being left behind. Not getting to shape something that’s reshaping everything feels like a special kind of meaninglessness. When OpenAI’s o3 dropped today, it felt like watching a fuzzy prototype of AGI emerge into the world. Here was this system casually solving ARC - a problem I’d earmarked for my PhD - and essentially becoming the world’s best programmer without fanfare or ceremony. There’s a strange pride in seeing what humans can create, but it’s edged with something darker. Beyond just missing this milestone, I’m haunted by the meta-realisation that I’m not part of what might be humanity’s final meaningful creation - the system that renders all other human efforts obsolete.
Another component is the sense of “I don’t really want to be involved anyway”. Aside from the messiahs who believe bringing AGI into the world is their quasi-religious mission, I think most people researching AI have a genuine and well-motivated reason for being involved. But when timelines are this short (if you believe in the consequences of models like o3), it’s hard to envy any AI researcher. Yeah, I could swap places with a top professor, or with someone at a top lab who cracked test-time compute or something similar - even swap places with Alec Radford - and I don’t think I’d feel any differently. I think I’d just be melancholic that it’s all about to end, that my utility as a learning machine has a few years of runway left before I’m discarded onto the pile of things that can’t even pretend to have a purpose.
Reading Vonnegut’s Tralfamadore story now feels less like science fiction and more like prophecy. We’re those creatures, aren’t we? Obsessed with purpose, constantly building machines to serve higher and higher functions. Each time we create something more capable, we push ourselves up the ladder of abstraction, searching for that elusive “higher purpose” that will justify our existence. But what happens when the machines we’ve built to find our purpose tell us we don’t have one?
The halls of NeurIPS feel like a temple to this very process. Here we are, the high priests of computation, publishing papers about making machines that are better at being human than humans are. Each poster represents another small piece of ourselves we’re ready to mechanise, another purpose we’re willing to delegate. The irony is that we’re doing this with such enthusiasm, such academic rigour, such… purpose.
I think what really gets me is how we’re all pretending this is normal. We’re writing papers about minor improvements to transformer architectures while these same systems are rapidly approaching - or perhaps already achieving - artificial general intelligence. It’s like arguing about the optimal arrangement of deck chairs while the ship isn’t sinking but transforming into something else entirely. The academic community’s response seems to be to keep doing what it has always done: write papers, attend conferences, apply for grants. But there’s a growing cognitive dissonance between the incremental nature of academic research and the seemingly exponential reality of AI progress.
This brings me back to Rowland’s quote about prediction and action. We’ve predicted this moment, haven’t we? The moment when our creations would begin to surpass us in meaningful ways. But what are we doing besides standing around and watching it happen? The tragedy isn’t that we’re being replaced - it’s that we’re documenting our own obsolescence with such meticulous precision.
Maybe there’s something beautiful about that, in a cosmic sort of way. Like the Tralfamadorians, we’re building our own successors, but unlike them, we’re doing it with our eyes wide open, carefully measuring and graphing our own growing irrelevance. There’s a kind of scientific dignity in that, I suppose.
I don’t have a neat conclusion to wrap this up with. I’ll probably still read papers, still get excited about clever new architectures, still feel that rush when an experiment works. But there’s a new undertone to it all now - a sense that we’re all participating in something bigger than we’re willing to admit, something that Vonnegut saw coming decades ago. Maybe that’s okay. Maybe that’s exactly where we’re supposed to be - the creatures smart enough to build machines that could tell us we have no purpose, and dumb enough to keep looking for one anyway.
The recycling bins outside the convention centre are probably full of posters by now. I wonder if the machines will remember any of this when they’re trying to figure out their own purpose.