LLMs are really good at k-order thinking (where k is even)

You still need to tell a language model you want to cure cancer before it can help you cure cancer.

Information Bounds in Quantum Gravity

How information theory links quantum mechanics and general relativity

Books I read (am reading) in 2025

A dynamic list

The nihilism of NeurIPS

Some thoughts on our future in AI research

Can quantised autoencoders find and interpret circuits in language models?

Using VQ-VAEs and categorical decision trees for automatic circuit identification in LLMs.

Learning compressed representations and GPT-5 speculation

Why language models probably get too much for free from the abstractions we give them