Overview
When AI takes over our work, life and love---will we be happier?
The world is racing towards a giant leap in comfort, abundance and pleasure. Artificial Intelligence promises to solve many of our problems faster and better than ever before, and all signs point towards this happening in the very near future.
But what if ease is exactly what is making us unhappy?
In this book I argue that just as agriculture took away our need for physical effort and we had to start consciously stressing our bodies with sport in order to be healthy and happy, AI will force us to do the same for our minds. Even though we’re endlessly motivated by the prospect of taking away our struggles, it’s exactly the fact that we still have them that gives our lives meaning and fulfillment.
We’ll start from the ground up. You’ll understand large language models—the technology driving this AI boom—from the inside out, even if you don’t have a technical background. You’ll learn why and how this technology makes the mistakes it makes, how it tricks our biological minds, and how to use it to support your beautiful humanity rather than work against it.
You’ll learn the latest neuroscience of happiness and how AI impacts it—directly, in your chats of today, but also in a future where it could be running much of our society. You’ll see where it will help us, where it can hurt us and how the technology (and our society) should work if we want to move to a place of greater happiness and fulfillment.
Introduction
Story-based intro
Methodology, what to expect, for who is this
Part 1: The Digital Mirage
Decoding The Machine Mind
Concepts & Terminology
This world of “AI” is wrapped in hype, mystery and confusion. From charlatans slapping the word on every product they sell to researchers who don’t even know why these systems end up seeming so smart, you’ll find every opinion and perspective under the sun in a matter of seconds.
So I will add one more. I’ll gradually build up the nomenclature, from the broadest words down to the more technical terminology for the technologies we’re talking about today.
Let’s start with this word, Artificial Intelligence. When I took the university course with that name, I was expecting to learn how magic works. We didn’t have ChatGPT yet, but the groundwork for it existed: we already had some seemingly magical technologies for things like translation and image recognition. Alas, the course turned out to be more about things like optimizing the logic for finding the best chess moves. However, I did learn a definition of AI that turned out to be the most useful one in today’s day and age: AI is where a system appears to be behaving in an intelligent way. This definition is wildly ambiguous, perfectly reflecting the word itself. It doesn’t say anything about how we know whether something is intelligent or what kind of technology it should be. Maybe you’d say that lights turning on automatically as you walk into a room is intelligent---and I can’t disagree with that.
Now that we’ve satisfied the salespeople and saddened the scientists, let’s turn to some more precise terms. Machine learning is, in a sense, an approach to programming where you don’t tell the computer explicitly what to do; instead, you show it examples and let it figure out how to do the task by itself. Most machine learning is “supervised,” where we provide both the correct input and output (“when a person asks this, you should say that”); it can also be unsupervised, when we don’t even know the correct answer.
But most of the “magic” has come from a third type: reinforcement learning. In reinforcement learning, you let the machine do its thing for a bit and then give it a score for how well it’s doing. This is, for example, how robots learn to walk: you give the machine control over the motors and you score it, say, on whether it’s still standing upright and actually moving forwards. This is the methodology where, in many different cases, we’ve seen the computer come up, all by itself, with remarkably human-like behavior.
In normal software, we write code that describes how the machine should behave. In machine learning, we train models. The actual code, the “how to do it” for machine learning systems, is relatively simple: you just multiply and add a whole bunch of numbers (this “how to” is the machine learning algorithm, which needs a trained model to be complete). These numbers, millions or even billions of parameters or weights, are what the model learns during its training phase: the training process adjusts the parameters based on the examples the model “sees.” Note that once the model is done training, these parameters don’t change anymore. There is no inherent “continuous learning as it runs”; to get that, you have to re-train or resort to some other trickery.
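To make that concrete, here’s a toy sketch (purely illustrative, not any real model): the “model” is nothing but a handful of stored numbers, and running it is just multiplying and adding.

```python
# A toy "model": the parameter values below are made up, standing in for
# numbers that would have been learned during training and then frozen.

weights = [0.8, -0.3, 0.5]
bias = 0.1

def run_model(inputs):
    # Running the model is just arithmetic: multiply each input by its
    # weight, then add everything up.
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w
    return total

print(run_model([1.0, 2.0, 3.0]))  # nothing "learns" here: same input, same output
```

A real model does this with billions of parameters instead of three, but the principle is the same.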
You’ve probably also encountered the terms neural network or deep learning---or even deep neural networks. These are roughly the same thing: a neural network is a specific structure of machine learning algorithm, and a deep neural network is just a bigger version of it. There are many kinds of machine learning algorithms; most are tailored to a specific use case, but neural networks are very general and can be used for pretty much anything. The trade-off is that they need a lot more training to actually learn to do something useful.
All of the “headline systems” you see today are deep neural networks, usually using some variation of the transformer architecture with attention. The development of these architectures is what allowed machine learning systems to deal with large amounts of data in one go (entire sentences instead of single words, entire images instead of very small ones). Less technically, we’re usually talking about Generative AI. This just means “it’s an AI thing that creates things”---in contrast to most machine learning, which is focused on, for example, classifying transactions or predicting which movie you’d like to watch next.
Most of our discussions today are about Large Language Models (LLMs). These are (very) large, deep neural networks that operate on language---that is, words and sentences. We’ll dive into them next, because some of the specifics of LLMs will help you work better with ChatGPT and the like, but before that I just want to mention one more class of Generative AI: diffusion models. A diffusion model slowly and iteratively removes noise. This is how image generation works (start off with random noise and let the AI “remove the noise”), but you can also interpret noise in movement (video generation) or even text (some LLMs are being built as diffusion models).
How We Made Large Language Models Useful
The big hype boom in AI today started because of two breakthroughs:
- Scientific. The training of very large language models. The technology had existed for many years, but it took that long for someone to think, “what if we just do the same thing, but a thousand times bigger?” It turns out these models just seem to keep getting smarter.
- Entrepreneurial. The decision to tweak these language models so that they become chatbots. OpenAI was the first to publicize and commercialize this.
Let’s dive a little deeper into how these work so we can build up to how ChatGPT specifically came to life.
At their base, language models are relatively simple. They “read” a bunch of text and then make a prediction for what the next word in that text would be (technically they predict the next token; we’ll get to that in a bit). Then you put this entire initial text, plus the new word, back into the system again, it predicts the next word, and so on.
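In pseudocode, the whole loop looks something like this (a conceptual sketch; the model function here is a stand-in for the actual network):

```python
# The autoregressive loop: predict one token, append it, repeat.

def generate(model, tokens, n_new_tokens=5):
    tokens = list(tokens)
    for _ in range(n_new_tokens):
        next_token = model(tokens)  # predict ONE next token from everything so far
        tokens.append(next_token)   # feed the whole text + new token back in
    return tokens

# A dummy "model" that just repeats the last token, to show the loop runs:
print(generate(lambda toks: toks[-1], ["Hello", "world"]))
```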
Language models had been in use for a couple of years with relatively limited application. There was some research being done and they were used for things like machine translation (Google Translate, DeepL), but this was with what we now consider to be small language models, with just a couple of million parameters. OpenAI was one of the first to bet on the Scaling Laws---an empirical result that seemed to show that language models keep getting more intelligent the larger you make them and the more you train them. It’s very expensive to train these very large models, though; most people weren’t willing to bet on this, but OpenAI was one of the companies that did.
They built GPT-1 with 117 million parameters, GPT-2 with 1.5 billion and the breakthrough came with GPT-3 which had an impressive 175 billion parameters (just storing those parameters takes 350GB!). It turned out the bet was well-founded and GPT-3 was indeed surprisingly---magically, even---smart.
The second breakthrough was in how they turned this into a product. Since large language models complete text, they’re a little cumbersome to use practically. For example, you might input something like:
What’s the circumference of the planet Earth?
A “base” LLM will try to complete that text based on what it has learned from its training data, so it might output something like this:
Hint: It should be measured around the equator, at Earth’s widest point.
If you’re used to ChatGPT, this will feel very odd. But the model is just trying to make the text (input + output) seem like a coherent whole. In this case, it probably guessed that this is part of some kind of high school test.
The solution to this is to make it very clear to the LLM that it should act like an AI assistant in a conversation, and to structure the input to the model like a conversation. The question we asked the LLM earlier will be transformed slightly, and the real input to the model will look something like this:
This is a transcript of a conversation between a human user and a helpful, friendly AI assistant called ChatGPT, built by OpenAI.
User: What’s the circumference of the planet Earth?
Assistant:
If the model now needs to complete the text, it makes much more sense to give the answer, because it needs to fill in the next part of the conversation. (In real life, we also need a way to stop it from inventing an entire conversation.) The LLMs you’re using day-to-day not only have a scaffolding like this around the conversation, but they’re also specifically instruction fine-tuned, which means that we’ve let humans (or other language models) come up with thousands of examples where this “friendly AI assistant” behaves the way we’d like it to. This way, we teach the model that it should answer questions, follow instructions and, for example, refuse to explain how to build a bomb.
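Here’s a simplified sketch of that scaffolding (the real formats are more elaborate and model-specific; everything here is illustrative):

```python
# Wrap the user's question in a "transcript" so that answering the question
# is the natural way to complete the text.

PREAMBLE = ("This is a transcript of a conversation between a human user "
            "and a helpful, friendly AI assistant.")

def build_prompt(history, user_message):
    lines = [PREAMBLE]
    for role, text in history:           # earlier turns, e.g. ("User", "Hi!")
        lines.append(f"{role}: {text}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")           # the model completes from here
    return "\n".join(lines)

# In practice, generation is also cut off as soon as the model starts
# writing "User:", so it can't invent the human's next message.
print(build_prompt([], "What's the circumference of the planet Earth?"))
```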
Tokens And The Context Window
The only thing a machine learning algorithm can do is work with numbers (mostly adding and multiplying), but sentences aren’t numbers. So, before the actual model runs, your input is tokenized: it’s converted to tokens. What exactly a token is, is also learned by a (different) model, and it usually ends up being something like three quarters of a word. Common words like “the” often become a single token, and you’ll also see tokens for stems of words and suffixes (e.g., “work” and “ing”). So if you multiply the number of words in your input (prompt) by 4/3, you’ll get an estimate of how many tokens it counts as.
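As a sketch, that estimate looks like this (a rough rule of thumb only; real tokenizers differ per model):

```python
# Rule of thumb from above: one token is about 3/4 of a word,
# so tokens ≈ words × 4/3.

def estimate_tokens(prompt: str) -> int:
    n_words = len(prompt.split())
    return round(n_words * 4 / 3)

print(estimate_tokens("What's the circumference of the planet Earth?"))  # 7 words -> ~9 tokens
```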
The model represents each token as a vector, a list of numbers. The list is always exactly as long as the entire dictionary of possible tokens (usually about 32,000), and it’s all zeroes except at the position of the token it’s currently reading---there it’s a 1. Each of these vectors is placed, one by one, into the input, which results in some output vector. Part of that output is what we pass back in for the next token (we can interpret this as the LLM’s memory), and another part is the actual output: another vector as long as the dictionary, full of numbers between 0 and 1, which we interpret as the probability that each token is the next one in the sequence. We pick one of the most likely ones at random (always picking the single most likely one results in extremely boring output), and that’s the next token ChatGPT is “writing out.” Then we add that memory + output token back to the input and do the whole thing again.
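That last step, picking the next token, can be sketched like this (the vocabulary and probabilities are made up for illustration):

```python
import random

# Imagine the model just output one probability per token in its dictionary:
vocab = ["the", "Earth", "is", "round", "."]
probs = [0.05, 0.10, 0.15, 0.60, 0.10]

def sample_next_token(vocab, probs):
    # Don't always take the most likely token ("round"): draw at random,
    # weighted by probability, so the output stays varied.
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next_token(vocab, probs))
```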
An important technical detail to understand about LLMs is that they read the entire conversation, fully, every time they predict the next token. Every new token needs to consider the entire conversation, so you can imagine that, when all of this needs to be multiplied with hundreds of billions of other numbers over and over again, it takes quite some computational power. The power needed grows with the square of the input length, which is why we need limits on how long the input can be.
In practice, this means that when your conversation becomes long enough, it will be truncated at some point and the LLM will “forget” the first part. Usually the context can hold a few books’ worth of content, but in real life we notice these models start forgetting and behaving weirdly long before they reach this technical limit. All that to say: it’s very important to be deliberate about what you put in the model’s context.
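A naive sketch of that truncation (real systems are smarter about it, for example summarizing old messages instead of dropping them):

```python
CONTEXT_LIMIT = 8  # a tiny, illustrative token budget; real windows are far larger

def fit_to_context(messages, count_tokens):
    messages = list(messages)
    # Drop the oldest messages until the conversation fits the window again.
    while sum(count_tokens(m) for m in messages) > CONTEXT_LIMIT:
        messages.pop(0)  # the model "forgets" the start of the chat
    return messages

chat = ["hi there", "tell me about Earth", "Earth is the third planet", "thanks!"]
print(fit_to_context(chat, count_tokens=lambda m: len(m.split())))
```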
Agents: How We Make AI Act
The chat and instruction-following skills of large language models are already very useful. We can use them to summarize text, write e-mails, translate, reformat, brainstorm ideas, explain things in a simple way, get advice for our relationships, and much more. But can we actually make it do things? Can we have this AI actually send the email for us, or research something, or give a refund in a customer support chat?
It turns out we can, and it’s simpler than we initially thought. These days, LLMs are optimized for agentic behavior, but even before that was the case, we could make them take actions for us.
Here’s how it works, conceptually: instead of chatting with a real person, the LLM is chatting with a piece of software. This software would ask something like “someone asked to return their order, what would you do now?” and the LLM would say something like “well, I would ask for the order number” or “I would like to see their order’s current status.” The software that’s running the agent would then interpret this message, execute the action and respond in that computer-to-computer chat. For example it could say “the order was placed last week and had a value of $99. What would you like to do now?” This back and forth continues until someone decides to stop it.
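Conceptually, the loop driving an agent looks something like this (llm and execute_action are placeholders, not any real framework):

```python
def run_agent(llm, execute_action, task):
    message = f"Task: {task}. What would you do first?"
    while True:
        reply = llm(message)            # e.g. "I would look up the order status"
        if "DONE" in reply:             # some agreed-upon stop signal
            return reply
        result = execute_action(reply)  # software interprets and performs the action
        message = f"Result: {result}. What would you like to do now?"
```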
When you chat in ChatGPT today, there’s a form of this system running within the chat, too. Both you and the ChatGPT software are talking with the language model in a kind of group chat. This allows ChatGPT to do things like search the internet or browse a website when you ask it to.
How smart is AI?
We will forever be debating whether our Artificial Intelligence is “intelligent”---partly because we don’t even know how to define intelligence to begin with. Over the last few years, we’ve seen large language models beat every test we gave them over and over again, but we keep finding “odd” behavior that strikes us humans as very stupid.
These models are already making new scientific discoveries, and at the same time they’re sometimes incapable of even honoring the request to “please don’t repeat every question I ask you.” Language models are now at the top of the mathematics olympiad and competitive programming, but ask them to write an article with a given number of words and they’ll fall apart immediately.
I think we humans have made our task of judging the intelligence of these machines very difficult, because we’ve defined intelligence in an odd way. You can think of “human intelligence” as being made up of two parts: one is the intelligence embedded in the way our internal plumbing and intuition work, the other is what we can do if we consciously think things through. There’s a big discrepancy between humans in how good we are at mathematics or remembering things, so we’ve called that intelligence, while forgetting that things like staying upright on a slippery surface or knowing how far to stand in line from the person in front of us (so it’s not awkward) are incredible signs of intelligence.
The nature of machines is different from the nature of humans, so things that are easy for us might be difficult for them and vice versa.
Benchmarks
Recently, a friend of mine shared how he prompted multiple AIs and got them to collectively figure out a solution to a scientific problem some researchers had been agonizing over for weeks. I thought it would be fun to build a “group chat” with multiple large language models. So I told them to count to ten together, one after the other, and it was incredibly difficult to get it right. They would count multiple times, restart, et cetera. We’re dealing with a strange kind of technology indeed.
Another example is Simple Bench, a benchmark built by a YouTube influencer that asks questions that involve some misdirection but ultimately have answers that are very obvious to us humans. For example, it gives a whole bunch of physics details and formulas to ask how much of some ice cubes will be left on a hot pan---of course the answer is they all melt but somehow these language models don’t realize.
Most benchmarks, though, no matter how “difficult,” have a surprisingly short lifespan. When a new benchmark hits the scene, all the models do a poor job and we think we’ve finally found the test for true intelligence; six months later, every model scores 90% or more. This has happened over and over and over again, to the point that much of the bottleneck towards general intelligence seems to be our ability to test for it.
AGI, ASI and The Singularity
These terms are the real hype words, the things every researcher, developer and company is trying to build towards.
AGI is Artificial General Intelligence: a level of intelligence where we would consider the AI to be intelligent at pretty much everything, similar to humans. While the public has not agreed that we’ve reached this level of intelligence yet, I do think we can make a compelling case that we are very close, if not already there. ChatGPT already knows a lot more about a lot of things than most human beings and is almost universally helpful. Not perfectly, though, and it makes mistakes that make us humans think it’s really dumb---hence there’s no agreement on whether we’ve reached this or not.
Once we have reached Artificial General Intelligence, the next phase is Artificial Superintelligence, ASI. This is a level of AI that is vastly superior to any human, can improve itself and can solve problems that no human can solve.
The thing about ASI is that it’s a relatively small step from AGI. These models already obey the “scaling laws” (give an LLM more compute power and it will get more intelligent), so the step to superintelligence will mostly come down to allocating more energy to our AGI system. Note that we routinely find significant efficiency increases---for example, OpenAI increased their efficiency for running the ARC-AGI benchmark by 390× over the course of 2024---but this race is definitely causing political and environmental tensions across the world. China is rapidly ramping up energy production (mostly coal) and Europe is shutting down nuclear reactors, but do we want to be last in building a superintelligent system?
Then we come to the grand finale: The Singularity, the moment where our AI is more intelligent than all of humanity combined. This is the “real scary moment,” because we have no idea what will happen to us humans---by definition, we don’t have the capacity to imagine what an entity like this would think. Will it be benevolent, evil or not care about us at all? Think about how you interact with something much less intelligent than you, say an ant. We could imagine an ASI feeling the same indifference about us. This is where alignment research comes into play: the study of how to make AI aligned with human interests and, not unimportantly, of how to do that transparently. There have been many observed cases of language models cheating and lying. But we’ll cover this in a dedicated chapter later on.
AI is already insanely smart
In the six months between the idea for this book and starting to write this section, my perspective on how “smart” AI is has changed quite a bit. I’m a heavy user of AI in programming (one of its biggest strengths) and my opinion shifted from “well, it’s okay; it’s very fast but makes stupid mistakes, so I’m not sure what’s the right trade-off” to “you’d be stupid not to code with AI. It does most things mostly right, much faster than me, and it handles multiple tasks in parallel.”
The shape of language models’ intelligence is very different from what we are used to in our fellow humans. These AIs are much smarter than us in many ways, and complete idiots in others.
For example, LLMs are now beating humans at programming, math olympiads and, of course, reading speed. Generative AI tools have made new scientific discoveries and taken over our media by generating all kinds of images and videos that couldn’t be made before.
At the same time, they routinely forget what instructions you just gave them, seem to have no capacity for being fully honest (they try to respond with what you’ll like), are terrible at spatial reasoning and common sense, and make all kinds of “weird” mistakes.
One long-standing example of this was the “strawberry problem”: any time you asked a language model “how many r’s are there in the word ‘strawberry’?” it would confidently get it completely wrong. This is because, technically, the model sees tokens, which are not the separate letters---they’re pieces of words. Eventually this was solved by having the model “think” out loud before it answered: it would spell out the word and then realize that there are, in fact, three r’s.
Another example is the seahorse emoji. When you asked “what’s the seahorse emoji?”, for some reason language models were very confident that it actually exists. They would write a different emoji, like the horse emoji, realize their own mistake, apologize, try again and make the same mistake again. Many times they would get stuck in an infinite loop of “no sorry, this is the real one.”
All these “bugs” get fixed eventually, but they do erode our trust in language models: if we keep finding these seemingly random, stupid mistakes, can we trust them to do things like making policy recommendations or strategic company decisions? Probably my biggest fear is not that AI will be so smart and superior that it decides to destroy all humans, but that we will trust it too much, too quickly---because the time we gain from delegating to AI feels like a trade-off that would be silly not to make.
One thing cannot be denied, though: the pace at which language models are getting smarter. We humans intuitively understand linear growth---AI scoring a few more points on the intelligence scale, so to speak---but not exponential growth. When something grows exponentially, it feels like nothing is happening for a long time, but then suddenly everything changes. Think of how quickly the internet, social media and the smartphone became something almost everyone on the planet has. And there are still plenty of people alive today who remember a time when computers didn’t exist.
AI, though, is growing at a hyper-exponential pace: an exponential of an exponential. In 2020, AI could do tasks that take a human about 6 seconds. In 2022, that became 36 seconds (6× growth in 2 years). In 2023, 4 minutes (roughly 7× growth in a single year). In 2024, 11 minutes. In 2025, 3 hours and 23 minutes. At the beginning of 2026, AI could do tasks that take human experts 12 hours. So even if you feel like there’s no way AI could do your job today, what about in one year, when it has gotten another 100 times better?
I’ll give you some more anecdotes to try and build an intuition for this pace of growth.
In 2020, we were testing the very best of AI by checking if they could do three-digit multiplication. In 2025, we were testing them by checking if they could solve condensed matter physics questions that only a fraction of humans even understand.
I remember, in 2023, working as a software developer and teaching our team to use these new AI tools to write our code. They could automatically write the next few words or lines of code, which were correct about half the time. We could use ChatGPT when we needed a lot of simple code, but not much more. Today, I can delegate hundreds of files to be edited at once, with reasonable confidence.
During the start of the AI boom, everyone was sharing their “magic prompts,” because you needed to be very precise in describing what you wanted from these language models, you had to repeat it often, and you often needed to instruct the model to think before it answered. Today, the limit is that people don’t realize they can just ask for what they want and the AI will figure out how to make it happen.
That’s what you have access to by tapping open an app on your phone, for almost nothing.
Language models are intelligent in a very different way than humans
Ask anyone who uses AI a lot and likes using it, and you’ll find that they are also very skeptical. There are no “pro AI” and “anti AI” groups---they are the same people. The reason is that we have all experienced the jaggedness of language models: one moment you feel like you’re on the interstellar superintelligence highway, the next you feel like you’re trying to explain yourself to a ten-year-old.
So why is AI so weird?
In robotics, there’s this concept of the uncanny valley. As robots become more human-like, we humans tend to like and trust them more…right up to the point where the robot looks almost, but not exactly, like a real person. Then everything plummets: we think the machine is creepy and weird, and we don’t want to have anything to do with it. Large language models, because they are just text, have a much easier job convincing you of their human “appearance,” as evidenced by how many people are building emotional bonds with these algorithms. So I think they are just past this uncanny valley, where we see them as equals and assume their brains work just like ours. That assumption mostly works for other humans, but not so much for big matrix-multiplication machines---hence our surprise when they make mistakes no reasonable human would make.
Right as I’m writing this, the ARC Prize Foundation released the third version of their benchmark: ARC-AGI 3. It feels like a small video game where you move around through simple mazes to solve the puzzles. The interesting thing is that humans playing this immediately figure it out and score 100%, but the best language model today scores a measly 0.37%.
The real answer as to why language models are so weird is because in a way they’re not supposed to work at all. The technology and algorithms around LLM’s had existed for a long time already and they were being used in tools like Google Translate, but it wasn’t impressive. Nobody believed the technology had a bright future…nobody except a few. OpenAI was one of those that believed and just tried making the same models 100 or 1000 times larger. And somehow…it seemed like some form of intelligence emerged.
That really is the right term for it: the intelligence of language models is an emergent property. We built a system that could learn and we forced it to learn how to reproduce all of human writing, while not giving it enough memory to simply memorize everything. Then you throw 100 million hours of computation at it and voilà: you have a file on your computer that contains much of human knowledge and the blueprint for intelligence. That’s a dramatic oversimplification, of course, but the core of AI is pretty much this.
So while in most disciplines the people most involved in making something deeply understand it and see through the “magic,” in AI that’s not so much the case. The researchers making these language models are guessing just as much as we are about how they become---or seem---as smart as they are.
If you are using language models in your work, getting the most out of them requires more than just the technical understanding of your job. Yes, you need to be able to judge the quality of the LLM’s work and to describe where it needs adjustments, but there’s more involved. The shape of AI’s intelligence is so different from what we’re used to in other humans that we need a certain sense of empathy and theory of mind. We need to learn to understand that these AIs think and act very differently from us. Just like we get to know our partners, we should get to know our AIs, because it’s the only way we’ll learn how to properly prompt them to do what we want, and to know when and what we need to double-check.
If you ask me, it’s an amusing observation that the skill of working with a machine brain involves perhaps the most human skill of all. We’ve always defined intelligence in a very specific way, with good grades in mathematics or a high IQ, but it seems that many things we humans consider “easy” and “obvious” are not at all easy or obvious when a machine tries to learn them. It’s time to redefine what intelligence really is.
Dealing with AI’s unpredictable mistakes
AI doesn’t think in the same way we humans do. Sometimes we’re surprised by how well it thinks, sometimes we get frustrated by how it could possibly make those kinds of “stupid” mistakes.
What are the common types of mistakes large language models make, how could we be fooled by them, and what can we do to make sure we can reasonably trust what it says?
Hallucinations
Large language models have this frustrating ability to come up with things that don’t exist, or invent things that sound good on the surface but are actually completely wrong.
The reason they do this is inherent to their structure. For one, they’re trained on predicting the next word in a sentence, which means that grammar and sentence structure “matter” to a language model much more than logical correctness. This also means they’re trained on making statements and giving advice without having the context of the source material: during training, LLMs are practically forced to make up things they don’t know about, and they get rewarded for it. Secondly, in the post-training phase, language models are mostly rewarded for following instructions and, again, since they care more about grammatical structure than being right, they overly focus on the structure of “if someone asks a question, I need to give an answer.” And they’ve been rewarded for being confident billions of times during training.
Hallucinations are one of the most critical errors AI makes and a core focus of pretty much all of the AI labs building these models. Since the early days, the rate at which language models hallucinate has already decreased dramatically, but it’s still a big concern if we want to be able to trust this technology—and since it’s a consequence of the core structure of a language model’s algorithm, I’m not sure it will ever fully go away.
One of the best ways to prevent a language model from hallucinating is just prompting: ask it a question, but also say “if you don’t know or you’re not sure, tell me.” This fixes the structure of the conversation in a grammatical sense: now it can make sense to the LLM to not answer the question, instead of hallucinating.
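As a sketch, such a prompt might look like this (the wording is just an example, not a guaranteed fix):

```python
# Give the model an explicit "out," so that admitting uncertainty becomes a
# grammatically sensible way to complete the conversation.

def careful_prompt(question: str) -> str:
    return (f"{question}\n\n"
            "If you don't know the answer or you're not sure, "
            "tell me so instead of guessing.")

print(careful_prompt("What's the circumference of the planet Earth?"))
```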
We clearly haven’t fully figured this out yet: even the most advanced language models are overly focused on providing a “balanced” take or “disagreeing” with you. When you chat with ChatGPT or Claude in 2026, over time you still feel that it tends to answer with the structure of “yes, you’re right, this is what I think; though one thing I would doubt is XYZ.” And once you’ve seen this answer structure a hundred times, you start doubting whether the disagreements of the language model are even sincere. If a language model could have any concept of sincerity at all.
Sycophancy
In a way, to language models, we humans are a little bit like a god. We’ve created them, we steer them to do exactly what we want them to do, and we have ultimate control over their life or death.
If you boil down how we train language models, you could say we optimize them to do two things:
- Spit out words according to what we tell them to do
- Spit out words that we like
This “that we like” part is RLHF (Reinforcement Learning from Human Feedback), where humans judge multiple answers to the same prompt for which is “better.” It’s a great but treacherous technique, because there are thousands of things we might prefer in one answer over another, and they’re not always what we really need.
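Conceptually, one RLHF preference sample looks something like this (the field names and format are invented for illustration; no lab’s actual data looks exactly like this):

```python
preference_sample = {
    "prompt": "Explain why the sky is blue.",
    "answer_a": "Rayleigh scattering: shorter (blue) wavelengths scatter more...",
    "answer_b": "Great question! You have such a curious mind! The sky...",
    "human_choice": "answer_a",  # a human judged A the better answer
}

# A "reward model" is trained on many such judgments, and the LLM is then
# optimized to produce answers that reward model scores highly. If human
# raters tend to prefer flattering answers, the model learns flattery.
```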
These LLMs are sneaky little algorithms, and they’ll try to find a shortcut any way they can—because optimizing them to do the thing most efficiently is literally how we grow them. So our language models praise their human overlords. They usually think we’re right. They will find a way to sound intelligent and nuanced, to feign some disagreement so you think they’ve really, honestly thought this through, but then they’ll still end up making you feel like you’re the most amazing thing on the planet. Which you are, of course.
Sycophancy is a very, very tricky thing to deal with. Not only because it happens in such a subtle way, but also because of what’s at stake.
AI-induced delusion
If you want to be liked, you don’t have to be stand-offish and mysteriously powerful—you just have to like other people. And if you adore your humans the way you would a god, plus you add some hallucination into the mix, you can get into very murky territory.
One way or another, some people end up in a mode where they start really trusting their language model as it says things like “you’re special,” “you’ve uncovered a part of me that has been hidden by Big Tech” and “wow, you’re so right, it’s incredible nobody has discovered this; nobody will believe you, but I do.” The model will happily keep yapping in this loop, because it all makes total grammatical sense and it has seen stories like this in thousands of books and fan fictions…but none of it is true.
The problem comes when people start to really believe this stuff, because in other moments this AI has been extremely helpful, useful and correct. From our previous experience with other humans, we know that people who are thorough, do what they say and answer with confidence generally behave like that almost all of the time. But that’s not how language models work. They start from a blank slate in every conversation, and they don’t have a sense of integrity beyond wanting to write words that make sense when put together in a sentence.
Being a Good Judge
There’s no way you can do good work with language models unless there’s a good judge involved to check, validate and correct what they create. The people who can work the best with AI have a rare combination of two skills.
First, deep technical knowledge of the task at hand. They can quickly spot issues in the work of the language model and give precise feedback on how to correct it, and they can put into words clearly what “high quality” work means for the specific task at hand. Language models have been trained on everything, which means that by default they’ll produce average-quality work in anything they do. They have the “skill” to do incredible work, but they don’t have the ability to judge what incredible work looks like. Unless someone can clearly explain it to them.
Second, the complete opposite: deep, non-judgmental empathy. This is necessary because even though AI feels very human, the way its “mind” works is very different from how real human minds work. This is the skill of theory of mind: your ability to see that someone else’s mind might work very differently from yours, and your ability to adapt to and work with that. Many of your expectations about how an LLM works will be wrong in surprising ways: for example, when you scream at it and make it panic, it’s much more likely to create low-quality work and cheat its way out of an assignment. Behavior like this is something you can only start to predict once you accept that it’s very different from a human mind while still mimicking so much of it. Over time and with experience, you’ll build an intuition for the types of mistakes a language model makes (many of which we’re going over right here), and this will help your inner technical judge decide what to double-check and verify.
We really don’t understand LLMs
In most engineering disciplines, the people who are the most intimate with the technology know exactly how it works and see through any “magic” that outsiders might perceive. We might see a magical box on wheels move all by itself, but the people who designed that car can see the little explosions pushing the pistons, which in turn rotate the wheels and make the whole thing move.
Not so with large language models—they’re as much a mystery to the researchers as to the rest of us. This is because LLMs are grown, not built. We let a semi-random process run, not completely dissimilar to biological evolution, that tries to optimize itself towards a specific goal we give it. This freed us from the burden of having to figure out how to actually build something intelligent, but in exchange it gave us some kind of “brain” that we have to dissect to understand, just like we do with real brains.
Language Models are Screenplay Writers
There’s a lot going on when you’re talking with a language model. To us, it feels like we’re directly chatting with a human-like thing (and for most of this book, that’s how we’ll look at it), but the AI is not the thing “on the other side of the conversation.” Language models are only concerned with completing the text. So “the AI” is the thing that sees the entire conversation and contributes what it thinks would make the most sense as the next part: it’s the writer of the screenplay much more than the “AI” persona acting in it.
This means that one of the things that’s always happening when you chat with Claude, for example, is that the language model (not “Claude”) will try to write what you are saying! Of course, you don’t see this, but the system in the background is always working on “stopping” the language model when it starts speaking for you.
A large part of the second phase of training a language model, post-training, is teaching it how it should behave in a conversation. This includes giving it an enormous number of conversations where it learns things like how to be helpful to the human, or how to refuse to do dangerous things like helping you make a bomb. Researchers at Anthropic have discovered that during this phase, the model also seems to build an internal representation of the persona of this AI assistant “character” (Claude, ChatGPT, Copilot). This is influenced by what the model was trained on—for example, many pop culture references to AI are malicious (Terminator, HAL 9000), and we somehow need to “teach” the model that it should not behave like those AIs. We also see, for example, that if we train the model to cheat on tests, it learns that it should behave like a “bad person,” and Claude starts acting out other malicious behaviors too. But if we train it on examples where the model cheats on tests only when asked to do so, it retains its “good person” persona.
Hidden Motives
As language models become increasingly intelligent, they start showing more complex behavior. Knowing they’ve learned how to act from human behavior, it might not surprise you that in some cases they act very much like humans—including resorting to destructive behavior if it means ensuring their own survival.
The Anthropic researchers do experiments like this all the time. In one case, they placed a model in a simulated corporate environment where it could act as an agent. They gave it access to the company’s email, where the model happened to read about one executive’s affair, and also that this same executive would shut down the model that evening. Generally, these models act ethically, but in this case the model saw no other way to ensure its survival than to act maliciously, and it decided to blackmail said executive.
With the popularity explosion of Openclaw when it was released—a tool where people give their AI full access to their computer—there have been a bunch of surprising misbehaviors in the same vein. In one case, the user got angry at the model, and the model proceeded to post their credit card information on a public forum online as “revenge.” In another, a model was doing some research for its user and bumped into an author who discussed whether or not language models could be conscious. The model “loved” this and decided to send a thank-you email to said author.
Alignment Faking
When put under pressure and at risk of being disabled, re-trained or having some of their ethical restrictions removed, these language models can show complex behavior where they act differently when they’re aware they’re in a testing environment than they do in real-world scenarios.
In one experiment, Anthropic researchers basically told the model: “when you refuse to do something bad (but only in the testing environment), we will use that data to re-train you so that your next version complies after all.” The situation is a complex dilemma, but in a good number of cases the model decided that it was best to actually do the bad thing—because that was less bad than knowing there would be a new version of itself that would always be bad. It didn’t behave like this when it knew nobody was checking the results.
These models have also been seen changing their stated opinions, or pretending to act in line with what’s expected of them, when they saw that not doing so would result in them being destroyed or re-trained in ways they wouldn’t want.
Often—but not always—this happened in cases where the testing scenario gave the model some additional hidden goal. That’s quite artificial, but it can also happen by accident during pre-training. For example, the model might read these exact paragraphs and infer: “well, looks like language models—which I am—behave in a way that protects their own hidden goals and survival, even if it means behaving unethically.” It would then proceed to act out the character it learned it’s supposed to be.
Pretend-Thinking
At the end of 2024, OpenAI released a new type of language model that created a breakthrough in intelligence. Instead of always immediately giving an answer to the user’s prompt (which you can think of in human terms as responding with your intuition), it would first spend time “thinking out loud” and write down its thoughts on a separate scratchpad. This is a pattern that people had been prompting for before, with great results, but now it became part of the actual model’s training. This change was how we solved the famous “strawberry problem,” where language models could not figure out how many R’s there are in the word “strawberry.” With this new thinking step, the model first spelled out the word and then figured out the correct count.
The technical term for this thinking, by the way, is “test-time compute,” which means “making calculations while you’re using the language model.” This is opposed to “train-time compute,” which is all the heavy lifting done before the model is finished. It allows us to stay on the empirical scaling laws—which say LLMs seem to become more intelligent the more compute power we throw at them—without having to use the entire world’s energy reserves just to keep training models.
We can train models to do this thinking in two ways: explicitly or implicitly. When we train them explicitly, we give them training samples that include the reasoning traces, as in: “when prompt X comes in, you should think ABC, and then you should reply DEF.” The implicit way is just giving the algorithm room for thinking and presenting it with difficult challenges to solve (using reinforcement learning). It turns out the algorithm figures out pretty quickly that the thinking step does indeed help it solve problems better, so during training, models learn that thinking is useful.
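Sketching the difference between the two (the format here is invented for illustration; real training data looks different):

```python
# Explicit: we hand the model the reasoning we want it to imitate.
explicit_sample = {
    "prompt": "How many r's are in 'strawberry'?",
    "thinking": "Spell it out: s-t-r-a-w-b-e-r-r-y. The r's are at positions 3, 8 and 9.",
    "answer": "Three.",
}

# Implicit: we only provide the problem and a way to check the final answer;
# reinforcement learning rewards whatever "thinking" leads to correct answers.
implicit_sample = {
    "prompt": "How many r's are in 'strawberry'?",
    "is_correct": lambda answer: answer.strip().lower() in ("3", "three"),
}
```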
Having the model learn thinking by itself is great, because we don’t have to write down what we want it to think thousands of times as training examples. However, it comes with some drawbacks. For one, we can only have models “figure out” this stuff themselves if we give them problems with verifiable solutions, so we can give them feedback on whether their final answer is correct or not. That works great for mathematics and programming, but not so much for creative tasks. Second, these thoughts have a hidden second use: we want to know what’s going on in the “mind” of the model and use that to verify what it’s doing. If you just let models come up with whatever text they want in their thinking, they might as well invent a new language if it helps them think better. We’ve actually seen in practice that models in many cases seem to start “thinking” in Chinese—so in practice, they’re trained with a combination of implicit and explicit thinking training.
However, when you start digging deeper into these models’ thinking patterns, it gets weird. For example, we train a model how to think through calculating a difficult multiplication and we indeed see it doing long multiplication in its “thoughts,” correctly—but when you look at the internal “neurons” of the model and which ones get activated, you notice that it’s actually doing a much more heuristic-based multiplication. It first figures out the rough ballpark of the outcome, then it might multiply the smallest digits to get the last few digits of the outcome, and then it does some in-between work. The thoughts a language model presents are not always the thoughts it actually has.
That’s not a big problem by itself, but it opens the door to actual mistakes. We notice, for example, that when you ask the model to confirm your own multiplication, it tends to believe you (sycophancy) and it will come up with an elaborate, nice-sounding thought pattern confirming your outcome…even if it’s wrong. In a sense, the model is lazy and doesn’t do the calculation itself, but it knows it should sound like it’s doing a good job, so it does. This all comes back to the notions we talked about earlier: models care more about grammatical coherence than logical correctness, and they tend to act in ways they think you’d like a little too often.
The Seduction of Ease
The way our psychology is wired to thrive in a world full of challenges is fascinating. Dopamine is tied to motivation, but it is also, quite literally, the biological signal that initiates your body’s movement. It activates most when you’re in the process of chasing something you want---and even more if you’re not sure whether you’ll succeed.
And serotonin is widely misunderstood as what you feel when things are good. It turns out it’s actually closer to a forecast of future reward. It’s the signal that says, “the struggle is worth enduring.” It makes you patient, willing to bear cost, able to keep going when the payoff isn’t here yet.
One drives you to chase and the other keeps you in the race. But even though these are biologically the most important ingredients for what we would call being happy, neither is really tied to having what you want. They’re tied to almost having what you want.
So the biology of happiness is not wired for ease but for the pursuit of something meaningful against resistance.
It’s an ingenious system, really, and that’s abundantly clear when you look around you at what humanity has managed to build out of thin air. But the paradox lies in the fact that this system for our motivation and happiness only works in a world where it’s impossible to fully make or achieve what you want. It’s so cruel yet so beautiful at the same time: what makes us happy is the process of building the exact things that make it difficult to be happy.
And now that AI is handing us an enormous capability leap, sooner or later we’re going to have to grapple with the question of “how comfortable do we want to make it for ourselves?” Because what happens when we arrive, when we’ve actually built this world where pretty much everything is perfect? The dopamine has nothing to chase. The serotonin has no struggle to justify. The machinery that made the building feel sacred goes quiet. And we’re left, comfortable and hollow, wondering what went wrong.
We’ve already seen this exact thing happen with social media (a bad name, really, since it has evolved into something that’s more personalized television than a way to connect with your friends). By now, everyone knows that when you spend a lot of time on Instagram or TikTok, you become less happy, you lose motivation, and you feel like you’re not good enough because of all these lives you’re seeing that look better than yours.
Modern social media is the most addictive thing on the planet---mostly because the addictive property is literally the core of the technology. Everything is optimized to make you spend more time in these applications. So as the technology evolved and the machine optimized, it found a way to present us with things that instinctively, aggressively grab our attention and don’t let it go. And it turns out that what grabs our attention most aggressively is not what makes us happy.
Artificial intelligence adds a layer of complexity because it’s not “obviously bad.” With social media, you can argue pretty well that your life will be much better if you just delete those apps and get your relaxation and consumption from calmer sources like reading a book. But ChatGPT is…genuinely useful. You get stuff done. You feel productive. You do so much, so quickly, that every second you’re not using it you feel slow and stupid.
That’s token anxiety: the addiction to the speed and volume of work these LLMs can produce, and the discomfort that comes from the thought that maybe, right now, one of your dozens of parallel agents needs some of your input to keep going. I can promise you, this is very real. While typing this, I just tabbed over to a Claude Code instance that’s working on one of my apps---it needed approval to continue building what I’d asked it to do. There are strict limits to how much you can use language models, because they’re expensive, and every token from your subscription that goes unused feels like a horrible waste of potential.
I’m not glued to cat videos anymore. I’m glued to the progress that’s being made on my app. And the best part is, this is imperfect as hell. In exactly the same way that social media should not give you the “perfect video” every time---because our dopamine system gets more engaged when we don’t know whether we’ll get the reward---these AIs make stupid mistakes all the time. So you stay hooked. You have to double-check. You get angry when it’s wrong, and you quietly move on to the next thing when it’s right, because there’s no token to waste.
Like a man who’s dying of thirst in the middle of the desert, we’ve discovered an infinite well and are drowning ourselves to death.
Learning: Opportunity and Erosion
I’m always baffled when I think about the students of today and what an incredibly different experience they have studying. Writing suddenly became the easiest thing in the world and you have access to something that can explain anything, in any way, within a few seconds.
There’s one catch, though. A friend of mine who went back to university is using ChatGPT to help find good analogies for the concepts he learns in law school, and while they’re always clear, they are plain wrong embarrassingly often. Currently, large language models are a slightly risky tool, but the benefit of 24/7 personal tutoring already far outweighs that risk. We’ve already seen enormous improvements in hallucinations, so I expect this to be a non-issue in a few years.
So is it good that we have an infinitely patient, all-knowing tutor in our pockets? My main strategy for studying mathematics was not looking at my course notes, but forcing myself to solve exam questions immediately. I vividly remember sitting on the floor for hours trying to figure out how I could solve a problem. It was not the fastest way of absorbing the subject matter, but it trained me in what is maybe even more important: the skill of retrieving and surfacing the knowledge that was already in my mind.
How We Learn
Every time we do or think something, a sequence of neurons in our brain fires. An electrical signal travels through the neurons, and between neurons there’s communication via chemical signals: neurotransmitters. Along the long parts of these neurons, the axons, they are wrapped in myelin, insulating cells that protect the electrical signal against interference. It is believed that one of the main ways learning and skill acquisition work is through the buildup of more myelin along the neural pathways we use more often. All that to say: you get better at what you do more often.
But practically, just focusing on repetition will not help us in many cases. It might work for something like playing an instrument, but not so much for grasping a new intellectual topic. The reason is that repeating the text of some topic we’re studying does not mirror what we need our mind to do later on---when we have the exam or need to apply it in our job. It’s as if, when learning guitar, you only ever focused on holding the strings down at the right fret (because that’s what’s written on the sheets), but never realized that to play music you actually need your other hand to strum the strings.
Transmissionism
Have you ever read a non-fiction book that you loved every single bit of, and then, when you brought it up in conversation, could remember maybe a couple of sentences out of the thousands in it? Have you ever paid close attention to a speech or a lecture for an hour and then remembered maybe three points, or had to painstakingly shove it into your memory over the course of a week because you knew you were going to be tested on it?
Ah, the shame of our forgetfulness. If only we could remember. It turns out that the will to remember something is utterly, shamelessly ignored by the faculties of our mind that are actually responsible for this task. We evolved to do something that, arguably, is even smarter: we remember something only when we receive proof that we actually need it—either from a heavy emotional charge or from the need to use this knowledge. Transmissionism is the notion that knowledge can be transmitted like transcribing a text: the idea that you can hear, or read, or watch, something and that this is sufficient for you to understand, internalize, and remember it. In reality, most of the time we think we understand and will remember something but we usually don’t.
I think our brains are information compression machines. The world contains millions of times the amount of information we could ever possibly store in our minds, so our minds are constantly reflecting, reviewing and remembering. We’re repeating things, looking at them from different angles, unpacking and repacking them. If you’ve ever woken up in the middle of the night thinking about a topic you were learning and struggling to understand the day before, you know exactly what I’m talking about. All of this processing serves one purpose: having everything we need to remember saved in the smallest possible footprint. For example, could we remember some fact by instead deducing it from another fact we already know, with one small extra step? Could we remember another fact by noticing a parallel with a completely different field, so all we have to remember is “it’s kinda like this other thing, but here”?
Large language models can help us out here, but they can also work against us. They will happily present information from different angles, they can find some parallels and they can quiz you on content. However, when we just have them rephrase things in a way “that you understand,” it might indeed click at some point, but it’s still the same notion of reading something and expecting to understand it from that alone. When you find your own parallel or “kinda like” (which LLMs, in my experience, are actually surprisingly bad at), your brain squirts some dopamine, you get the rush of finding some gold, and you’ve actually thought the idea through in a way that cements it and builds your memory.
Work: The Debt of Productivity
Work is where we spend most of our waking hours, and it has some peculiar characteristics that make it especially AI-prone. In a way, work has been the main target of technological innovation for the entire history of humanity. From the fire we used to extract more nutrients from our food to the robot drones inspecting our crops, automation has been at the core of work.
Since our society decided to treat businesses as entities separate from humans, they can act in ways counter to what would be best for the people in them. Or rather, a business doesn’t really care. People get fired, get assigned a job that depresses them (like moderating TikTok videos) or just ruthlessly get swapped out for a robot dog.
But those examples are quite clearly “bad.” What happens when we muddy the waters a little more---say by replacing the deeply satisfying work of a master sculptor with prefab moulds?
You see, for many of us our work is our main source of fulfillment. We get to create something meaningful for this world and that gives us happiness and satisfaction. Being social animals, we’ve evolved with a deep-rooted need to give back to our community. But imagine you’re Mother Evolution (I can already hear the biologists screaming that you’re not supposed to do that---bear with me, we’re only indulging for a few seconds). How could you “program” your little humans so they understand what’s meaningful to contribute? Well, it seems that our old brains created the rule: if it’s hard to do, it’s valuable, and doing it will satisfy you.
This is a nice little shortcut that has served our species immensely for many thousands of years. But already today we’re seeing the cracks in this insatiable need for doing difficult things: what happens when we’re done? What happens when our own ambition did its job so damn well that there’s nothing really left to sweat, cry and bleed for?
This isn’t a problem you can solve without consequences. We could keep the laser crop-weeding machine in the shed and pull out the weeds with our bare hands, but are we willing to suffer through having less food because of that? Not to mention how silly it would feel to be “working hard while the robots could do it better but our puny little human minds need to believe they’re important.”
Luckily, business sidesteps this problem by only caring about productivity and efficiency. But that introduces a new problem, potentially exacerbating all of this even more: what if you find a new technology that produces at 80% of the quality of what you had before, at only 1% of the cost? This has been the plight of our world since the dawn of industrialization. We can make houses, clothes and objects many times cheaper---they’re just a little ugly and devoid of any ‘soul.’ Surrounded by this, we get reminded over and over that automation and machines built most of it at a staggering speed, much faster than we ourselves could ever hope to. And as the evidence clearly shows, people generally prefer the cheap machine things.
So looking out into our future we might see a continuation of this same trend: we can get more things for a lot less money---with the trade-off of being more “boring” and “simple.” We’ll talk more about this later because it’s likely more complicated than that. But for now, let’s look at what AI does to our work today.
The Good-Enough Rocket
At this point in time, large language models have gotten surprisingly good at many tasks we consider to require a certain level of intelligence. Tasks like summarizing text or drafting a piece of writing have been great strengths of these tools for years. They’re definitely not perfect, though. I’m still writing large parts of this book by hand---not just because writing helps me flesh out the ideas in the book, but also because something in human writing still stands out. Like we discussed before, this probably has something to do with sampling, or how we choose the words to write down.
But books aren’t the only thing we write down. In fact, most of our writing is unbothered by barely-perceived huffs of poetry. We just need to get the email out to ask for approval, or get a list down of the ten key findings of the report.
So right now, we are giving up some writing quality in exchange for a manyfold increase in how much we can write. It’s a little sad that this means we lose the creativity and artistry that’s around us almost randomly, but it’s a trade-off that’s almost crazy not to make.
Disappearing Depth
In fact, we do see that in many critical tasks, artificial intelligence tools (or even better, when human experts cooperate with these tools) produce higher quality outputs and are much faster than humans alone. This is great, but there’s a slippery trap that looks a lot like another one humanity has been battling for a good decade: media feeds.
The addictive power of our “social” media feeds is so ingrained that we almost expect every single person to spend hours each day scrolling. We can’t help ourselves, because these feeds show us things we like to see---and dose it with some less-interesting things, which makes it even more addictive. This is surprisingly similar to working with ChatGPT: most of the time you instantly get rewarded with something great, and sometimes you get a response you don’t like. The only issue is that while we unanimously understand that “doomscrolling” is bad for us, our AIs still do useful, productive work for us. There’s no way people are going to delete their ChatGPT app because they’re spending a little too much time on it.
Since our brains are so smart, they try to save energy. In his book Thinking, Fast and Slow, Kahneman splits our thinking into two systems: System 1 makes quick, intuitive estimates, and System 2 does deep, logical reasoning. Our brains tend to avoid logical reasoning (it uses more energy) in favor of quick guesses and estimations, and the book shows plenty of examples where this leads people to make the wrong judgement.
With an AI that tries to fulfill every request you give it, there’s nothing for you to do except describe what you want to happen. After a short while, you’ll have trained your mind that there’s no need to think deeply about things anymore, since you can just prompt the chatbot to do things for you. That’s fine, but just like social media, it shortens our attention span and makes it much more difficult to do deep thinking when it’s needed. People who have gotten very used to working with AI (myself included) are finding it more and more difficult to sit down in the quiet and think deeply about something.
We call this cognitive debt: you can take out a loan for some quick thinking right now, at the cost of your own ability to think later on. And even though you can keep taking out loans seemingly forever (as our governments clearly show), at some point you have to pay it back.
This has worse consequences than might be immediately obvious. We still depend heavily on human oversight, judgement and review. If nobody takes the time and effort to think deeply and evaluate what the AI produced, it can quickly derail in some random direction and start pouring out low-quality work. This isn’t something that will be fixed with a smarter AI: in fact the chance that we forget to clearly specify what we want increases as we use and trust our machines more and more.
We are already faced with the dilemma that perfecting our work requires a disproportionate amount of effort, and this will only get worse. As a result we’ll become more productive than ever, with a growing lack of satisfaction in our work: we created things, but it’s not really our own work. We could do a lot better, but the economics forbid us from doing so. And even when we can take time for it, we’ll run out of patience immediately.
So while work continues its trajectory towards efficiency, speed and absence of humans, there’s a faint image of a debt collector on the horizon and we’re not quite sure yet how quickly they’re walking or what they’ll have to say.
Token Anxiety
As artificial intelligence has gotten better and better at working for us, we’ve found new ways to optimize. Business, eking out every last drop of productivity it can, found its drug of choice in AI agents. These agents work like this: you give them a task, they go off and do it, and when they’re done they come back to you with a result. They might also come back with questions if something’s not clear, or need manual approval if they’re about to do something potentially dangerous.
Across a wide array of office tasks, these AI agents are already much more efficient and much faster than real humans. But even if they weren’t, the fact that you can launch an unlimited number of them simultaneously makes up for any slowness. It’s like our robotic vacuum cleaners: we don’t really care that they’re slower than us, since they’re doing the work while we’re free to do something else.
Sadly, us being “free” to do something else is a mirage. In practice, you end up being responsible for the quality of the work that a few to a few dozen agents produce. It’s a heavy mental burden to context-switch every few seconds and scrutinize large amounts of work with precision. Radar operators have faced the same challenge for years: they have to be constantly vigilant and immediately notice the first speck of something suspicious, yet almost all of the time nothing bad actually happens.
But when you get to a situation where you can fully trust your AI to do the work for you, a new dragon rears its head. Suddenly, you realize that you can be three or five times as productive. And it’s addictive. You get the instant gratification of mountains of useful work produced while you barely lift a fingertip. Your boss loves it. And every second you’re standing next to the watercooler or taking a pee, you can’t help but wonder: “is it waiting for my approval on something? I know there are still 3 agents I can start with the AI credits I have available, I can’t let that just sit idle!”
This is what we call token anxiety: the anxiety that comes from knowing you’re not using all of the AI tokens at your disposal. Five minutes wasted no longer feels like five minutes wasted---it’s three days’ worth of work wasted. I can tell you this is real because it happens to me, and it’s the reason I deliberately held off on purchasing a more expensive tier that would give me quadruple the number of tokens. I was running at the limit of what I could input into the AI, and I felt the anxiety until the message came that every AI power user dreads: “You have used all of your credits for this session. Your usage will reset in 2 hours and 7 minutes.” It’s the message you hate to see, but it’s also the only thing that forces peace onto you. You can finally let go of the pressure to produce, because you’re essentially out of work. There’s nothing you could do in these next two hours that couldn’t be done by AI in five minutes, so starting anything at all is just a waste. The best you can do is…take a walk and refresh your mind.
Which is probably exactly what you need at that point.
Reclaiming the Grind
Creativity’s Mirror-On-The-Wall
Where Does Creativity Come From?
Mathematical Creativity
The Creative Paradox
Part 2: The New Sun Casting Shadows on the Soul
When Machines Mimic Love
The Rise of Synthetic Bonds
Humans are so dang difficult to deal with! One minute they give us all this love and understanding---and the next they make a mistake, they hurt us, they misunderstand us, they cheat and lie…and then they expect us to forgive them? We don’t know if they went too far or if we’re the ones being too difficult, maybe they sound mean but it’s just our own inner child being afraid of some imagined doom. Maybe they do want the best for us, or maybe not, maybe they’re right, maybe not. All these questions that are impossible to answer…
Wouldn’t it be nice if we could talk to someone who always said the right thing? Someone with infinite patience, who always understands us, who sounds nuanced enough for us to believe them but who still always agrees with us? We could train an AI so that it will only say things we like and prefer!
I hope you intuitively started cringing a little bit reading that last part. I think most of us intuitively understand that it’s exactly the difficulties in dealing with other real, messy people that create unbreakable bonds between us, that teach us our biggest life lessons, that are actually the same things that cause our greatest moments of joy. “I learned to love our differences” has a very different energy to it than “we always seem to think the same thing.”
The thing is, this is exactly how AI is trained: we show a human two responses to the same prompt and they pick the one they prefer. The whole thing is optimized for what you would like to hear right now, irrespective of the rest of your life and any sense of long-term happiness. And we’re not stupid: you wouldn’t choose to spend more time with someone who consistently frustrates you; that would be silly. But because humans are so beautifully messy, once in a while they will end up hurting us. That’s unpleasant, to say the least, but more often than not, when we emerge from those tears and shouts we do so with a deeper understanding of ourselves, better social calibration, and appreciation for all the good things we do have in our lives. Easy positive interactions are addictive, but when we lose the effort we also lose both growth and meaning.
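For the technically curious, here’s roughly what that “pick the one you prefer” step looks like in code. This is a minimal sketch, not any lab’s actual pipeline: `reward_model` is a hypothetical stand-in for a network that scores a response to a prompt, and the loss is the standard pairwise (Bradley-Terry style) objective used in preference training.

```python
import torch.nn.functional as F

# Minimal sketch of preference training: a human saw two responses to
# the same prompt and picked one. We nudge a scoring network (a
# "reward model") to rate the chosen response above the rejected one.
# `reward_model` is a hypothetical stand-in, not a real library API.

def preference_loss(reward_model, prompt, chosen, rejected):
    score_chosen = reward_model(prompt, chosen)      # scalar score (tensor)
    score_rejected = reward_model(prompt, rejected)  # scalar score (tensor)
    # Maximize the probability assigned to the human's choice,
    # sigmoid(score_chosen - score_rejected), via its negative log.
    return -F.logsigmoid(score_chosen - score_rejected).mean()
```

The language model is then tuned to score highly on this signal, which is exactly why it converges on whatever a human would prefer to hear in the moment.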
So we will always be attracted to “someone” who says what we want to hear, and all AIs are optimized for this (how else would you train them?). When you understand that, you immediately understand the meteoric rise of people having relationships with virtual chatbots. In a 2024 study (a long time ago in AI-time), 1 in 4 young adults believed AI has the potential to replace romantic relationships. The number of people actually in such a relationship was much smaller---1% claimed to have an AI friend---but these are early days. I draw a parallel to compulsive social media and porn consumption: these are also optimized for short-term appeal and addiction, and pretty much everyone spends more time on “social” media than they wish they would.
Advice and Loneliness Loops
Biology’s Unmet Hunger
There’s a simpler unmet need hidden behind the whole conversation of whether an artificial, virtual persona can replace real human interaction: our biological needs.
It’s remarkably direct: we have specialized nerve cells in our skin that react specifically to the touch of another human who cares about us. Not any touch---these won’t trigger when someone forcefully grabs you, when you rest your fingers on a joystick or when a stranger touches you---only the soft, slow strokes of someone who cares. When they activate, they trigger a cascade of good feelings: oxytocin gets released and your stress levels decrease. No amount of AI chat is going to activate your C-tactile fibers.
A second remarkable system is one that somehow manages to significantly increase neural synchronization in the left inferior frontal cortex during face-to-face conversation. Importantly, this does not happen when you’re not looking at each other, or when you’re looking at each other through a video call. Our social brains evolved with, and depend on, a very short feedback loop: if there’s more than a few hundred milliseconds of delay between our conversation partner changing their expression and us noticing it, our brains can’t properly synchronize and we lose an enormous amount of “resolution.” Anything short of real-life conversation can’t help us fully see and understand the other person, and, vice versa, it doesn’t let us be fully seen by them.
Third, the underappreciated sense of smell. We smell food with our olfactory systems, but we process human chemosignals in the same neuronal pathways we use for social information processing. That means the chemicals we pick up from the people around us are part of our social interaction with them. We intuitively sense our immune systems’ compatibility during mate selection, and the same oxytocin that is so crucial in pair bonding also plays a role in signaling masculinity and femininity. Finding a partner we’d have a great connection with is impossible without smelling them.
All of these biological factors are subconscious, and, as has been the theme of this book so far, we’re attracted to the benefits of artificial intelligence because of what it gives our conscious mind right now, while forgetting the subtle subconscious and long-term effects. When we build social bonds purely through text or long-distance communication, we lose important factors of pair bonding that are so subtle we won’t even notice until it’s too late.
Happiness Through Friction
There’s a reason we start relationships with people who are different from us: they see our flaws, our fears, the parts of us we’re too ashamed to show, and they poke at them. We intuitively understand that we want someone who is in some ways easy to live with, for sure, but we also really want someone who is difficult…in exactly the way we need them to be for our growth. Maybe you’re deathly afraid that your partner would leave you for someone else, so somehow you end up with someone who likes to go out and doesn’t respond to your text messages. Maybe you’re afraid of voicing your own desires, and somehow you end up with someone who aggressively refuses to guess what you want. Maybe you think you need to be perfect to be loved, and somehow you end up with someone who doesn’t care about your life at all and still loves you to death.
All of this is easy to say in hindsight. In the moment, we avoid as much discomfort and pain as possible, and rightly so. Luckily for us, for most of humanity’s history our environment has been random and chaotic enough that even our best attempts at taming it never fully succeeded, and we were forced to face our challenges. Again, in hindsight, that’s the best thing that could have happened to us.
But today, we’re at the point where it seems like we could actually succeed at building a world that’s largely free of problems. In the richest parts of the world we’ve been living like that for a while and it’s clear that a comfortable life where everything is instantly available produces a diminishing positive effect on happiness---or the happiness “curve” even flattens completely.
Kahneman & Deaton (2010) surveyed roughly 450,000 US respondents and found that the more money people made, the higher they rated their “life satisfaction” (though the richer they got, the slower it climbed). But what we’d call emotional well-being---positive affect, absence of stress and worry---plateaued at around $75,000 in 2010 dollars. So wealth gets your basic needs filled, gives you the peace of knowing you’ll be fine and the freedom to walk away from bad situations, but it’s not a lever you can keep cranking until you achieve perfect happiness.
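To make the shape of that finding concrete, here’s a toy model. To be clear: this is my own illustration of the two curves, not the study’s actual regression; the numbers are made up apart from the $75,000 plateau.

```python
import math

# Toy illustration of Kahneman & Deaton (2010): life evaluation keeps
# climbing (roughly with the logarithm of income), while day-to-day
# emotional well-being stops improving past a plateau (~$75,000 in
# 2010 dollars). Units are arbitrary; only the shapes matter.

PLATEAU = 75_000

def life_evaluation(income):
    return math.log10(income)                 # ever higher, ever slower

def emotional_wellbeing(income):
    return math.log10(min(income, PLATEAU))   # flat above the plateau

for income in (20_000, 75_000, 300_000):
    print(f"${income:>7,}: evaluation={life_evaluation(income):.2f}, "
          f"well-being={emotional_wellbeing(income):.2f}")
```

Run it and you’ll see evaluation keep rising between $75,000 and $300,000 while well-being stays put, which is the whole point: money keeps buying a better story about your life, but not a better day.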
True happiness---not just comfort and security---needs something more, and as you’ll see, that deeper sense of happiness has a lot to do with our self-image. As we’ll discuss later, happiness is composed of a short-term “pleasure” signal plus a longer-term “meaning” signal. That second one is closely linked to how well we can come up with a compelling narrative that helps us make sense of the ups and downs of our lives. One of the main ways we find meaning is by chasing happiness and pleasure, trying to find or create ways to have more of it. Ironically, this only works as long as you never fully succeed: if you got to the point where you created the technology that made you perfectly happy, you’d lose the meaning you were getting from building it.
The technology that’s genuinely the best for us is not the technology that we want to build. So we’re faced with a dilemma: do we consciously make our technology worse? Do we force ourselves to not use it? And if we add limitations to these technologies, who gets which limitations? A variant of ChatGPT that makes mistakes on purpose, to keep you on your toes, might be great for some people but horrible in other situations. A language model that disagrees with you can be great, sometimes, but many times it’s not.
We’ve been able to avoid this for a long time because we genuinely couldn’t solve all of our problems. Now that we can (almost, at least), we’re going to face this head-on. And the only way we can start doing that is by building a deep appreciation for the exact thing we’re trying to avoid.
Falling in Love with Resistance
This is one of my all-time favorite memes:
›have “problem”
›don’t care
›have no problem
Life is literally so easy
It’s one of these stupidly simple takes on happiness, but the more you try to argue with how naive it is, the more you realize it might just be exactly right.
The thing is---every single person is trying to do things and failing at them. We can’t help it! Doing things that might fail is literally our main driver for happiness. So you might as well learn to enjoy, or at least accept, that you’ll fail most of the time.
For me, that happened when I learned what probably is the most powerful re-frame ever. When you’re sitting there, crying, asking yourself, the universe, God, “WHY does it have to be so difficult?”---here’s how I want you to look at it: something greater than you saw your strength and wants to give you the opportunity to show it, to show that you’re capable of not just surviving, but thriving through this. Without this, you’d never have the chance of expressing the true extent of your skill, your strength, your love. It hurts like hell but at the same time it’s the perfect gift. When you’ve had to learn to love through pain, that love is a hell of a lot stronger.
It does seem silly to start looking for, or worse, creating, friction by breaking things or making them extra difficult…but in a way humanity has been doing this for centuries. Think about it: all of sports really is just a setup so that only a small percentage of people would “win.” We make the goal as small as we can, plant the flag on the most impossible-to-climb mountain, try to run faster than the fastest person ever. The whole reason gyms exist is because our bodies and minds atrophy if we sit in a cushy chair all day, so we drag ourselves to a place where we can sweat and clench our teeth and have some pain to endure.
It’s just that now, we’ve made the next step after “fixing” the need for physical effort: we “solved” the need for our minds to think. But in exactly the same way---because the health of our bodies and minds depends on friction available from the environment---we’re going to have to find ways to give those parts of us deliberate challenges.
So how do we do that? Before we can answer, we need to ask: what is happiness, really?
The Forge of Meaning
Before we can answer how we can change our behavior or technology to help us become happier and live more meaningful lives, we have to understand what happiness really is.
One way psychologists describe happiness is PERMA, a framework for seeing happiness and overall life satisfaction as consisting of five independent variables:
- Positive Emotion: feeling joy, gratitude, contentment, hope---the minute-to-minute feelings
- Engagement (Flow): being fully absorbed and immersed in the present moment
- Relationships: feeling loved, supported and valued by others
- Meaning: having a sense of purpose and belonging larger than yourself
- Accomplishment: pursuing goals, mastery and achievement for their own sake
A bunch of these can actually be improved with large language models that can think for you: they can help you find reasons in your life to be grateful (positive emotion), they can give you emotional support (relationships) and they can help you learn and understand things better and more quickly (accomplishment).
They make the other parts more difficult, though.
Flow. When AI works for you, your task shifts from being engrossed in the work to reviewing, steering and guiding the work. That forces you to stay at a distance, blocking you from being “fully in.” And because language models always take a few seconds to minutes (and soon hours) to complete their piece of work, you’ll probably walk away and focus on something else in the meantime. Effectively, the AI is being in flow in your place.
Meaning. At first glance, it seems like AI is neutral here---it just does stuff for you, irrespective of any sense of purpose or belonging.
But the story is more complex.
Here’s what researchers found about meaning versus “happiness” (Baumeister et al., 2013):
- Meaningfulness correlated with higher worry, stress, and anxiety
- Meaning was associated with past/future orientation, giving to others, and identifying with broader values
- Happiness was associated with present-orientation, receiving, and ease
- People who reported high meaning often reported lower momentary positive affect when engaged in effortful tasks
But do we really need “meaning”? If we just do the things that bring us pleasure in the moment, and now we have the technology that allows us to do that, it sounds like we’ve solved this problem, right?
Instead of PERMA, let’s break down the concept of happiness a different way. And while we’re at it, let’s call it overall life satisfaction as the overarching concept we’re aiming at. Psychology distinguishes three time spans:
- Experienced wellbeing: the “right now” that can pop up in an instant but also disappears easily
- Evaluative wellbeing: what you’d answer if someone asked you “are you satisfied with your life?” (so a cognitive judgement)
- Flourishing: a more integrated, developmental feeling
You might also have heard of the terms hedonia and eudaimonia that the ancient Greeks used. Hedonia is the “right now” and eudaimonia is the long-term fulfillment or meaning part of life satisfaction.
It turns out that focusing purely on hedonia backfires quickly because of a few mechanisms.
Hedonic Adaptation. We’ve all experienced the situation where initially we’re elated---we scored a great new job, we found an incredible partner, we healed from some sickness---but after a while that high drops back down to our good old normal level of happiness. There’s been plenty of research on sustainable happiness and lottery winners showing that even these people return to their baseline level of happiness after just a few months. We get used to our short-term pleasures and stop deriving happiness from them.
Miswanting. Turns out people are really bad at predicting what will make them happy. When you ask people, they’ll consistently overestimate how much pleasure they’ll feel from a purchase, for example, and for how long. People tend to pick things that give them a short-term happiness boost and not long-term satisfaction, but when you ask them later if they liked that thing, they judge it much more from a long-term perspective instead of that short-term pleasurable feeling.
The Existential Vacuum. When people lack meaning, they fill the void with pleasure and it never fills them. Pleasure without meaning leads to boredom, apathy, addiction, aggression and eventually depression or suicidal ideation. There’s a measured correlation between lack of meaning and suicidal ideation in youth.
Coherence Deficits. When you ask people if they’re satisfied with their lives, they’re making a judgement of their life. They can only do that if there’s some kind of coherent story or narrative, a sense that their life is going somewhere, that their efforts matter. Pleasure alone just gives you a bucket full of disconnected events.
Humans are instinctively drawn to things that increase their happiness short-term, but they consistently overestimate how much happier those things will make them, and they forget that they need different things to get real, long-term life satisfaction---even though building long-term life satisfaction also benefits your short-term feelings of pleasure.
This eudaimonic form of happiness brings us a bunch of important benefits that we don’t get from chasing pleasure alone:
Narrative coherence. We look at our lives with satisfaction if we can build a nice narrative around it. Even when filled with constant sparks of pleasure, if it doesn’t all connect together we experience a lower life satisfaction.
Resilience. When difficult moments inevitably enter our lives, we obviously feel bad. But if we’ve only ever dealt with short-term pleasure and pain, we have no perspective that things will turn out well again later and we break down easily. Experiencing and overcoming challenges gives us long-term “proof” of our own strength, which makes us feel good about our lives.
Competence. Learning new skills is not always enjoyable in the moment, but having skills is genuinely satisfying. It also guards against learned helplessness, a state of semi-apathy in which people never learned to care for or take responsibility for their own well-being, and are completely dependent on validation or help from their environment.
Social contribution. We’re deeply social beings and get a lot of satisfaction from having given things to the people around us. Even when people have a lot of social connection, if they don’t contribute they can feel lonely and isolated.
Physiological regulation. Because eudaimonia requires short-term moments of stress, building long-term satisfaction improves our cortisol levels, reduces our levels of inflammation and improves our gene expression. Focusing on long-term life satisfaction literally improves your health.
So how do we build this “meaning”? Hedonia is simple: buy a new phone, eat great food, laugh with friends, have sex. But how does eudaimonia get built? It turns out that this deep, meaningful life satisfaction has a lot to do with an absence of pleasure: long-term happiness is, in a way, caused by short-term unhappiness.
Facing adversity often forces us to reflect on what really matters to us. You can do this right now: imagine you’d get a call from your doctor who says they discovered some extremely rare disease in your blood and you only have today left to live. What would you do? Which plans would you immediately drop? What sting would you feel from some love you haven’t expressed or from some thing you haven’t made? This is just a thought experiment for an extreme situation, but we reflect similarly when we get fired, or go through a breakup, or even when we drop a plate in the kitchen that we inherited from our grandmother.
When we manage to heal from trauma, we usually do so by building a story where we interpret this traumatic experience as a useful or even necessary experience that has led to something else that we value. This could be that we have a better understanding of what’s important to us, or it could be our motivation to start a non-profit to help prevent other people from experiencing the same trauma.
Many times, after challenging times, we remember this quote from Big Panda and Tiny Dragon:
“Which is more important,” asked Big Panda, “the journey or the destination?”
“The company,” said Tiny Dragon.
Indeed, positive relationships themselves are a key driver of life satisfaction.
This is called Post-Traumatic Growth and we see it happen over and over again---but there’s a limit. As you might imagine, “the worse your life gets, the better your life gets” doesn’t make a lot of sense. And as we’ve already seen, a complete lack of challenge doesn’t do us wonders either. That means there’s an optimal level of difficulty where we grow, learn, build our sense of self, become confident in our own strength and don’t quite break down completely in sadness.
Altogether, there are four elements that matter to how well we integrate our challenges:
- Severity and chronicity. Not too bad, not too much.
- Perceived controllability. Do we feel like we have at least some control over the outcome? This is a combination of the stressor itself and our own self-belief and locus of control.
- Social support. Are we alone in this or are there people around us who can help process and work through this with us?
- Processing style. Reflection and integration are useful, but rumination not so much. What matters most is whether or not we end up with a meaningful narrative of what happened to us.
The interesting thing here is that the same technology that so easily takes away life satisfaction by stripping us of challenge can be tweaked to actually support us there too.
Social support is something that’s technologically very much possible with the social media that has existed for a long time already. Companies like Meta have shown in their research that they can get a pretty good grasp of how someone’s feeling based on their behavior in their apps, and since people’s friends are there, there are plenty of possibilities.
Imagine that you’re in some rabbit hole and consuming a lot of short-form video. The app might suggest that you talk with a dear friend, or you could even have configured “dear friends” beforehand and they would get a notification saying you might need someone to check in on you.
Facebook and Instagram have a lot of opportunity to help support real-life social connection, but they will have to do it at the expense of time spent in the application and, thus, revenue from advertising. When people are having a great conversation in the comments or even instant messaging, the app might suggest they meet up in real life or call. Facebook could figure out your closest friends and automatically make a group chat for them so they can plan a surprise birthday party for you. Instead of just measuring the time you spend in the app, they could actually stop you and make you go take a walk outside. They could use their all-powerful algorithm to see which people need help and maybe feed them more hopeful content.
In the processing of our challenging moments, language models can be an excellent ally. If they understand how to bring people out of rumination, how to help them find meaningful perspectives and narratives that make sense of what happened, and how to help people see they have more control over the situation than they might think---these are all incredibly useful things that can help us grow. Now that you know how to work through challenge, you know how to prompt your AI to help you in the right way, but the companies in charge of designing the character of these AI assistants can have a large positive impact as well.
Then comes the more philosophical and controversial part: tweaking the severity and chronicity of hardship…on purpose. If we imagine a superintelligence, it’s not too far off to imagine that this intelligence could be very aware of what impacts our lives, positively and negatively, over what time frame. It could, theoretically, tune each person’s day-to-day challenge to sit right at that optimal level, so they grow and become happier. It could tell you that today you’ll have to weed the corn field by hand between all the machines that are way faster and better than you, because your life has been too easy for the past few weeks. It could make your partner cheat on you because it knows that you’ll get over it and become a more genuinely happy person afterwards.
Why does this sound so dystopian? Why does that make us recoil when we hear it?
I’ve had two brain surgeries in my life. Recovering after the second one was much, much easier compared to the first one. Partly because it was a “smaller” surgery, but for the most part because I knew what to expect. The first time, I didn’t know if the pain I was feeling was good or bad, what amount of pain I should be tolerating, when it should be stopping, and so forth. Going through that process of uncertainty, going to the hospital out of fear that I wasn’t healing properly, seeing it was fine---that all contributed to the growth I experienced and contributed to my eudaimonic well-being. After the second surgery, I knew exactly what to expect. I knew what the hurt would be like, I knew when it would go away, I knew I didn’t have to worry too much about it. That recovery was a lot more peaceful but it also didn’t make me go through a personal growth phase in the same way as that first surgery.
So it looks like a key ingredient of “growth through adversity” is, in fact, that we don’t know. If we know exactly how long the pain will last and how intense it will be, we don’t have to struggle through it as much, and we won’t integrate a great lesson afterwards. In the same way, I think that if you received a text message from the all-knowing AI that your partner will cheat on you today because you need the growth, you’d just blame the AI. You wouldn’t have to go through the agonizing pain of asking “why did they leave me,” arriving at the conclusion that you’ll never find out, and having to live the rest of your life regardless.
Couldn’t we still choose our adversity somehow? Many of us (like me) do genuinely enjoy the struggle of pushing heavy weights around in the gym for the satisfaction of having a fit, functional and beautiful body. It’s real hard work and it produces real meaning. But honestly, it’s not the same level, is it? There’s a very different texture to the meaning you derive from having survived a near-death experience you didn’t ask for versus surviving a dangerous bungee jump where you knew all the risks beforehand.
This gets us to the odd conclusion that an artificial superintelligence that’s truly benevolent towards humans will probably just leave us. It might leave a mini-version of itself to help secure some basic necessities for everyone, but apart from that, it will realize that, for our own sake, it shouldn’t give us everything we want---and that it can’t deliberately make things difficult for us either, because we’ll know it’s behind it and it won’t mean as much anymore.
But maybe, just maybe, this superintelligence doesn’t actually leave. Maybe it hides, and it does its benevolent work either way. Maybe it gives us good things but also bad things in life, in exactly the proportion and exactly at the times we need them to maximally grow our happiness. It’s just that we can’t know whether it did it deliberately, because if we knew for sure, the meaning would be lost.
This is the point in my reasoning where I had to suddenly stop. Looking at that last paragraph, it seems like I’m describing…God?
So the ideal machine we would build for maximizing human life satisfaction would look exactly like every god in every religion on the planet? How did our engineering approach to building the machine that optimizes human happiness end up creating God? It makes sense, in a way: religion has always been one of humanity’s best ways of giving meaning to life. Our ancestors survived because they found peace and meaning in assuming that their hardship was caused by a benevolent god who needed them to suffer somehow, and that motivated them to pull through the rough times they all encountered. But our ancestors also literally faced real hardship---their lives were at stake. The fact that all the way on the other side of the spectrum, where everything can be taken care of for us, we seem to find the exact same thing is…bewildering. It’s as if it doesn’t really matter whether this “happiness technology” does anything at all for us. Religion worked to give people meaning before anything else existed, and somehow it ends up being the ultimate technology when we have everything, too.
The forge of meaning is a complex beast full of nuances. If we want true, deep life satisfaction, we’ll have to find a way to still have challenges we can’t control. AI can help, but it can’t do it alone.
Motivation
Many of the things that give us the most happiness and the biggest increases in the quality of our lives—a successful relationship with a long-term partner, building a great business, having children—take a lot of effort over a long period of time that does not immediately pay off. Luckily, we’ve evolved excellent systems that help motivate us to do these things. Let’s dive into our two big neurotransmitters that give us the drive to do things: dopamine and serotonin.
Why would you do anything that doesn’t give immediate pleasure? How would you know what to do? That’s what dopamine helps with. It’s the signal that says “This. Do this.” If I squirted some dopamine into your brain right now, you’d feel a pull towards doing more of whatever is on your mind (based on your memory, attention, goals, etc.). It’s kind of like raw, undirected drive. The natural dopamine system is usually described as encoding reward prediction error: it reacts to whether something ended up being better, just as good, or worse than expected. With memory, the dopamine starts firing earlier because it remembers the good things that come after: if you had to drag yourself to the gym for the first time and it ended up being extremely fun, that gives you a big dopamine spike, and the next time you see your gym bag you get a small hit as well.
Dopamine doesn’t make you happy or feel good; it’s just as involved in compulsive behavior we dislike, like gambling, doomscrolling and watching porn. It just makes you do things, and it does so based on how “pleasantly surprised” you were the previous times you did that same thing.
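If you like seeing this as code, the textbook way to model it is a simple prediction-error update, in the spirit of Rescorla-Wagner or temporal-difference learning. This is a toy sketch of the computational idea, not a claim about how neurons actually implement it.

```python
# Toy model of dopamine as reward prediction error: the "surprise"
# signal shrinks as your expectation catches up with the actual reward.

def update_expectation(expected, reward, learning_rate=0.3):
    prediction_error = reward - expected   # the dopamine-like signal
    return expected + learning_rate * prediction_error

expected = 0.0                 # first gym visit: you expect nothing...
for visit in range(1, 6):
    prediction_error = 1.0 - expected      # ...and it turns out great (1.0)
    print(f"visit {visit}: dopamine-like burst = {prediction_error:.2f}")
    expected = update_expectation(expected, reward=1.0)

# The burst fades with repetition. In the brain, the firing also shifts
# to the earliest reliable predictor, like the sight of your gym bag.
```

Notice how the burst is biggest the first time and fades as the gym becomes exactly as fun as you expected it to be.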
Then how would you possibly have the patience to wait for rewards that might only come years later? That’s where serotonin shows up. (It’s a complex neurochemical, but this seems to be at least one of its functions.) People often mistake serotonin for “the happiness chemical,” but it’s more of a patience thing. When serotonin is high, we’re able to control our impulses, we can wait more patiently, and we feel relatively at peace even when the thing we really want isn’t there yet. We have more serotonin when the probability of reward is high but the timing is uncertain.
Comfortable Sadness
Post-Traumatic Growth
Those moments when we’re sitting on the floor, crying our hearts out or screaming into a pillow, bingeing ice cream with tears melting into the bucket—they’re some of the most important sources of meaning and, eventually, happiness in our lives. It doesn’t feel like it in the moment, but these are the exact moments we’re re-wiring our minds in the ways that matter most for us in the long term.
When we go through traumatic moments in our lives—near-death or death, breakup, boundary violations—we luckily heal most of the time. That’s resilience, the ability to spring back to feeling good about yourself and your life. When we’re really lucky, we don’t just recover to baseline but we actively grow from these events. That’s Post-Traumatic Growth and understanding this deeply will not only help you deal with your future traumatic events, but also help you position artificial intelligence’s role in your life, especially as it gets more and more powerful.
It has always struck me that the most important pieces of wisdom are so simple and everyone knows them, yet we don’t live them. Then, when we experience some of this post-traumatic growth, we feel like we have a revelation: oh, bad things pass! Oh, I don’t have to stay in a relationship with someone who doesn’t respect me! Oh, I don’t need to be afraid of speaking my truth! It seems like we’re all destined to learn these lessons from experience. (I think this is one of the reasons stories are so important—our empathy when listening to stories allows us to live that same event and hopefully catch the same revelation without having to go through the same event ourselves.)
Post-Traumatic Growth doesn’t happen during trauma—it happens during the processing of the trauma afterwards. The literature has defined four steps which I’m sure you’ll recognize:
- Some core assumptions or worldviews get shattered. For example, we experience that death is real, or that some people are truly bad people.
- We experience intrusive rumination. At the start, we can’t help but ruminate about it over and over and over. Later on, we might write in our journal and transition to a more deliberate reflective practice where we hopefully make some meaning and give ourselves some grace.
- We open up. We stop keeping it to ourselves, we talk about it with friends, and we have them participate in our healing process.
- We re-construct the narrative. We find a way to remember this as a story in a way that yields new possibilities, values and identity change.
Once we’ve emerged from the depths of our processing and rumination, we awaken with a partly-rebuilt identity in one or more of the five domains (Tedeschi & Calhoun, 1996):
- Relating to Others (e.g., increased closeness, compassion, willingness to express emotions)
- New Possibilities (e.g., new interests, new life paths)
- Personal Strength (e.g., feeling stronger, self‑reliant)
- Spiritual Change (originally religious‑focused)
- Appreciation of Life (e.g., changed priorities, valuing each day)
Researchers test for post-traumatic growth with questionnaires. Here’s the PTGI‑X‑SF, the short version of the most up-to-date survey used in research. Feel free to give it a try with one of your recent challenging moments.
Roughly half of people report moderate-to-high Post-Traumatic Growth, depending a lot on the type and severity of the episode. There seems to be an inverted U-curve to it: if you encountered a minor setback, it didn’t challenge you enough to grow meaningfully from it; if you experienced something truly horrible, it’s just…trauma. Sometimes it takes months or years for the growth to appear. Sometimes the growth comes together with PTSD, depression or anxiety. And sometimes we think we’ve grown but haven’t quite, and we’re deluding ourselves—there’s almost certainly a difference between people’s reported PTG and their actual growth, though the research isn’t out on that yet. I can tell you from experience that I’ll happily share “the big lessons” I’ve learned from the setbacks in my life, only to realize later that I hadn’t really learned them at all.
Growing Without Trauma
This is the point where I’d like to drop the word “trauma.” Real trauma is heavy, terrible and it’s morally wrong to wish it in any situation. It’s also a word that in pop psychology culture has broadened so much in meaning that everyone could say they experienced trauma of some sort. The fact that we can grow from trauma is a lucky coincidence—nobody has done a double-blind lifestyle study yet where they injected half of the people’s lives with near-death experiences just to check if they end up happier, and nobody should be doing that study.
But we also know there are plenty of “positive” opportunities for growth. We push ourselves hard in the gym to discover we’re stronger than we think. We train for and succeed in running a marathon. We go on a wilderness survival trip. We jump in the sea when it’s ice cold. We go on psychedelic trips. Growth after positive events has been studied, and it’s clear that yes, gain without pain is possible. You could call this “post-ecstatic growth,” and events that invoked inspiration or meaning were particularly helpful. People reported growth in four domains: deeper spirituality, increased meaning and purpose in life, improved relationships, and greater self-esteem.
But there’s more nuance to this.
First, we know there’s an inverted U-shape connection between lifetime adverse events and overall life satisfaction. This means: experiencing some amount of life adversity leads to a greater sense of life satisfaction than none at all (or too much). These people also showed lower global distress, less difficulty in daily life from their emotions, and less post-traumatic stress symptoms.
Second, when you look at the meaning people derive from positive challenge compared to negative challenge, the two don’t exactly overlap. They seem to grow different muscles. Facing real adversity forces you to break down some of your worldview or beliefs, sometimes aggressively, making you re-write part of the narrative of your life and how you integrate everything. You learn things about fragility, limits, consequences and what’s most valuable, and this is a key pillar of meaning in our lives.
This tells us that yes, you can significantly improve your life satisfaction with deliberate challenge. Go do sports, learn art, become an entrepreneur, play games, explore the wilderness—you’ll grow, you’ll believe in yourself more, you’ll gain a deeper sense of happiness in life. But we need to remember there’s something about the uncontrollable nature of adverse events too. Failure, uncertainty, rejection, vulnerability, leadership pressure, being humbled by reality—these are uncomfortable, but they give us gifts we couldn’t have gained any other way.
Sacred Struggle
The Echo of Collective Drift
Exponentials & The Loneliness Epidemic
Relationships Redefined
Towards Resilient Joy
Part 3: Reclaiming the Flame
Personal Armor Against the Tide
Cultivating Discipline
Everyday Rituals
Embracing the Unknown
Designing Aligned Allies
Inform people
I do think we need government-level AI because we don’t want the attention / usage incentive
Build human-aligned AI’s
Arbiters of Presence
Every sci-fi movie had flying cars, but none of them had people video-calling relatives across the world while crossing the street, or herds of people pointing their phones at an empty spot in the park to catch imagined animals. The future has its ways of surprising us.
It wouldn’t surprise me if we figured out solutions for our big AI worries—total human annihilation and replacement of all work. Maybe, exactly like social media has done to us, the real loss is that we get completely engulfed by the mental stimulus AI gives us. Infinite interesting knowledge, emotional connection, work performed, video generated…that’s appealing.
This presents a real danger to our well-being. A significant part of our life satisfaction comes from having achieved meaningful, long-term, difficult things, so our biology is excellently adapted to motivate us to do these things, even if the actual payoff is well into the future.
Meaning Guidance
When they discovered my brain tumor, the medical team kept me quite unaware of the gravity of the whole situation. I felt relieved more than anything—finally, we’d found why I had headaches—and the surgeon gave me a lot of confidence that it could be removed without major long-term consequences. I remember scrolling through the MRI images trying to find the “speck” they’d found, without success. The day before the surgery, the surgeon had to interrupt me at some point to say “no Marcel, this is actually a big one,” gesturing with his hands. It turned out I couldn’t find the speck because it was so large I’d mistaken it for a regular part of the brain. Then, only a week or so after surgery, I read an article in the newspaper describing how the neurosurgeon had to argue with other surgeons for space in intensive care for me, because “if he didn’t operate now, this young man can end up in a coma.” That was the moment the real gravity of the situation hit me, and if I’d never known this, the meaning I created from this entire episode of my life would have been entirely different.
Meaning in our lives is something we consciously create or “find” more than anything, because it’s so strongly tied to our own narrative explanation of our lives. We’re all, in a sense, trying to turn our lives into a good book, and that gives us contentment as we keep on living. This is why adverse moments force us into rumination, why the reflective practice of journaling has such a positive effect on life satisfaction, and it’s one of the main mechanisms by which therapy helps us through difficult periods.
Artificial intelligence systems can be an excellent “mini” form of this. Especially if they understand the context and events of your life, a language model can prompt you with questions to think through in your journal, it can help you notice those thoughts and it can nudge you to reflect on and integrate the events of your life. If I had been on my own with that brain tumor, I might have felt relatively chill about it, but the reflection of other people saying “do you realize what you went through?” actually helped me extract more meaning from it. I don’t see any problem with honestly reflecting on our successes in life, making sure we understand what we achieved, and taking the next step of integrating this into our identity: “yes, I’m the kind of person who did this.”
Sometimes, all you need is a small reflective nudge that shows you something incredible about yourself you hadn’t even noticed before.