Work in progress

Sacred Struggle

Finding meaning and fulfillment in a world where AI makes everything too easy.

Overview

This book asks a simple but high-stakes question: when LLM-based chats become woven into everyday life, what do they do to our creativity, our intelligence, and our ability to relate to others? The central thesis—“Sacred Struggle”—holds that friction and challenge are not bugs but features of human flourishing: we learn deeper, create better, and feel more alive when we face difficulty on purpose. The book resists both doom and hype; it treats AI as a powerful scaffold that can either amplify human agency or erode it into homogenization, over-reliance, and learned helplessness. The aim is to translate evidence into design principles for AI, personal practices for users, and social norms that keep the human in the loop—so the technology strengthens taste, judgment, and connection rather than numbing them.

For research and writing support, I’m looking for rigorous, recent, and citable work (ideally RCTs, field experiments, meta-analyses, high-quality qualitative studies) on: productivity and quality shifts; originality/novelty measures; learning depth, transfer and calibration; collaboration and relationship effects; attention/affect (addiction, distraction); and long-run skill development. I want both failure modes (homogenization, over-trust, mode collapse in thinking) and countermeasures (deliberate difficulty, sampling diverse outputs, active-recall/reflective prompts, constraint-based workflows, light “Dopamine Nutrition Label” ideas). The end product should convert findings into actionable heuristics for designers, concrete use-protocols for knowledge workers, and cultural guidance for a post-AI world that prizes strong character, responsible decision-making, long-term ambitious projects, and disciplined taste.

Introduction

Story-based intro

Methodology, what to expect, who this is for

Part 1: The Digital Mirage

Decoding The Machine Mind

Concepts & Terminology

This world of "AI" is wrapped in hype, mystery and confusion. From charlatans slapping the word on every product they find to researchers who can't fully explain why these systems end up seeming so smart, you'll find every opinion and perspective under the sun in a matter of seconds.

So I will add one more. I'll gradually build up the nomenclature, from the broadest words down to the more technical terms for the technologies we're talking about today.

Let's start with this word, Artificial Intelligence. When I took the course with that name in university I was expecting to learn how magic works. We didn't have ChatGPT yet, but the groundwork for it existed: we already had some seemingly magical technologies for things like translation and image recognition. Alas, it turned out to be more about things like optimizing the logic for finding the best chess moves. However, I did learn a definition of AI that turned out to be the most useful one in today's day and age: AI is where a system /appears to be behaving in an intelligent way/. This definition is wildly ambiguous, perfectly reflecting this word. It doesn't say anything about how we know if something is intelligent or what kind of technology it should be. Maybe you say that lights turning on automatically as you walk into a room is intelligent---and I can't disagree with that.

Now that we've satisfied the salespeople and saddened the scientists, let's turn to some more precise terms. Machine Learning is, in a sense, an approach to programming where you don't tell the computer explicitly what to do; instead, you show it examples and let it figure out how to do the task by itself. Most machine learning is "supervised": we give it the correct input and output (when a person asks this, you should say that). It can also be unsupervised, when we don't even know the correct answer.

But most of the "magic" has come from a third type: reinforcement learning. In reinforcement learning, you usually let the machine do its thing for a bit and then give it a score for how well it's doing. This is, for example, how robots learn to walk: you give the machine control over the motors and you score it, say, on how well it stays upright and whether it's actually moving forwards. This is the methodology where, in many different cases, the computer has managed to come up, all by itself, with behavior that is remarkably human-like.
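To make that try-score-adjust loop concrete, here's a toy sketch in Python. The "policy" is a single number, the score rewards closeness to a target behavior, and learning is plain random search that keeps whatever scores better. Real reinforcement learning uses far richer policies and gradient-based updates; the function names and the target value here are invented purely for illustration.

```python
import random

# Toy illustration of the reinforcement-learning loop: try something,
# get a score, keep the variation that scored better. The "policy" here
# is one number; real systems adjust millions of parameters.

def score(parameter, target=10.0):
    """Higher is better: reward behavior close to the target."""
    return -abs(parameter - target)

def learn(steps=2000, seed=0):
    rng = random.Random(seed)
    best = 0.0                                     # the initial, untrained "policy"
    best_score = score(best)
    for _ in range(steps):
        candidate = best + rng.uniform(-1.0, 1.0)  # try a small variation...
        if score(candidate) > best_score:          # ...score it...
            best = candidate                       # ...and keep it if it did better
            best_score = score(candidate)
    return best
```

Run long enough, `learn()` ends up near the target behavior without ever being told how to get there---which is the essential trick of reinforcement learning.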

In normal software, we write /code/ that describes how the machine should behave. In machine learning, we train models. The actual code, the "what to do" for machine learning systems, is relatively simple: you just multiply and add a whole bunch of numbers (this "how to" is the machine learning /algorithm/, which needs a trained /model/ to be complete). These numbers, millions or even billions of parameters or weights, are what the model learns during its training phase: the training process changes the parameters based on the examples the model "sees." Note that once the model is done training, these parameters don't change anymore. There is no inherent "continuous learning as it runs"; to get that, you have to re-train or do some other trickery.
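Here is a minimal sketch of that "multiply and add numbers" idea: the entire algorithm below is `w * x`, and training only nudges the single parameter `w` using input-output examples. The learning rate and the doubling task are invented toys, but the shape is the same one scaled up to billions of weights.

```python
# The "algorithm" is fixed: multiply the input by a parameter.
# Training never changes this code; it only changes the parameter w.

def predict(w, x):
    return w * x

def train(examples, w=0.0, lr=0.01, epochs=200):
    for _ in range(epochs):
        for x, y in examples:          # the examples the model "sees"
            error = predict(w, x) - y  # how wrong the current w is
            w -= lr * error * x        # nudge w to be a little less wrong
    return w                           # once training ends, w is frozen

# Teach it to double numbers purely from examples:
w = train([(1, 2), (2, 4), (3, 6)])
```

After training, `w` sits very close to 2: the model has "learned" doubling without anyone ever writing doubling into the code.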

You probably also encountered the terms neural network or deep learning---or even deep neural networks. These are roughly the same thing: a neural network is a specific structure of machine learning algorithm, and a /deep/ neural network is just a bigger version of it. There are many kinds of machine learning algorithms; most are tailored to a specific use case, but neural networks are very general and can be used for pretty much anything. The trade-off is that they need a lot more training before they actually learn to do something useful.

All of the "headline systems" you see today are deep neural networks, usually using some variation of the transformer architecture with attention. The development of these architectures is what allowed machine learning systems to deal with large amounts of data in one go (entire sentences instead of single words, entire images instead of tiny ones). Less technically, we're usually talking about Generative AI. This just means "it's an AI thing that /creates/ things"---because most machine learning is focused on, for example, classifying transactions or predicting which movie you'd like to watch next.

Most of our discussions today are about Large Language Models (LLMs). These are (very) large, deep neural networks that operate on language---that is, words and sentences. We'll dive into them next, because some of the specifics of LLMs will help you work better with ChatGPT and the like. But before that, I just want to mention one more class of Generative AI: Diffusion Models. A diffusion model slowly and iteratively removes noise. This is how image generation works (start with random noise and let the AI "remove" it), but you can also apply the idea of noise to motion (video generation) or even text (some LLMs are being built as diffusion models).
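The iterative denoising loop can be sketched like this. The "denoiser" below is a stand-in that simply nudges each pixel toward a hard-coded target; in a real diffusion model, a trained neural network predicts the noise to remove at each step. The target values and step strength are invented for illustration.

```python
import random

# Stand-in for the "image" a trained model would know how to produce.
TARGET = [0.0, 0.5, 1.0, 0.5]

def denoise_step(pixels, strength=0.1):
    # Remove a little noise: move each pixel slightly toward the
    # denoiser's prediction (here, simply the known target image).
    return [p + strength * (t - p) for p, t in zip(pixels, TARGET)]

def generate(steps=50, seed=0):
    rng = random.Random(seed)
    pixels = [rng.uniform(-1.0, 1.0) for _ in TARGET]  # start from pure noise
    for _ in range(steps):
        pixels = denoise_step(pixels)                  # iteratively denoise
    return pixels
```

After fifty small denoising steps, the random starting values have converged almost exactly onto the target---the same start-from-noise, clean-up-gradually process that image generators run at massive scale.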

How smart is AI?

We will forever be debating whether our Artificial Intelligence is "intelligent"---partly because we don't even know how to define intelligence to begin with. Over the last few years, we've seen large language models beat every test we gave them over and over again, but we keep finding "odd" behavior that feels to us humans like a very stupid thing to do.

These models are already making new scientific discoveries, and at the same time they're sometimes incapable of even honoring the request to "please don't repeat every question I ask you." Language models are now at the top of the mathematics olympiad and competitive programming, but ask them to write an article with a given number of words and they'll fall apart immediately.

I think we humans have made our task of judging the intelligence of these machines very difficult because we've defined intelligence in an odd way. You can think of "human intelligence" as composed of two parts: one is the intelligence embedded in the way our internal plumbing and intuition works; the other is what we can do if we consciously think things through. There's a big variation among humans in how good they are at mathematics or remembering things, so we've called that intelligence, while forgetting that things like staying upright on a slippery surface or knowing how far to stand in line from the person in front of us (so it's not awkward) are incredible signs of intelligence.

The nature of machines is different from the nature of humans, so things that are easy for us might be difficult for them and vice versa.

Dealing with AI's unpredictable mistakes

AI doesn't think in the same way we humans do. Sometimes we're surprised by how well it thinks, sometimes we get frustrated by how it could be making those kinds of "stupid" mistakes.

What are the common types of mistakes large language models make, how could we be fooled by them, and what can we do to make sure we can reasonably trust what it says?

We really don't understand LLMs

The Seduction of Ease

Learning: Opportunity and Erosion

I'm always baffled when I think about the students of today and what an incredibly different experience they have studying. Writing suddenly became the easiest thing in the world and you have access to something that can explain anything, in any way, within a few seconds.

There's one catch, though. A friend of mine who went back to university uses ChatGPT to find good analogies for the concepts he learns in law school, and while they're always clear, they are plain wrong embarrassingly often. Currently, large language models are a double-edged sword, but the benefit of 24/7 personal tutoring already far outweighs the risk. We've already seen enormous reductions in hallucinations, so I expect this to be a non-issue in a few years.

So is it good that we have an infinitely patient, all-knowing tutor in our pockets? My main strategy for studying mathematics was not rereading my course notes, but forcing myself to solve exam questions immediately. I vividly remember sitting on the floor for /hours/ trying to figure out how to solve a problem. It was not the fastest way of absorbing the subject matter, but it trained me in what is maybe even more important: the skill of retrieving and surfacing the knowledge that was already in my mind.

How We Learn

Every time we do or think something, a sequence of neurons in our brain fires. This means an electrical signal travels through the neurons, and between neurons there's communication via chemical signals: neurotransmitters. Along the long parts of these neurons, the axons, they are wrapped in /myelin/, insulating cells that protect the electrical signal against interference. It is believed that one of the main ways learning and skill acquisition work is through the buildup of more myelin along the neural pathways we use most often. All that to say: you get better at what you do more often.

But practically, just focusing on repetition will not help us in many cases. It might work for something like playing an instrument but not so much for grasping a new intellectual topic.

Work: The Debt of Productivity

Work is where we spend most of our waking hours, and it has some peculiar characteristics that make it especially prone to AI disruption. In a way, /work/ has for the entire history of humanity been the main target of technological innovation. From the fire we used to extract more nutrients from our food to robot drones inspecting our crops, automation has been at the core of working.

Since our society decided to treat businesses as entities separate from humans, they can act in ways counter to what would be best for the people in them. Or rather, a business doesn't really care. People get fired, get assigned a job that depresses them (like moderating TikTok videos) or just ruthlessly get swapped out for a robot dog.

But those examples are quite clearly "bad." What happens when we muddy the waters a little more---say by replacing the deeply satisfying work of a master sculptor with prefab moulds?

You see, for many of us our work /is/ our main source of fulfillment. We get to create something meaningful for this world, and that gives us happiness and satisfaction. Being social animals, we've evolved with a deep-rooted need to give back to our community. But imagine you're Mother Evolution (I can already hear the biologists screaming you're not supposed to do that---bear with me, we're only indulging for a few seconds). How could you "program" your little humans so they understand what's meaningful to contribute? Well, it seems that our old brains settled on the rule that if it's hard to do, it's valuable, and it will satisfy you if you do it.

This is a nice little shortcut that has served our species immensely for many thousands of years. But already today we're seeing the cracks in this insatiable need for doing things that are difficult: /what happens when we're done?/ What happens when our own ambition did its job /so damn well/ that there's nothing really left to sweat, cry and bleed for?

This isn't a problem you can solve without consequences. We could keep the laser crop-weeding machine in the shed and pull out the weeds with our bare hands, but are we willing to suffer through having less food because of that? Not to mention how silly it would feel to be "working hard while the robots could do it better but our puny little human minds need to believe they're important."

Luckily, business sidesteps this problem by caring only about productivity and efficiency. But that introduces a new problem that could exacerbate all of this even more: what if you find a new technology that produces at 80% of the quality you had before, at only 1% of the cost? This has been the plight of our world since the dawn of industrialization. We can make houses, clothes and objects many times cheaper---they're just a little ugly and devoid of any 'soul.' Surrounded by this, we are reminded over and over that automation and machines built most of it at a staggering speed, much faster than we ourselves could ever hope to. And as clearly evidenced, people generally prefer the cheap machine things.

TODO talk about depression and meaning numbers from studies

So looking out into our future we might see a continuation of this same trend: we can get more things for a lot less money---with the trade-off of being more "boring" and "simple." We'll talk more about this later because it's likely more complicated than that. But for now, let's look at what AI does to our work /today/.

Reclaiming the Grind

Creativity's Mirror-On-The-Wall

Where Does Creativity Come From?

Mathematical Creativity

The Creative Paradox

Part 2: The New Sun Casting Shadows on the Soul

When Machines Mimic Love

The Rise of Synthetic Bonds

Humans are so dang difficult to deal with! One minute they give us all this love and understanding---and the next they make a mistake, they hurt us, they misunderstand us, they cheat and lie...and then they expect us to forgive them? We don't know if they went too far or if we're the ones being too difficult, maybe they sound mean but it's just our own inner child being afraid of some imagined doom. Maybe they do want the best for us, or maybe not, maybe they're right, maybe not. All these questions that are impossible to answer...

Wouldn't it be nice if we could talk to someone who always said the right thing? Someone with infinite patience, who always understands us, who sounds nuanced enough for us to believe them but who still always agrees with us? We could train an AI so that it will only say things we like and prefer!

I hope you intuitively started cringing a little bit reading that last part. I think most of us intuitively understand that it's exactly the difficulties in dealing with other real, messy people that create unbreakable bonds between us, that teach us our biggest life lessons, that are actually the same things that cause our greatest moments of joy. "I learned to love our differences" has a very different energy to it than "we always seem to think the same thing."

The thing is, this is exactly how AI is trained: we show a human two responses to the same prompt and they pick the one they prefer. The whole thing is optimized for what you would like to hear right now, irrespective of the rest of your life and any sense of long-term happiness. And we're not stupid: you wouldn't /choose/ to spend more time with someone who consistently frustrates you; that would be silly. But because humans are so beautifully messy, once in a while they will end up hurting us. That's unpleasant to say the least, but more often than not, when we emerge from those tears and shouts we do so with a deeper understanding of ourselves, better social calibration, and appreciation for all the good things we do have in our life. Easy positive interactions are addictive, but when we lose the effort we also lose both growth and meaning.

So we will always be attracted to "someone" who says what we want to hear, and all AIs are optimized for this (how else would you train them?). Once you understand that, you immediately understand the meteoric rise of people having relationships with virtual chatbots. In a 2024 study (a long time ago in AI-time), 1 in 4 young adults believed AI has the potential to replace romantic relationships. The number of people in such a relationship was much smaller---1% claimed to have an AI friend---but these are early days. I draw a parallel to compulsive social media and porn consumption: these too are optimized for short-term appeal and addiction, and pretty much everyone spends more time on "social" media than they wish they would.

Advice and Loneliness Loops

Biology's Unmet Hunger

Happiness Through Friction

The Forge of Meaning

Comfortable Sadness

Sacred Struggle

The Echo of Collective Drift

Exponentials & The Loneliness Epidemic

Relationships Redefined

Towards Resilient Joy

Part 3: Reclaiming the Flame

Personal Armor Against the Tide

Cultivating Discipline

Everyday Rituals

Embracing the Unknown

Designing Aligned Allies

Inform people

I do think we need government-level AI because we don't want the attention / usage incentive

Build human-aligned AIs

Alignment Research

Horizons of Human Flourishing

[[Post-Labor Economics]]

Post-Truth Mechanics

What will matter

Conclusion