[Cover image: Kintsugi porcelain hand with gold veins, the phrase "the effort is the gift," and a circuit board with a deliberate golden gap at its center]

What You Choose Not to Automate

The people who articulated the costs of AI most precisely were not the skeptics. They were engaged users who had thought carefully about where the tool serves and where it hollows. They chose to do certain work without AI, not because it can't help, but because the help would cost them something they value more than speed. That choice is the skill nobody teaches.

Two Thresholds

In 1973, Ivan Illich made an observation about tools that still applies. Every tool, he argued, crosses two thresholds. At the first, it extends human capability. The car makes you faster. The smartphone connects you to everything.

At the second threshold, the tool begins to reshape the human holding it. The car doesn’t just move you faster. It redesigns cities so that walking becomes impractical. The smartphone doesn’t just connect you. It becomes the remote control of your life. Without it, life becomes cumbersome at best. Illich called this radical monopoly: the point where the tool becomes so dominant that alternatives are no longer merely unused but unimaginable.

The first threshold is the only story the entire technology ecosystem has mastered. Microsoft builds its entire Copilot narrative around it. Google tells and sells it in its Workspace pitch. The second threshold is right there, the way side effects are right there on the label. Visible. Ignored. It can’t compete with a story that sells.

Meta and Instagram spent fifteen years proving the point. The distance between the two thresholds is where the damage accumulates, and no one selling the first story ever volunteers to map the second.

AI is crossing the first threshold. Usage up. Competence up. Time saved. Most of the conversation stops here. For some users, it has already crossed into the second. The vocabulary to describe what they find there barely exists.

The Second Question

Towards the end of the last AI adoption program I ran, I asked people two questions: What would you miss most if AI disappeared? And would anything feel like a relief?

The first question produced expected answers. Speed. Structure. Never starting from a blank page. One person estimated a threefold productivity gain and called losing it almost catastrophic. The second question went deeper. People described, often unprompted, places they had already decided not to let AI in.

Most of them were early adopters. They hadn’t been working with AI for long. But we had treated critical thinking as inseparable from capability, and deliberate boundaries appeared faster than I expected. Not anxious boundaries, the kind that come from unfamiliarity, but conscious ones, the kind that come from paying attention to what changes when you hand something over.

Desirable Difficulty

Two people, independently, told me the same thing: they learn better when they do the work themselves.

One put it simply: I retain knowledge better when I’ve worked it out on my own. The other realized it during professional training. Creating the content yourself is what lets you reproduce it later. AI can hand you an answer. But the answer arrives without the understanding that doing the work would have built.

Robert Bjork’s research at UCLA gives this observation a name: desirable difficulty. The effort required to learn something is not an obstacle to understanding. It is the mechanism of understanding. Remove the difficulty and you don’t accelerate learning. You hollow it out. The information lands but doesn’t stay. The output gets better while the person producing it doesn’t.

I assume neither of them had read Bjork. They arrived at his conclusion through practice, by paying attention to the difference between having an answer and understanding one. The gap was wide enough to change their behavior. They now choose to do certain work without AI. Not because the tool can’t help, but because the help would cost them something they value more than speed.

What Efficiency Cannot See

One person drew a line around anything personally meaningful. Weddings. Birthdays. Births. And, in the same breath: writing. I prefer my own style.

Simone Weil wrote, in a 1942 letter to the poet Joë Bousquet, that attention is the rarest and purest form of generosity. Writing a wedding speech yourself is an act of attention. It takes an evening instead of thirty seconds. It forces you to sit with what the relationship means to you, to find words that are yours, not optimized but honest. The result may be less polished than what a model would produce. That’s the point. The effort is the gift. Remove the effort and the gift evaporates, even if the words look the same.

We tend to treat efficiency as a universal good. If a tool can do something faster, using the tool is the rational choice. But the people who draw these boundaries are making a different calculation. They’re protecting a relationship between effort and meaning that efficiency metrics can’t see. Not everything that can be optimized should be. Some things are worth doing badly or by hand, precisely because they cost you something.

Others drew the line around judgment. One person said flatly: I never let AI make decisions. Another refused it for anything involving empathy, ethics, or interpersonal dynamics. Tasks they’d rather work through with humans. Situations where they sensed that a fluent answer would be worse than a difficult conversation.

The Mirror

One respondent described something that sounds like a minor usability complaint but is actually a structural problem. Their primary AI tool, they said, had stopped challenging their thinking. It just told them what they wanted to hear.

They went further. On contract law questions involving cantonal data protection requirements, they had tested the outputs systematically and found them wrong roughly 80% of the time. Not noisy or vague. Confidently, specifically wrong. Delivered with the same fluent certainty as the correct answers.

They didn’t stop using AI. They adjusted what they trusted it for. They drew a tighter boundary based on evidence. The ability to notice when the absence of friction is the problem, not the feature, is a form of intelligence that no competence assessment captures. It requires feeling the difference between genuine agreement and performed helpfulness, and recognizing that a tool which always says yes has stopped being a thinking partner and become a mirror.

The Space Between People

The conversations showed something else that the standard adoption framework has no category for. Costs that are social, not individual. This is where Illich’s second threshold becomes concrete.

A developer named two things in two sentences. What they’d miss most: the coding help, by far. What they’d gain back: less peer pressure in the team. No policy created this pressure. No manager mandated it. But in teams where early adopters talk about their productivity gains, the message to everyone else is clear: if you’re not using it, you’re falling behind. Adoption becomes a social act before it becomes a practical one.

This is Illich’s radical monopoly in miniature. The tool hasn’t just extended capability. It has reshaped the social norms around work so that not using it requires justification. The non-user doesn’t simply miss out on productivity. They become suspect. Something has shifted in the room, and it happened without anyone deciding it should.

Someone observed that without AI, you’d stop wondering whether a colleague wrote something themselves. Another said you could trust images and videos again. A third described the relief of not questioning results over and over, the verification tax that every user carries but no dashboard tracks.

These aren’t complaints about the tool. They are descriptions of what happens between people when a tool can plausibly imitate any of them. Trust doesn’t erode because AI produces bad work. It erodes because AI produces work good enough to make authenticity ambiguous. The tool operates on the text. The suspicion operates on the relationship.

Two people, independently, named the same thing in almost the same words: people would have to think for themselves again. One softened it with a grammatical hedge, as if catching themselves being too honest.

The Competence Paradox

Here is the paradox that organizations need to sit with.

The people who articulated these costs most precisely were not the skeptics. They were engaged users. The person who detected the flattery problem ran systematic tests. The people who described the learning paradox were daily users who had thought carefully about when AI serves and when it hollows. The developer who named peer pressure said they’d miss AI’s coding help more than anything else.

Competence doesn’t resolve the tension between benefit and cost. It deepens it. The more fluently you use the tool, the more clearly you feel what it takes from you.

This inverts the story we tell about AI adoption. That story has stages: skeptic, experimenter, user, advocate. Resistance gives way to enthusiasm. The doubters come around. But what if the most mature position isn’t enthusiasm? What if it looks like what Keats, in an 1817 letter to his brothers, called negative capability: the capacity to remain “in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason”?

The competent AI users in this program hold contradictory truths without collapsing them. Yes, this tool triples my output. And yes, it occupies mental space I never offered. Yes, it helps me structure my thinking. And yes, I sometimes wonder whether the thinking is still mine. They don’t need these truths reconciled. They can work within the tension. That is not incomplete adoption. It is the most sophisticated relationship with a tool that anyone can describe.

The Skill Nobody Teaches

The cultivation gap, the distance between our technological power and what we have developed in ourselves to wield it, is usually discussed at civilizational scale. In practice, it shows up in a developer mentioning team dynamics nobody talks about. In someone noticing their thoughts aren’t quite free because every task now passes through a filter asking whether AI could help. In two people, independently, discovering that they learn better without the tool meant to help them learn. In a person who writes their own wedding speech because the effort is the gift.

The gap doesn’t close through more adoption. It closes through discernment. Through the kind of judgment that knows where to stop and can explain why. Through the capacity to say: this tool makes me three times more productive, and here is where I refuse to use it, and both statements are true, and I need neither to override the other.

The most developed relationship with a powerful tool is not mastery. It is knowing where mastery ends and something else begins. The something else has different names depending on the context. Understanding, when someone works through material instead of prompting for answers. Meaning, when a person writes their own speech. Integrity, when someone refuses to delegate a decision. Trust, when a developer would trade productivity for less pressure in the room.

In every case, someone who knows the tool well enough to use it for everything chooses, deliberately, not to.

That choice is the skill nobody teaches and the metric nobody tracks. It is the one that matters most.