Glossary
A working vocabulary
The way we talk about AI shapes how we use it. These terms come from Suon’s work with teams navigating AI adoption. They name the patterns that matter most: where judgment holds, where it slips, and what organizations can do about it.
The process of integrating AI into how an organization works, decides, and collaborates. AI adoption is not a technology project. It is a change in how people think and exercise judgment. Organizations that treat it as a rollout rather than a transition gain efficiency while quietly losing capability.
Acute cognitive overload from excessive use of, interaction with, or oversight of AI tools beyond a person’s cognitive capacity. The term emerged from a 2026 BCG and UC Riverside study of nearly 1,500 workers, published in Harvard Business Review. The counterintuitive finding: using AI to automate routine tasks reduced burnout, but the work of overseeing AI increased cognitive strain, with 39% more major errors among those affected. AI brain fry is distinct from general AI fatigue, the broader weariness of keeping pace with constant AI change. Brain fry is acute. It comes from the vigilance required to evaluate outputs, switch between tools, and catch what the model gets wrong. The machine does the work. The human absorbs the cognitive cost of making sure it did it right.
A pattern of reliance on AI that gradually degrades human judgment and independent problem-solving. AI dependency rarely looks like a crisis. It looks like convenience. People stop checking outputs. They stop forming their own views before consulting the model. Over time, the skills they once brought to the work atrophy, not because they chose to abandon them, but because nothing in their workflow required using them.
Deliberate spaces in an organization where AI is excluded because the struggle itself builds capability. Not every task should be augmented. Some decisions are worth protecting precisely because the difficulty of making them develops judgment. AI-free zones are a governance tool: they draw a line between tasks where AI adds value and tasks where removing friction removes the learning.
Literacy is knowing what AI can do. Fluency is knowing when it should. Most training programs stop at literacy: tool features, prompting techniques, use cases. Fluency goes further. It is the judgment to recognize when AI helps and when it substitutes for your own thinking. A literate user can operate the tool. A fluent practitioner knows when to put it down.
The ability to use AI effectively while maintaining and developing independent judgment. AI mastery is not about becoming a power user. It is about knowing when the tool serves you and when it substitutes for you. Mastery means your thinking gets sharper with AI, not thinner. It requires deliberate practice, self-awareness, and a willingness to do things the slow way when the slow way is the right way.
Low-quality content produced in volume by generative AI and distributed without meaningful human oversight. The term entered common usage in mid-2024, when the technologist @deepfates observed that “slop” was becoming to unwanted AI content what “spam” became to unwanted email. The New York Times helped establish it in mainstream discourse shortly after. Merriam-Webster named it the 2025 Word of the Year. Researchers identify three hallmarks: superficial competence (it looks skilled but lacks substance), asymmetric effort (it costs almost nothing to produce), and mass producibility (it is designed for broad distribution). Slop matters for organizations because it degrades the information environment everyone depends on. When AI-generated material floods search results, social feeds, and internal knowledge bases, the cost of verifying what is trustworthy rises for everyone.
Deliberate practice in sustaining focus against the fragmentation that AI workflows create. AI tools constantly interrupt the thinking process: prompting, evaluating, refining, re-prompting. Each interruption is small. Across a workday, they erode the deep concentration that complex judgment requires. Attention training draws on contemplative traditions, from Buddhist meditation to monastic stabilitas, that have cultivated sustained focus for centuries. Applied to AI work, it means staying with a problem long enough for your own thinking to emerge before reaching for the model.
The suspicion that accepting help from AI diminishes the value of what you produce. The feeling is familiar to anyone who has wondered whether their work is still truly theirs after a model contributed to it. It predates AI by centuries. Montaigne built his entire method on other people’s ideas and never considered it a failure of originality. AI did not create this anxiety. It made it harder to avoid.
The brain’s evolved tendency to minimize mental effort, which AI amplifies by making intellectual shortcuts feel costless. Humans instinctively gravitate toward the easiest path, especially when the easier path promises a similar outcome. AI supercharges this. When a model can produce a passable draft in seconds, the effort required to think independently feels irrational. The result is not laziness. It is biology meeting infrastructure. Recognizing the cognitive miser in yourself is what makes deliberate effort possible.
Delegating thinking tasks to AI in ways that erode the thinker’s own capability over time. Not all delegation is offloading. Using AI to handle routine formatting is sensible. Using AI to form your argument, draft your strategy, or evaluate your options is a different matter. The distinction is whether you are saving time on tasks you have already mastered or skipping the thinking that builds mastery in the first place.
Accepting AI-generated answers without evaluating them, even when they contradict your own judgment. The term comes from a 2026 Wharton study by Shaw and Nave, who found that participants followed incorrect ChatGPT advice roughly 80% of the time. Cognitive surrender is what happens when the cognitive miser meets a tool that makes thinking feel optional. It is not stupidity. It is the path of least resistance, made frictionless. The disposition (cognitive miser) drives the behaviour (cognitive offloading), and the outcome is surrender: judgment deferred so completely that the quality of the AI’s answer stops mattering.
The growing mismatch between the power of AI tools and the human development required to use them wisely. AI alignment is the technical problem. The cultivation gap is the human one. Organizations invest heavily in AI infrastructure while investing almost nothing in the judgment, self-awareness, and collaborative capacity of the people using it. The result: increasingly powerful tools operated by people who are not growing at the same rate. Closing the cultivation gap means treating human development as seriously as model development.
The gradual expansion of tasks handed to AI, from the routine to the consequential, without a conscious decision. Delegation creep starts sensibly. You let AI handle email drafts, then meeting summaries, then first passes on strategy documents. Each step feels like a small efficiency gain. But the cumulative effect is a shift in who is actually doing the thinking. By the time you notice, the boundary between “AI assists me” and “AI works while I supervise” has moved further than you intended.
A challenge that feels inefficient in the moment but builds lasting capability. Cognitive psychologists Robert and Elizabeth Bjork showed that making learning harder in the short term leads to stronger retention and transfer. AI inverts this principle. It removes friction from nearly every knowledge task, which feels productive but quietly undermines the struggle that builds expertise. In AI work, desirable difficulty means deliberately choosing the harder path when the task is one that develops your judgment: drafting before prompting, evaluating before accepting, thinking before delegating.
The status of being recognized as a reliable and trustworthy source of knowledge on a given subject. The concept comes from epistemology. Philosopher Linda Zagzebski argued in her foundational 2012 work that much of what we know depends on trusting the judgment of others, which makes the question of whom to trust a central intellectual problem. AI sharpens this problem considerably. When any system can produce polished, apparently expert content at near-zero cost, the surface signals of credibility lose their filtering power. What remains difficult to manufacture: a track record of being right and honest about being wrong, demonstrated reasoning rather than borrowed conclusions, and accountability for what you claim. Epistemic authority is what separates signal from noise in an information-saturated environment. It is built through demonstrated expertise, intellectual honesty, and sustained integrity. These are the qualities that cannot be automated, faked at scale, or commoditized.
The gap between what you know you want and your ability to articulate it to an AI system. Working with AI forces a new kind of self-knowledge. You must make explicit what has always been implicit: your standards, your reasoning, your sense of what “good” looks like. Many people discover, often uncomfortably, that they cannot clearly express goals they have pursued for years. The gulf of envisioning is not a prompting problem. It is a thinking problem that AI makes visible for the first time.
An approach to AI adoption that puts human capability, judgment, and wellbeing ahead of speed or output. Humanising AI does not mean making AI more human. It means making AI adoption more humane. It asks: what happens to the people using these tools? Are they growing or shrinking? Are they making better decisions or just faster ones? Organizations that humanise their AI adoption protect the capacities that no model can replace.
The ability to think about your own thinking, and a defining skill in AI work. Using AI effectively requires you to articulate goals you may never have stated, analyze processes you usually run on instinct, and evaluate results against standards you have never made explicit. People with strong metacognitive skills get better AI outputs because they can specify what they actually want. More importantly, they notice when AI is subtly reshaping their thinking. Without metacognition, you cannot tell the difference between AI enhancing your judgment and AI replacing it.
Keats’s concept of negative capability: the capacity to remain in uncertainties without reaching for premature resolution. The most mature relationship with AI holds contradictory truths (it helps AND it erodes) without collapsing them.
A judgment-free zone where experimentation becomes possible because no one is evaluating. The psychoanalyst Donald Winnicott described potential space as the condition for play and creativity: a child plays freely when no one is watching. AI creates something like this, almost by accident. It is one-to-one and private, with no reputation at stake. Once the fear of judgment dissolves, ideas that stayed abstract for months begin to take shape. The creativity that emerges is not the model’s. It is what surfaces when a person starts experimenting without fear of failure.
A cycle of increasingly specific AI prompts that produces diminishing returns while draining time and attention. The prompting spiral begins when an AI output is close but not quite right. You refine the prompt. The next output is different but still not right. You refine again. Each iteration feels like progress because the output keeps changing. But the underlying problem is rarely the prompt. It is that you have not clarified what you want independently of the model. The spiral breaks when you stop prompting and start thinking.
Illich’s concept of radical monopoly: when a tool becomes so dominant that alternatives are no longer merely unused but unimaginable. AI is approaching this threshold in knowledge work.
Unofficial, unmanaged AI use by individuals and teams across an organization. Most companies discover more of it than they expect: people using AI tools in ways that were never formally adopted, often with sensitive data. Shadow AI is not a problem to eliminate. It is information about where your people already see value, and where risk is accumulating without oversight.
The compulsive pursuit of the perfect AI output, driven by the illusion that one more refinement will finally produce it. Named after the Greek figure condemned to reach for fruit that always recedes. AI systems generate slightly different responses each time, so it feels like the ideal result is always one prompt away. The Tantalus effect explains the peculiar exhaustion that follows long AI sessions: you have been working hard, but not on your own thinking. You have been chasing a moving target set by the model.
An extension of Kahneman’s dual-process theory that adds artificial cognition as a third system alongside intuition (System 1) and deliberation (System 2). Proposed by Wharton researchers Nave and Shaw (2026), the model argues that AI does not simply augment one of the existing systems. It introduces a qualitatively different mode of thinking: one that is fast like intuition but structured like deliberation, and that changes how people engage both of the others. The implication for organizations is that AI does not slot neatly into existing cognitive workflows. It reshapes them. Understanding this helps explain why people default to AI outputs even when their own judgment would serve them better.
The practice of knowing when to rely on AI output and when to override it. Trust calibration is a skill, not a setting. It requires understanding what the model is good at, where it tends to fail, and how your own biases interact with its outputs. People who overtrust AI accept errors they would have caught. People who undertrust it waste effort redoing reliable work. Calibration sits in the middle: a learned, context-sensitive judgment call.
The phenomenon where work becomes harder not because the hours get longer, but because each hour gets denser. More tasks, more decisions, more cognitive demands packed into the same time. The concept has been studied in labor research for decades (known in German as Arbeitsverdichtung) and consistently ranks among the most reported workplace stressors. AI accelerates it in a specific way: by automating routine tasks, it removes the low-demand work that once provided natural pacing and cognitive recovery. The time saved is not returned to the worker. It is filled with more high-demand work. Organizations celebrate the efficiency gain. The people inside them absorb the compression.
The organizational pattern where AI decisions are made in rooms that lack the knowledge to make them well. Boardrooms and IT departments choose tools, set policies, and define workflows. The practitioners who understand the actual work, and where AI could genuinely help or harm it, sit three floors down and are rarely consulted. The wrong room problem explains why many AI investments underperform: the people with decision-making authority and the people with domain expertise are not in the same conversation. The fix is changing who is in the room, not what technology is on the table.