Triptych: a warm brown ink drop blooming into raw linen paper on the left; the phrase Between knowing and saying set in serif italics in the center; an unfinished clay maquette of a human hand resting on a pale wooden workbench on the right.

Your Words Are the Interface

The interface with computers shifted from buttons to commands to code to natural language. For the first time, the barrier is not technical. It is communicative. What separates competent AI users from struggling ones is not better frameworks but the capacity to say what they mean. That capacity depends on self-knowledge, and no prompting technique can substitute for it.

The Interface Shift

Wittgenstein wrote that "the limits of my language mean the limits of my world." He was describing the boundary conditions of thought, not computers. But his proposition from 1921 turns out to describe exactly where we stand.

For most of computing history, the barrier between you and the machine was technical. You needed to know which buttons to press. Which commands to type. Which code to write. Each transition changed what “being good with computers” meant. Each one selected for a narrower skill: mechanical at first, then syntactic, then logical. And each time, the people who couldn’t cross the barrier knew exactly what stopped them. The obstacle had a name. It was always some form of technical knowledge they lacked.

Then generative AI arrived, and the interface changed in a way none of the previous transitions had prepared anyone for. The primary way of working with a computer became natural language. You type what you want in the words you would use with a colleague. No syntax errors. No compiler to catch your mistakes before they run. No manual to consult when you get stuck. The barrier didn’t disappear. It moved somewhere harder to see.

Previous interfaces were constrained. They had rules, and following those rules was the entire skill. Natural language has no such constraint. It has only your ability to say what you see, want, and mean. The question “are you good with computers” used to be a question about the machine. Now it is a question about you.

A former workshop participant put this plainly. She had been watching colleagues struggle with AI. The ones who got stuck weren’t lacking technical knowledge. Most of them had more than enough. What they lacked was the ability to articulate what they actually wanted. They could feel the shape of a good result. They could recognize it once it appeared. But the step between knowing and saying, between an intuition and a sentence, was where everything broke down.

She herself works successfully with structured prompting frameworks; she is not making a case against them. Her observation is about what sits beneath them: without the capacity to say what you mean, no framework helps. The structure gives you a container. But if you cannot fill it with precise language, the container is empty scaffolding. The prerequisite is not a better template. It is a clearer relationship with your own thinking.

This turns out to be harder than it sounds. Most of what we know lives below the threshold of language. We act on judgment we have never articulated. We hold standards we have never defined in words. We make decisions based on pattern recognition we cannot explain. None of this was a problem when the interface was a button, a command, or a line of code. Those interfaces didn’t need you to explain yourself. They needed you to follow their rules.

Now the interface asks what you actually mean. And for the first time in computing, what separates competent users from struggling ones is not technical fluency. It is self-knowledge. How well do you know what you think? How precisely can you name what you notice? Wittgenstein’s boundary still holds. What cannot be said cannot be thought clearly. What cannot be thought clearly cannot be communicated to anything, human or machine.

The bottleneck moved. It used to sit between you and the machine’s language. Now it sits between you and your own.

The Three Layers

Most people interact with AI in one of three ways. Not better or worse. Deeper or shallower. The difference matters because each layer draws on a different part of you.

The first is unstructured. You open a chat, type a question, take whatever comes back. The AI’s solution space is enormous, and nothing in your input narrows it. The output reads the way it does because the input gave the model almost nothing to work with. Generic in, generic out. This is not a criticism. For quick lookups or idle curiosity, it works fine. But when the task carries weight, when you need the output to reflect your judgment, the approach breaks down. You have given the machine no pattern to follow except the broadest one available.

The second layer is structured prompting. You define a role for the AI, state the task, provide context, specify constraints. Frameworks exist for this: RISEN, CO-STAR, dozens of variations. They work. They narrow the solution space considerably. Someone using a structured prompt will reliably get better results than someone typing a bare question, because the structure forces a degree of articulation that raw queries skip entirely.

A 2026 study by Liu and Yao in the International Journal of Human-Computer Interaction examined this layer closely. Across 211 participants, they found that prompting style was predicted less by technical skill than by how people mentally represented the AI itself. Users who viewed AI as a collaborative partner overwhelmingly favored co-creative prompting: 90% of them wrote prompts that invited exchange. Users who viewed AI as a tool tended toward directive instructions, with only 46% moving beyond rigid, one-shot commands. The mental model shaped the communication, not the reverse.

This is worth sitting with. The research suggests that structured prompting is not purely a technique. It is partly an expression of how you relate to the thing you are talking to. People who see a partner write differently than people who see a vending machine. The structure helps. But it remains mechanical. It organizes communication without deepening it. You can follow every framework perfectly and still produce output that sounds like anyone.

The third layer has no established name. Call it situational awareness. Instead of filling in a template, you describe the situation you are actually in. What you are trying to achieve and why it matters to you. What your intuitions are telling you. Where you feel uncertain. What kind of impact you want this work to have. The input becomes essayistic rather than formulaic. It draws on the full spectrum of your perception, not just the categories a framework provides.

This works for a specific reason. Generative AI is a pattern recognition and imitation engine. It does not understand your intent. It approximates it from the patterns you give it. A structured prompt gives it a clean but impersonal pattern. A situated description, one that carries your particular way of seeing, gives the model something closer to you. The richer and more honest the input, the more the output stays in the vicinity of what you actually meant. You are not instructing the machine more precisely. You are giving it a truer pattern to work from.

The gap between Layer 2 and Layer 3 is not a gap in technique. It is a gap in self-knowledge. Structured prompting asks you to organize information. Situational awareness asks you to know what you think, feel, and want clearly enough to put it into language. One requires method. The other requires the thing no method can substitute for: a relationship with your own mind that is honest enough to be useful.

Three Layers of AI Engagement: What Each Layer Asks of You

Layer 1: Unstructured. Type a question, take the answer.
What it asks of you: Nothing. No direction given. The AI's solution space is wide open. Output is generic because input is generic.

Layer 2: Structured Prompting. Role, task, context, constraints.
What it asks of you: Method. Frameworks organize your communication and narrow the solution space. Better results, but still mechanical. You can follow every template perfectly and still sound like anyone.

Layer 3: Situational Awareness. The full spectrum of perception.
What it asks of you: Self-knowledge. Describe your actual situation: what you are trying to achieve, how you feel about it, what your intuitions say. The AI works with your pattern, not a template. The richer the input, the truer the output.
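The contrast between the layers can be made concrete as plain prompt text. The sketch below is purely illustrative: the task, the wording, and the `information_given` helper are invented for this example, and the Layer 2 prompt only imitates the role/task/context/constraints style of frameworks like RISEN or CO-STAR rather than quoting any of them.

```python
# Illustrative only: three ways of asking for the same piece of writing.

# Layer 1: a bare question. Nothing narrows the solution space.
layer1 = "Write an announcement for our new feature."

# Layer 2: a structured prompt in the role/task/context/constraints style.
layer2 = """\
Role: You are a senior product marketer.
Task: Write a 150-word announcement for our new export feature.
Context: Our users are data analysts at mid-size companies.
Constraints: Plain language, no superlatives, one concrete example.
"""

# Layer 3: a situated description. Instead of filling slots, it says
# what the writer actually sees, wants, and feels uncertain about.
layer3 = """\
We are announcing an export feature our analysts have asked about for a
year. I want it to sound relieved rather than triumphant, because we
shipped late and they know it. My instinct is to lead with the workflow
it unblocks, not the feature itself, but I am unsure whether that reads
as evasive. Around 150 words, plain language.
"""

def information_given(prompt: str) -> int:
    """Crude proxy for how much a prompt narrows the solution space:
    the number of distinct words it supplies to the model."""
    return len(set(prompt.lower().split()))

for name, prompt in [("layer1", layer1), ("layer2", layer2), ("layer3", layer3)]:
    print(name, information_given(prompt))
```

The word-count proxy is deliberately crude; the point is only that each layer hands the model strictly more of the writer's particular situation to pattern-match against.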

The Dialectic

Something happens when the AI responds that the input layers cannot account for. Whatever you put in, however precise or situated your description, the output is never a mirror of what you said. It is not what you meant, and it is not what the machine invented from nothing. It is somewhere else entirely.

The model diverges and converges in the same move. It fans out across a vast solution space and collapses into a single articulation, all at once. The result pushes you through a tunnel. You land somewhere you didn’t walk to. The output may sit at a higher level than where your thinking started. It may reframe what you were trying to say in terms you hadn’t considered. It may organize your own scattered intuitions into a shape you recognize as true but could not have built yourself. And you arrived there without crossing any of the ground between your starting point and where you now stand.

Hegel had a word for this movement. In the Science of Logic, he described Aufhebung, a term whose very meaning carries its own contradiction. The standard English translation is "sublation," which preserves the strangeness:

“To sublate has a twofold meaning in the language: on the one hand it means to preserve, to maintain, and equally it also means to cause to cease, to put an end to. Thus what is sublated is at the same time preserved; it has only lost its immediacy but is not on that account annihilated.”

This is what the AI output does to your original intention. It cancels what you said, because the output is not your words. It preserves what you meant, because the output would not exist without your input. And it elevates both into something new: your intention and the model's pattern recognition meet, and what emerges belongs to neither alone. The immediacy of your first articulation is gone. But the substance has not been annihilated. It has been transformed.

The synthesis, though, does not complete itself. The output lands in front of you, and now you have to do something with it. You have to decide what survived the transformation and what got lost. What the model sharpened and what it quietly replaced. This is where the human completes the dialectic. And the way you complete it is not purely analytical. Sometimes you feel whether the output is right before you can explain why. You read a sentence the model produced and something in you says yes, that or no, not quite, and the judgment arrives whole, ahead of reasons. The completion is also perceptual.

There is a further possibility worth holding carefully. If Wittgenstein was right that the limits of language are the limits of the world, then this encounter could do something beyond producing a single better output. You meet articulations you did not have before. Phrasings that name something you knew but had never put into words. If you engage with them actively, if you sit with why a particular formulation works and make the underlying move yours, your repertoire grows. Your language expands, and with it, the boundary of what you can think and say next time. Your ceiling rises.

But this is a hope, not a certainty. The opposite is equally plausible. Passive consumption of richer language can create the illusion of expanded capacity while the actual muscle goes soft. You start recognizing better formulations when you see them. You can tell when something is well said. But you can no longer produce it yourself. The gap between taste and ability widens. The encounter becomes consumption, not development. Whether the dialectic expands you or hollows you depends on something no framework can guarantee: whether you do the work of making what you receive your own, or whether you simply take it and move on.

This is an honest tension, not a problem with a prescription. It sits at the center of every AI interaction, and resolving it prematurely in either direction would be a lie. What matters next is what the completion actually asks of you.

Resonance

Something happens when you read the output. Before analysis, before checking facts or comparing against your brief, something responds. A sense of fit or friction that arrives faster than reasons. Most people skip past it. They move straight to logical assessment: Is this accurate? Does it match the requirements? Did the model follow the constraints? That works. It is a reliable way to evaluate output, and for many tasks it is sufficient.

But there is another mode most people never consciously engage. It is worth trying, because it may open something that logic alone cannot reach.

Eugene Gendlin, the philosopher and psychotherapist, gave this experience a name in 1978. He called it a felt sense.

“A felt sense is not a mental experience but a physical one. Physical. A bodily awareness of a situation or person or event.”

You read what the AI produced and your body registers something before your mind has finished parsing. A tightness, or an ease. A sense that the words landed close to what you meant, or that they slid past it. The felt sense is not vague. It is precise in a way that precedes language. Gendlin observed that it always contains more than you can say about it. You can circle it with words, and something remains that the words do not capture. But the circling itself is productive. Each attempt to articulate what the felt sense holds brings more of it into view.

This is what happens in practice when you work well with AI. You prompt. You read. Something responds beneath the level of analysis. You sense what is off. You re-articulate. The AI responds again, and the cycle continues.

Gendlin built an entire practice around this cycle. He called it Focusing: attending to the felt sense, letting it speak, finding the words that make it shift. The iterative loop of working with AI is structurally the same movement. Prompt, receive, sense, re-articulate. The resemblance is not a metaphor. It is the same human capacity, applied to a new partner.

Merleau-Ponty saw something similar from a different angle. In 1945, he argued that speech does not transmit a pre-formed thought. Speech accomplishes the thought. The act of saying is the act of thinking, not a report filed after thinking has concluded.

“The spoken word is a genuine gesture, and it contains its meaning in the same way as the gesture contains it.”

When you re-articulate what was off in the AI’s output, you are not delivering a correction you already had in mind. You are thinking. The sentence you write to redirect the model is a new thought that did not exist before you wrote it. Each iteration through the loop is a genuine act of cognition, not a refinement of something you already knew but a discovery of something you did not know until you said it.

This is why the quality of AI collaboration has so little to do with prompting technique and so much to do with the capacity to notice. The felt sense is the instrument. It registers what logic has not yet reached. People who work with AI fluently are not people who write better prompts. They are people who can feel the gap between what they meant and what appeared, and who can turn that feeling into language precise enough to close it.

Not everyone will find this mode natural. Some will read this and recognize something they already do without naming it. Others may try it and find that logical assessment remains their primary instrument. Both are legitimate. But the capacity is there, and in my own practice it has become the thing that matters most. It is worth one honest experiment.

The Implication

If the real skill is perceiving your situation richly enough to describe it on the way in, and sensing whether the output fits on the way out, then AI fluency depends on self-knowledge more than technical knowledge.

The AI training industry’s tools and frameworks are not wrong. Structured prompting works. Layer 2 produces reliably better results than Layer 1. But it has a ceiling. The ceiling is you. Your capacity to perceive, articulate, and sense is the upper bound of what any interaction with AI can produce. No framework can exceed the awareness of the person inside it.

Hubert Dreyfus, writing with Stuart Dreyfus in 1986, described this boundary through their model of skill acquisition. The model traces five stages: novice, advanced beginner, competent, proficient, expert. The novice follows rules. The expert perceives and responds without deliberation. The progression from one to the other cannot be accelerated by teaching more rules. It requires developing perceptual capacity through accumulated practice.

“When things are proceeding normally, experts don’t solve problems and don’t make decisions; they do what normally works.”

The expert does not apply a framework. The expert sees the situation and acts. What separates the expert from the competent practitioner is not more knowledge but a different kind of knowing: one that has become so integrated it no longer requires conscious mediation. This is the skill that sits beneath AI fluency. It cannot be taught through workshops or prompt libraries. It develops through the patient, repetitive work of paying attention to your own thinking, noticing what you notice, and learning to say what you see.

The industry is building better scaffolding for Layer 2. The scaffolding helps. But it cannot build the perceiver standing inside.

The skill that matters most is the one you can only develop in yourself, and it was always there, waiting to be needed.