The Wrong Room
What a 40-year-old factory reveals about AI transformation
I keep finding myself in the wrong room. Not literally. But in almost every AI conversation, I watch it happen. The right titles gather. Decisions get made. And something is off: people debating curtain colors while the foundation shifts beneath them. A senior leader asked me recently: "What should our AI strategy be?" Reasonable question. Wrong one.
The Factory That Couldn't See Itself
Walk into an American factory in 1910. What you'd see was remarkable, though no one thought so. A massive steam engine in the basement. A vertical shaft extending upward through the building. On every floor, horizontal shafts along the ceilings, connected by leather belts to the machines below. This was the line shaft system. It powered the Industrial Revolution. It worked. It also dictated everything.
Machines couldn't follow workflow; they had to sit near the shafts. Buildings went up rather than out, because long horizontal shaft runs lost too much power to friction. Interiors stayed dark; the belts blocked light. They dripped oil. Exposed belting caused constant injuries. Some factories had more than a kilometer of shafting under one roof. The entire architecture of production was shaped by a constraint so pervasive it had become invisible. Then electricity arrived.
By 1900, electric motors were commercially available. Factory owners adopted them. Here's what they did: removed the steam engine, installed an electric motor in its place, kept everything else exactly the same. Same shafts. Same belts. Same layout. Same workflow. Cleaner power source. No transformation.
Manufacturers who made this substitution, historian Paul David noted, "were disappointed with the level of productivity gain." The problem wasn't electricity. It was imagination. They couldn't see what the technology made possible because they were still thinking in steam.
“The problem wasn’t electricity. It was imagination.”
The Insight That Took 20 Years
The breakthrough came gradually, then suddenly. Someone realized: attach a small electric motor directly to each machine, and you don't need shafts at all. No shafts meant no belts. No belts meant machines could go anywhere. And if machines could go anywhere, you could arrange them according to how work actually flows, not where power comes from. This was called "unit drive." Sounds simple. Revolutionary.
Factory floors could follow the natural sequence of production. Buildings no longer needed multiple stories. Overhead cranes became possible. Natural lighting improved. Machines could run only when needed. Equipment could fail without stopping the entire factory. Ford's River Rouge plant, completed in 1928, embodied the transformation: single-story buildings spanning 1.6 million square meters across 93 structures, organized entirely around material flow. Model T prices dropped from $850 to $260. Daily output climbed from hundreds of cars to thousands.
The technology had existed for decades. The resistance wasn't technical. It was conceptual. Paul David, the Stanford economic historian who documented this transition, identified a "productivity paradox": transformative technologies often show disappointing returns for decades before their potential is realized. His 1990 paper, "The Dynamo and the Computer," drew an explicit parallel to computerization, arguing we were making the same mistake: grafting new technology onto old organizational structures.
The pattern held. Total factor productivity in U.S. manufacturing grew at more than 5% annually between 1919 and 1929, after a three-decade "productivity pause." Electricity accounted for roughly half of all productivity growth during the 1920s. It took 40 years from Edison's first power station to the productivity surge. Forty years to stop thinking in steam.
The Room Where It Happens
I see this pattern repeating with AI. But with a twist. In the electrification story, the bottleneck was conceptual: factory owners couldn't imagine redesigning around invisible constraints. With AI, there's an additional problem. Decisions happen in the wrong rooms entirely.
Here's how it unfolds: Leadership gathers. "What's our AI strategy?" Too abstract. They delegate. IT receives the mandate. "What platforms? What security protocols? What's the integration pathway?" Legitimate questions. Wrong ones.
Meanwhile, three floors down, someone in operations just used ChatGPT to solve a bottleneck that's existed for years. They won't report it; it's not in anyone's KPIs. Leadership will never know. The decisions happen where expertise seems to live. The potential lives somewhere else.
“The decisions happen where expertise seems to live. The potential lives somewhere else.”
The Visible and the Invisible
This points to a structural asymmetry. Leadership optimizes for the visible: strategic initiatives, transformation programs, technology rollouts. Board presentations. Quarterly reviews. Workers navigate the invisible: the 47 clicks that should take 3. The workaround everyone uses but nobody documented. The meeting that exists because two systems don't talk. The report that takes 4 hours to produce and that nobody reads. Frontline workers know exactly where things break. They always have. What's changed is that they now have the tools to fix it.
The factory owners who kept the shafts weren't stupid. They were rationally protecting sunk costs, avoiding disruption. But they couldn't see what the people on the floor saw: the architecture itself was the constraint. With AI, we're repeating the pattern. Strategic questions in boardrooms. Technical questions in IT. But the knowledge of where AI could actually help (the intimate familiarity with friction, workarounds, wasted time) lives with people who aren't in either room.
The Delegation Trap
There's a second layer. Leadership delegates AI to IT. Understandable category error. AI involves technology. IT handles technology. Clean logic. But IT frames problems in IT language: security, integration, platform selection, access control. Success means "it works," not "it changed how we operate."
IT can't see what it isn't positioned to see. AI's value doesn't live in infrastructure. It lives in the combination of subject matter expertise, AI capability, and daily friction. That combination exists in operational contexts IT seldom touches. The strange outcome: the organization has an "AI initiative," but the people who might unlock real value aren't part of it. The official tools don't match their actual problems.
The factory owners of 1905 fell into the same trap. They asked engineers to implement electricity. Engineers optimized within existing constraints. Nobody asked the workers navigating those constraints whether the constraints themselves should change.
Where Should AI Actually Live?
A hypothesis that may provoke: AI shouldn't be owned by IT. It should be owned by Business Development and Communication. Business Development thinks in opportunity, not risk. "What could we become?" Not "what might break?" They're trained to see potential before it's proven, to work across silos, to translate between strategy and execution.
Communication professionals are experts in the skill AI demands most: articulation. Clear thinking made visible. Intention translated into language. They've been doing prompt engineering for decades. They just called it briefing, messaging, stakeholder communication. AI's primary interface is language. Quality of output tracks clarity of input. The people who should own AI are the people already paid to think precisely in words.
This isn't just about skills. It's about positioning. BizDev and Comms sit close enough to operations to understand context, and high enough in the organization to influence strategy. They're natural bridges, which is exactly what's needed when the challenge isn't deploying a tool but discovering what becomes possible once the people closest to the problems have new capabilities.
Power Redistribution
If AI belongs with BizDev and Comms rather than IT, the implications ripple outward. IT becomes infrastructure, not strategy. Essential, like the electricians who wired the new factories. But not the ones deciding how machines should be arranged. Leadership's role shifts from directing AI initiatives to enabling distributed problem-solving. Not "what should our AI strategy be?" but "how do we create conditions for the invisible to surface?" This is uncomfortable. Power redistributions always are.
The factory owners who finally embraced unit drive didn't just adopt a technology. They ceded control over factory layout, decisions that engineers and architects had owned for decades. They had to trust that organizing around workflow rather than power transmission would work. They had to accept ongoing transformation, not a one-time project. Power redistributions don't happen because they make sense. They happen because someone names them, then builds coalitions around a different way of seeing.
What We're Missing
Here's what unsettles me. Most companies are not AI companies. They're logistics firms, agencies, healthcare providers, professional services, retailers. AI isn't their product. Most likely never will be. Asking them "What's your AI strategy?" is like asking a bakery "What's your electricity strategy?" The bakery doesn't have an electricity strategy. It has ovens to run and bread to make. Electricity is just how the ovens work now.
For most organizations, the opportunity isn't in AI strategy. It's in noticing where the shafts are. What constraints have become so familiar they're invisible? What workarounds have we normalized? What inefficiencies persist because the people who see them don't have permission, or tools, to fix them? AI didn't create these questions. It makes them urgent. The tools now exist for distributed problem-solving at a scale we've never had. The question is whether organizations unlock that potential, or keep the shafts while swapping the motor.
“Asking them ‘What’s your AI strategy?’ is like asking a bakery ‘What’s your electricity strategy?’”
The Timeline Problem
Paul David's research suggests it took 40 years for factories to realize the potential of electrification. Forty years from Edison's Pearl Street Station in 1882 to the productivity surge of the 1920s. We don't have that long.
AI development moves faster. The gap between early adopters and laggards widens faster. Organizations that move AI decisions to the right rooms, and redesign workflows rather than just adding tools, will compound advantages year over year. Their competitors will still be optimizing the shafts.
But here's what the historical parallel suggests: transformation won't come from better technology. It will come from better imagination.
The factories that thrived weren't the ones with the best motors. They were the ones that asked: If we could design this from scratch, knowing what electricity makes possible, what would we build?
The organizations that thrive with AI will ask the equivalent question, and have the courage to answer it.
“The question isn’t what your AI strategy should be. The question is whether you’re in the right room to find out.”
My Attempt
I don't know if the BizDev + Comms hypothesis is right. It's as much provocation as prescription.
But this I'm confident about: the rooms where AI decisions happen today are not the rooms where AI value will be created. Leadership can't see the invisible friction. IT can't see the operational context. The people who have both aren't being asked.
In the era of electrification, that mismatch meant decades of unrealized potential. Today, it means something more urgent: a widening gap between organizations that reimagine how work flows and organizations that simply replace the motor.
The question isn't what your AI strategy should be. The question is whether you're in the right room to find out.
The historical research in this piece draws primarily on Paul David's "The Dynamo and the Computer" (American Economic Review, 1990), Warren Devine's "From Shafts to Wires" (Journal of Economic History, 1983), and subsequent work by David and Gavin Wright on the productivity paradox of electrification.