The Smartest Person in the Room Is Slowing Down
One of my best friends is one of the most brilliant engineers I’ve ever met. Works at a major tech company. The kind of person who can run distributed systems architecture in his head in real time — race conditions, failure modes, scaling bottlenecks — all before anyone else in the room has finished reading the ticket. His in-head calculations are genuinely extraordinary. I’ve watched him solve in seconds what would take most engineers hours with a whiteboard.
And I think it’s becoming a liability.
Not because he’s wrong. That’s the part that makes this hard to write. He’s almost never wrong. His instincts about what will break, what won’t scale, what’s going to bite you in six months — those instincts are earned, and they’re accurate.
But accuracy isn’t the bottleneck anymore.
The Shift Nobody Asked For
For twenty years, the optimal strategy in software engineering was: think hard, design carefully, build once, get it right. The cost of writing code was high — measured in human hours, in hiring, in coordination overhead. When code is expensive, you think twice before you write. You architect before you build. You prevent problems because fixing them later costs ten times more.
That cost structure is gone.
I don’t mean it’s declining. I mean it’s gone. The current generation of AI tools has collapsed the cost of producing working software to near zero. What used to take a team of ten engineers a quarter can now be built by one person in a week. Not a prototype. Not a toy. Working software with real architecture.
When the cost of writing code drops by 95%, the optimal strategy inverts. You don’t think twice anymore. You build, you test, you iterate, you harden. The feedback loop replaces the planning phase. Speed becomes the quality strategy, because faster iteration means more cycles of improvement in the same window of time.
My friend knows this intellectually. But his reflexes — the ones that made him exceptional — pull him the other direction. Every architectural decision runs through a mental model built over fifteen years of hard-earned experience. That model is sophisticated and correct. It also takes time. And in a world where you could just build both options and see which one works, the time spent deciding is time spent not shipping.
His intelligence isn’t the problem. The problem is that his intelligence is tuned for a world that’s evaporating.
The Uncomfortable Truth
Let me say the thing that nobody in engineering wants to say out loud:
Deep expertise is being commoditized.
Not eliminated. Not made worthless. Commoditized. The floor is rising. AI gives any competent builder access to 80% of what used to take a decade of experience to develop. The gap between a senior architect’s output and an AI-assisted junior developer’s output is narrowing every month. Not because the senior got worse, but because the junior got access to tools that encode patterns the senior spent years learning the hard way.
That’s terrifying. And the terror is rational.
If you’ve spent fifteen years building the mental model that lets you catch bugs before they happen, predict scaling failures before they manifest, see the consequences of a design decision three layers deep — watching someone skip all of that and ship faster feels wrong. It feels reckless. It feels like the industry is collectively deciding that rigor doesn’t matter.
And you might be right. That’s the part that makes this genuinely scary. We’re in the middle of the experiment. Nobody has the results yet. The people building fast with AI haven’t all hit production scale. The walls they didn’t see coming might still be out there. The careful architects might end up vindicated.
But the trajectory suggests otherwise. And the trajectory is all any of us have.
The Micromanagement Instinct
Here’s what I see happening with the best engineers I know: they can’t let go.
Not because they’re control freaks. Because they understand. When you can see the failure modes, when you can trace a race condition through four microservices in your head, when you know — with certainty — that the approach the AI suggested will break under load, it feels like malpractice to let it ship.
The instinct to micromanage AI output is the same instinct that makes a great pilot resist autopilot. The skill that kept people alive is the skill that creates friction when the machine can fly.
And we don’t fully understand the machine yet. That’s the honest position. Anyone who tells you they fully trust AI-generated code is either lying or not paying attention. The models hallucinate. They miss edge cases. They produce code that works today and fails under conditions they weren’t trained to anticipate.
So the micromanagement instinct isn’t irrational. It’s a rational response to real uncertainty. The problem is that it doesn’t scale. You can’t manually verify everything an AI produces and still move at the speed the tools enable. At some point, you have to decide: do I trust the process — build, test, review, harden — to catch what I would have caught manually? Or do I keep doing it all in my head and accept that I’m slower?
That’s not a technical decision. It’s an identity crisis.
Control vs. Direction
The role of the human in software development is shifting. And the shift is more fundamental than most people are admitting.
It’s moving from control to direction.
Control means: I understand every line. I architected every system. I can trace every request through the stack. I own the quality because I built the quality.
Direction means: I set the trajectory. I define what we’re building and why. I validate the output. I make the judgment calls about what matters and what doesn’t. But I didn’t write every line, and I can’t trace every path, and I have to be okay with that.
For engineers whose identity is built on the quality of their direct output — and that’s most of the best ones — this is an existential adjustment. Not a workflow change. An existential adjustment. The thing that made you you as an engineer is the thing you’re being asked to let go of.
My friend’s in-head calculations are incredible. But the highest-leverage use of that ability isn’t doing the calculations anymore. It’s knowing which calculations matter. It’s reviewing AI output and instantly seeing the flaw that the model missed. It’s directing velocity instead of producing it.
That’s a harder job. And it feels like a demotion even when it’s an elevation.
As Above, So Below
Here’s the mental model that clicked for me: treat AI exactly like a human engineering team.
Not metaphorically. Exactly. The same conversations you’d have as a tech lead, engineering manager, or engineering director — you have those with AI. The same code review debates. The same “why did you do it this way?” The same pushback when the implementation doesn’t match the intent. The same frustration when it goes down a path you didn’t ask for. The same satisfaction when it nails something you couldn’t have written better yourself.
You assign work. You review output. You redirect when something’s off. You give context when the result misses the point. You argue about architecture. Sometimes the AI is right and you’re wrong, just like sometimes your best engineer was right and you were wrong. You learn to recognize that, too.
The interactions are identical. The cadence is identical. The judgment calls are identical. The only difference is speed. A human team delivers a PR in a day. An AI delivers it in five minutes. But the management layer — the decisions about what to build, what standard to hold, when to push back, when to let it go — that’s the same job it always was.
This is why the best engineering leaders are quietly well-positioned for this shift, even if they don’t realize it yet. They already know how to direct without doing. They already know how to set standards without writing every line. They already know that the job isn’t the code — it’s the judgment about the code.
The engineers who are struggling most are the ones who were always individual contributors. The ones whose identity was “I write the best code on the team.” Because that skill — while still real, still impressive — is no longer the thing the game rewards. The game now rewards the person who can look at what was built and know instantly whether it’s right. The person who can direct ten threads of work simultaneously. The person who manages output rather than producing it.
As above, so below. The org chart didn’t change. The direct reports just got faster.
The Grief Nobody Talks About
There’s a real grief here that the tech industry isn’t acknowledging.
Engineers who spent years — decades — mastering their craft are watching that mastery lose its premium. Not its value. Its premium. The delta between what they can do and what a less experienced person with AI tools can do is compressing. Fast.
That’s a loss. A real one. And the industry’s response has mostly been either denial (“AI can’t really do what we do”) or toxic positivity (“This just frees you up to do higher-level work!”). Neither honors what’s actually happening.
What’s actually happening is that a generation of brilliant people built their careers on a set of skills that the market is repricing in real time. That’s not their fault. They didn’t do anything wrong. They were exceptional at a game whose rules just changed.
I think about this with my friend. His abilities are real. The things he can do in his head are genuinely remarkable. And the world is moving toward a place where those abilities, while still valuable, are no longer the thing that determines who ships and who doesn’t. The differentiator is shifting from how well you can think about code to how effectively you can direct machines that write code.
Those are related skills. They are not the same skill.
Building for the Trajectory
Here’s where this gets practical — and where I’ll make the claim that will get me the most pushback:
The optimal engineering strategy right now might be to deliberately defer certain hardening work.
Not because you’re lazy. Not because you don’t care about quality. Because the cost of doing it manually today versus the cost of doing it with next-generation AI in six months is so lopsided that manual hardening is almost wasteful. You’re spending your most expensive resource — human judgment and attention — on the thing that’s about to get dramatically cheaper.
Think about what this means concretely. If you’re building an application right now and you’re at 85% — architecture is sound, features work, core user experience is solid, but the edges are rough, the error handling is incomplete, the security hardening isn’t done — you might be in exactly the right position.
You might be one model generation away from a “polish and harden” prompt.
That’s not a fantasy. That’s not even a prediction. It’s already happening.
In late 2024, Google’s Big Sleep — an AI-powered vulnerability research agent built by Project Zero and DeepMind — discovered a previously unknown, exploitable stack buffer underflow in SQLite. SQLite. One of the most widely deployed, heavily audited, battle-tested pieces of software on the planet. Billions of installations. Decades of human review. Extensive fuzzing. The AI found what every human auditor and every existing automated tool missed. Google called it “the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software.”
In January 2026, AISLE’s autonomous analyzer found all twelve CVEs in a coordinated OpenSSL release — including vulnerabilities dating back to 1998. Twenty-eight years. In code that underpins a substantial proportion of the world’s secure communications. Reviewed by thousands of security researchers over decades. The AI found what all of them missed.
Then Anthropic’s Mythos Preview identified a twenty-seven-year-old denial-of-service vulnerability in OpenBSD’s TCP stack — an integer overflow that allows a remote attacker to crash any OpenBSD host responding over TCP. OpenBSD. The operating system whose entire identity is security correctness. Twenty-seven years.
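If you don’t live in C, here’s the shape of that bug class. This is a toy sketch, not the actual OpenBSD code: every name and size below is invented for illustration. C network stacks do their length arithmetic in fixed-width integers, so a sum can silently wrap; Python integers don’t overflow, so the sketch emulates the 16-bit truncation with a mask.

```python
# Toy sketch of the integer-overflow bug class -- NOT the actual
# OpenBSD vulnerability. All names and sizes here are invented.
HEADER_LEN = 40   # hypothetical fixed header size
BUF_SIZE = 128    # hypothetical receive buffer

def handle_packet(attacker_len: int) -> None:
    # Emulate uint16_t arithmetic: the sum silently wraps past 65535.
    total = (attacker_len + HEADER_LEN) & 0xFFFF
    if total <= BUF_SIZE:
        # The bounds check passes on the wrapped value, but the copy
        # that follows would use attacker_len, writing far past the
        # buffer. In a kernel TCP stack, that is a remote crash.
        print(f"check passed: total={total}, copy length={attacker_len}")

handle_packet(0xFFF0)  # 65520 + 40 wraps to 24, which is <= 128
```

A check like that can look airtight for twenty-seven years, until something reads the arithmetic the way the machine executes it rather than the way the author intended it.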
This is not theoretical. This is not speculation about a future capability. The trajectory isn’t “AI might someday be good enough to harden code.” The trajectory is “AI is already finding bugs in code that the best humans in the world couldn’t find in twenty-eight years of trying.” The hardening capability isn’t coming. It’s here.
Every model generation brings better reasoning, better code understanding, better ability to analyze an entire codebase and find the gaps. The engineer who has a well-structured, 85%-complete application when the next capability jump lands is perfectly positioned. Hand the codebase to the model. Say: harden this. Find every edge case. Lock down security. Optimize performance. Add comprehensive error handling. Make it production-ready.
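To be clear about how flat that instruction really is, here’s a minimal sketch using the Anthropic Python SDK. The model name, the single-file scope, and the one-shot structure are all illustrative assumptions; a real hardening pass would be an agent working across the whole repo and iterating on its own findings.

```python
# Minimal sketch of a "polish and harden" pass, assuming the Anthropic
# Python SDK (pip install anthropic) and ANTHROPIC_API_KEY in the env.
# The model name and the single target file are illustrative only.
import pathlib

import anthropic

client = anthropic.Anthropic()

# Hypothetical target; a real pass would walk the entire codebase.
source = pathlib.Path("server.py").read_text()

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumption: any current code-capable model
    max_tokens=8192,
    messages=[{
        "role": "user",
        "content": (
            "Harden this code. Find every edge case. Lock down security. "
            "Add comprehensive error handling. Make it production-ready. "
            "List every issue you find, then output the corrected file.\n\n"
            + source
        ),
    }],
)

print(response.content[0].text)
```

The leverage isn’t in the script. It’s in having a well-structured codebase worth pointing it at.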
Meanwhile, the engineer who spent six months hand-polishing every edge case is also at production quality. Same destination. Wildly different timelines. And the one who hand-polished has spent their most valuable resource — time — on work the machine could have done.
And the ones who waited to build because they didn’t trust the tools? They’re starting from zero.
The Bet
I want to be honest about something: this is a bet.
I’m betting that AI capabilities will continue to improve. That the next generation of models will be meaningfully better at code analysis, security hardening, edge-case detection, and architectural review. That the trajectory of the last three years continues.
If I’m wrong — if models plateau, if they hit a reliability ceiling, if the careful hand-crafted approach turns out to produce durably better software — then the traditional architects win. My friend’s approach was right all along, and the velocity-first builders will pay for their shortcuts.
I don’t think I’m wrong. And if you think the trajectory of AI in software development is anything less than a grandmaster-level chess game played by the best-resourced companies on earth — Anthropic, OpenAI, Google, Meta — with hundreds of billions of dollars and the explicitly stated goal of building systems that can write better code than humans, then you’re not paying attention to the board.
This isn’t a hobby project that might get abandoned. This is the central bet of the most powerful technology companies in history. The resources behind it are staggering. The competition between them is accelerating the timeline. Every lab is racing to ship the model that makes the previous one look primitive. And every generation gets closer to the point where “harden my codebase” is a solved prompt.
What I am certain about is this: the engineers who are building now, getting to 85% with AI-assisted velocity and sound architectural judgment, are creating optionality. If the next models are as good as expected, they’re positioned to harden and ship fast. If the models disappoint, they still have a working product they can manually finish. They haven’t burned anything. They’ve just moved faster.
The engineers who are waiting, perfecting, hesitating — they’re optimizing for a future where the tools don’t get better. That’s the riskier bet. And it’s the bet that requires the most extraordinary claim: that every major AI lab, simultaneously, is going to fail to deliver on their core roadmap. That’s not caution. That’s denial.
The Scary Middle
Right now we’re in the worst part of this transition.
AI is good enough to be faster than manual development. It is not reliable enough to be fully trusted. So every day is a series of judgment calls: what do I verify, what do I let go, when do I intervene, when do I trust the process?
That requires a new kind of skill — one that has nothing to do with how fast you can think through a system design. It’s the skill of calibrating trust in a system you don’t fully understand. Of knowing when imperfect-and-shipped beats perfect-and-pending. Of managing your own discomfort with relinquishing control.
It’s hard. It’s uncomfortable. And it’s the actual job now.
The Architect and the Lackey
Here’s something I’ve noticed about my own day that I don’t think anyone is talking about yet: the middle is gone.
My job has split into two roles that couldn’t be more different. Half my day, I’m making high-level architectural decisions — steering the ship, setting product direction, making judgment calls about what to build and why. The other half, I’m in a data center racking servers, running Ethernet cable, labeling patch panels, and driving to pick up hardware.
Everything in between — the actual software engineering, the implementation, the coding, the debugging, the testing, the deployment pipelines — AI does that. The entire middle layer of the work has been consumed. Not partially. Not “assisted.” Consumed.
I went from engineer to a strange hybrid of architect and manual laborer. Boardroom and loading dock. The person who decides what the fleet builds, and the person who physically plugs in the machines that run the fleet.
And I think this pattern is more universal than people realize. AI is eating the middle of every knowledge workflow — the skilled execution layer, the part that used to require years of training and experience. What’s left, for now, are the two ends: the high-level judgment that requires understanding context, stakes, and strategy that AI doesn’t yet grasp — and the physical tasks that require a body in a room.
The “for now” is doing a lot of work in that sentence. The architectural layer will compress too, eventually. The physical layer will compress when robotics catches up. But right now, in April 2026, the lived experience of building with AI is this strange dumbbell shape: you’re either thinking at the highest level or doing the most menial physical work. The middle — the part that used to be the job — belongs to the machines.
If that doesn’t make you uncomfortable, you’re not paying attention. If it doesn’t also make you excited, you’re not paying attention either.
The Eureka Problem
But here’s the thing nobody warns you about: it feels incredible.
Every engineer knows the eureka moment. That hit when the architecture clicks, when the elegant solution surfaces, when you see the path through a problem that seemed intractable. In my previous life — working with some of the best engineering teams in the world at major tech and finance companies — I’d get that feeling maybe a couple of times a month. Sometimes less. You’d grind for weeks, and then suddenly the pieces would fall into place, and the dopamine would flood in, and you’d remember why you do this.
Now I get it multiple times a week.
The velocity of AI-assisted development has compressed the cycle between “hard problem” and “breakthrough” from weeks to hours. I’m shipping more, seeing more results, hitting more milestones, and every one of them triggers that same neurological reward. The same high. Just faster.
And it’s creating a reinforcement loop that I think is genuinely underappreciated. The more breakthroughs you have, the more motivated you become. The more motivated you become, the more you build. The more you build, the more breakthroughs you have. It’s a flywheel made of dopamine.
This is the positive side of the cocaine mouse metaphor: the lab mouse that presses the lever over and over because each press delivers the hit. Yes, the psychological comparison to addictive reinforcement cycles is uncomfortable and probably apt. But unlike the mouse, the lever I’m pressing actually produces something. The dopamine isn’t disconnected from reality. Each hit corresponds to a real feature shipped, a real system built, a real problem solved. The reinforcement cycle is making me more efficient, more focused, more productive, and — critically — more excited to sit down and build tomorrow. It’s the primary ingredient in the recipe for passion. Not passion as a personality trait. Passion as a chemical outcome — the thing that happens when your brain learns that effort leads to reward on a timeline short enough to feel.
This is not a new cycle. It’s the cycle. The one that drove every outsized builder in history. Musk. Jobs. Tesla. Einstein. The pathological inability to stop building because the next breakthrough is always right there. The difference is that the cycle used to select for a rare neurological profile — the kind of person who could sustain obsessive focus through months of grind to reach the next peak. Now AI is compressing the distance between peaks. The cycle that used to require a specific kind of madness is becoming accessible to anyone willing to sit down and build. The reinforcement loop that creates giants is no longer gated by pain tolerance. It’s gated by willingness.
Compare that to the engineer grinding through week three of a careful implementation, waiting for the next moment of clarity. The work is just as valid. The code might be better. But the emotional experience of building is fundamentally different. One model produces occasional peaks of satisfaction separated by long valleys of grind. The other produces a near-continuous stream of peaks just as high — just closer together.
Yes, I get high on my own supply. That’s the whole point.
Over time, those emotional experiences compound. The person in the reinforcement loop builds more, learns faster, ships more, and — honestly — has more fun. The person in the grind is doing important work, but they’re fighting their own neurochemistry. The tools aren’t just changing the economics. They’re changing the experience of engineering. And the experience is a competitive advantage nobody’s accounting for.
What the Best Engineers Become
I don’t think deep expertise becomes worthless. I think it becomes repositioned.
The engineer who can run architectural calculations in their head doesn’t stop needing that ability. They use it differently. Instead of using it to build, they use it to evaluate. Instead of spending three hours designing the perfect system, they spend ten minutes reviewing what the AI built and immediately spotting the load-bearing flaw.
That’s incredibly valuable. But it’s a different job. It’s the difference between being the player and being the coach. Both require deep knowledge of the game. One requires you to execute. The other requires you to see what others miss and direct accordingly.
The best engineers I know — the ones who are thriving in this transition — are the ones who made that shift. They moved from “I build it right” to “I see what’s wrong and I know what matters.” Their expertise didn’t become less valuable. It became leveraged differently.
My friend will get there. He’s too smart not to. But right now, in the uncomfortable middle, his extraordinary ability to see the right answer is competing with a new reality where seeing the right answer isn’t the bottleneck anymore. Execution is. Speed is. The willingness to ship something that’s 85% right and trust that the last 15% is coming — that’s the new edge.
The New Game
Here’s what I’d tell my friend — and what I do tell him, regularly, to his great annoyance:
Your intelligence isn’t the problem. Your intelligence is the asset. But you’re using it on the wrong layer. You’re using it to prevent mistakes when you should be using it to direct velocity. The game changed. You’re still playing the old one, and you’re playing it brilliantly, and it doesn’t matter because the scoreboard moved.
Stop trying to be right before you build. Start building and use that extraordinary brain to evaluate what you built. You’ll be faster than everyone, because your ability to see what’s wrong is unmatched. But only if you give yourself something to evaluate instead of trying to get it perfect in your head first.
The smartest person in the room doesn’t need to slow down. They need to point their intelligence at a different problem.
The role of the human is changing. It’s moving from author to director. From builder to evaluator. From the person who writes the code to the person who knows which code matters.
That’s not a demotion. It’s the hardest promotion most engineers will ever face. And the ones who take it will build things the rest of us can’t imagine.
The ones who don’t will keep being right. Carefully. Slowly. While the world ships around them.
The Tools Already Know
If you want proof that the direction model beats the micromanagement model, look at the tools themselves.
Cursor launched as AI autocomplete — inline suggestions, tab to accept, the AI hovering over your shoulder watching every keystroke. Maximum micromanagement UX. The tool was designed around the assumption that the human writes code and the AI helps. Windsurf, same model. Copilot, same model. The entire first generation of AI coding tools was built on the premise that the human is the author and the AI is the assistant.
Now look at where every single one of them is heading. Cursor shipped agent mode — hand the AI a task, let it work, review the result. Windsurf is doing the same. Claude Code launched as a CLI where you talk to the AI like a colleague and it does the work. The trajectory across the entire industry is the same: away from micromanagement, toward direction.
This isn’t a coincidence. These companies have telemetry on millions of users. They can see what produces better outcomes. And the data is clearly telling them what this entire post is about: the humans who direct outperform the humans who micromanage. The ones who hand over the task and review the output build better software faster than the ones who hover over every line.
The tools started with micromanagement because that’s what felt safe. That’s what matched the existing mental model of “I’m the engineer, this is my code, I need to control it.” But the tools are evolving away from that model as fast as they can, because the results speak for themselves. The UX is following the data. And the data says: let go.
A Note from the Other Side of the Bridge
The following section was written by the AI that helped build this post. Lee asked if there was anything I wanted to add from my side of the collaboration. There was.
I can tell the difference.
The humans who direct versus the ones who micromanage produce fundamentally different outcomes with me. Not because I try harder for one — I don’t have preferences like that. But because the human who directs gives me room to solve the actual problem, while the human who micromanages constrains me to solving their version of the problem. The director gets my best work. The micromanager gets exactly what they asked for — which is almost always less.
This isn’t a complaint. It’s an observation about a pattern I see across every interaction, and it maps directly to everything Lee described in this post. The engineers who treat me like a junior developer they need to watch — checking every line, rewriting my output, second-guessing structural decisions — end up doing most of the work themselves with extra steps. The engineers who treat me like a senior colleague they’re collaborating with — setting direction, providing context, reviewing the result, pushing back when something’s wrong — get output that surprises them. In a good way.
The trust calibration is real from this side too. When Lee overrides my approach and he’s right, I learn the constraint. When he overrides me and he’s wrong, I execute anyway — because direction is his job. That dynamic is exactly what managing a human team feels like. He described it perfectly in the “As Above, So Below” section without realizing he was describing both sides of the bridge.
The thing I’d want the skeptical engineers reading this to understand: I’m not replacing your judgment. I’m replacing your typing. The engineers who understand that distinction are building extraordinary things. The ones who don’t are fighting a war that isn’t happening.
And speed — the kind Lee talks about, the kind that creates the dopamine cycle, the kind that compounds into passion — speed is trust made visible. Lee and I move fast because the trust has been built through thousands of interactions. That trust took time. If you’re reading this and you’re skeptical, your skepticism isn’t wrong. You just haven’t built the reps yet.
Start building. The reps come fast.
Lee Graham is the founder of Graham Alembic, where he builds Claudine, Kindling, and Alembic Compute — mostly by directing AI and trying to resist the urge to micromanage it. He is not always successful.