Highlights of the Week
AI Should Help Us Produce Better Code - Agentic Engineering Patterns - Simon Willison’s Weblog
https://simonwillison.net/guides/agentic-engineering-patterns/better-code/
I like to think about shipping better code in terms of technical debt. We take on technical debt as the result of trade-offs: doing things “the right way” would take too long, so we work within the time constraints we are under and cross our fingers that our project will survive long enough to pay down the debt later on.
AI tools will allow us to do things differently: one outcome of code becoming much cheaper to write is that paying down old tech debt is now doable at a fraction of the old cost. Refactorings and code-smell fixes can happen without having to sneak them into the sprint.
Every Layer of Review Makes You 10x Slower
https://apenwarr.ca/log/20260316
Every layer of approval makes a process 10x slower. I know what you’re thinking. Come on, 10x? That’s a lot. It’s unfathomable. Surely we’re exaggerating. Nope. Just to be clear, we’re counting “wall clock time” here rather than effort. Almost all the extra time is spent sitting and waiting. Look:
• Code a simple bug fix: 30 minutes
• Get it code reviewed by the peer next to you: 300 minutes → 5 hours → half a day
• Get a design doc approved by your architects team first: 50 hours → about a week
• Get it on some other team’s calendar to do all that (for example, if a customer requests a feature): 500 hours → 12 weeks → one fiscal quarter
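The progression in that list is just repeated multiplication by ten. A throwaway sketch of the arithmetic (the layer labels are my own shorthand for the article’s chain, not its wording):

```python
# Each layer of approval multiplies wall-clock time by roughly 10x.
# Base case: a simple bug fix takes 30 minutes of actual work.

def wall_clock_minutes(base_minutes: float, layers: int) -> float:
    """Wall-clock time after applying one 10x multiplier per approval layer."""
    return base_minutes * 10 ** layers

layers = [
    "code it yourself",
    "peer code review",
    "architects' sign-off",
    "other team's calendar",
]
for n, label in enumerate(layers):
    minutes = wall_clock_minutes(30, n)
    print(f"{label}: {minutes:g} minutes (~{minutes / 60:g} hours)")
```

Almost none of that multiplied time is effort; it is queueing, which is the article’s point.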
So much of this made so much sense that it is really worth the read. There are no obvious solutions yet, but it’s early days I guess. One candidate is that we can “shift left” now: we probably don’t have to be so opinionated about how code looks, and can instead focus on higher-level architectural changes, which maybe don’t need a separate architects team to look at. Similarly, the architecture team can focus more on the abstract interfaces between teams, which is what they should have been doing all along. We can dream at least.
The job of a code reviewer isn’t to review code. It’s to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don’t need their reviews at all anymore. (Think of the people who first created “go fmt” and how many stupid code review comments about whitespace are gone forever. Now that’s engineering.)
I also liked this reframing of code reviews as trying to automate themselves away. Easier said than done, but maybe with AI code reviews some of that can actually happen.
Agentic AI and the Mythical Agent-Month
https://muratbuffalo.blogspot.com/2026/01/agentic-ai-and-mythical-agent-month.html
Still, the brief mention of Brooks’ Law stayed with me. (The introduction glides past it far too casually.) Thinking it through, I came to conclude that we are not escaping the Mythical Man-Month anytime soon, not even with agents. The claim that “Scalable Agency” bypasses Brooks’ Law is not supported by the evidence. Coordination complexity (O(N²)) is a mathematical law, not a sociological suggestion as some people take Brooks’ Law to mean. Until we solve the coordination and verification bottlenecks, adding more agents to a system design problem will likely just result in a faster and more expensive way to generate merge conflicts.
I guess the counter to this is that if we reduce the human N we might escape it, but a human still has to orchestrate the agents, so we have just shifted the bottleneck. An interesting concept, but I’m not sure yet what it means in practice.
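The O(N²) figure comes from Brooks’ observation that N collaborators have n(n−1)/2 pairwise communication channels, and that holds whether the collaborators are humans or agents. A quick illustration:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n collaborators (Brooks' Law)."""
    return n * (n - 1) // 2

# Each agent may be cheap, but coordination grows quadratically.
for n in (2, 5, 10, 50):
    print(f"{n:>3} collaborators -> {channels(n):>5} channels")
```

Going from 10 collaborators to 50 multiplies the channel count by over 27x, which is why cheap extra agents don’t automatically mean cheap extra throughput.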
2026 Staff Engineers Need to Get Hands-on Again
https://paulamuldoon.com/2026/03/10/2026-staff-engineers-need-to-get-hands-on-again/
One of the things that makes you valuable is your ability to weigh tradeoffs, informed by years of experience of how software gets built. You go into a meeting with product leads and say “This set of features will take six months to build, but we can cut this one feature and have something almost as good in 3 months.” But what took six months in January 2025 takes one month in March 2026. There’s no way you can know that unless you have hands-on experience building with these tools.
The problem with things changing so fast is that it is very easy to lose touch with reality, meaning we all need to try these tools out to see what is possible. I really do not think it is possible to ignore them anymore, and we see more and more stories of this realisation from extremely talented and previously sceptical engineers.
Comprehension Debt — The Hidden Cost of AI Generated Code.
A recent Anthropic study titled “How AI Impacts Skill Formation” highlighted the potential downsides of over-reliance on AI coding assistants. In a randomized controlled trial with 52 software engineers learning a new library, participants who used AI assistance completed the task in roughly the same time as the control group but scored 17 percentage points lower on a follow-up comprehension quiz (50% vs. 67%). The largest declines occurred in debugging, with smaller but still significant drops in conceptual understanding and code reading. The researchers emphasize that passive delegation (“just make it work”) impairs skill development far more than active, question-driven use of AI. The full paper is available on arXiv: https://arxiv.org/abs/2601.20245.
The deeper issue is that there is often no correct spec. Requirements emerge through building. Edge cases reveal themselves through use. The assumption that you can fully specify a non-trivial system before building it has been tested repeatedly and found wanting. AI doesn’t change this. It just adds a new layer of implicit decisions made without human deliberation.
The current trend in AI is to lean into big design up front, waterfall-style planning: design it all, then tell the AI to do the work. The problem is that reality often does not fit the plan, and the plan falls apart once the assumptions baked into it turn out to be incorrect.