Intro
Lots more AI stuff read and watched this week. I also went to a few talks around AI, and again some of the same themes came up. I’m really not sure what it all means, but from reading around, change is the only real guarantee.
Highlights of the Week
He Was a Supreme Court Lawyer. Then His Double Life Caught Up With Him.
https://www.nytimes.com/2025/12/28/magazine/thomas-goldstein-supreme-court-gambling.html
During this run he won a total of about $50 million, and even though he had sold roughly 75 percent of his stakes to investors, he still personally cleared about $12 million. Flush with his success against Gores, Goldstein sat down to a heads-up match with a real estate magnate named Bob Safai — and this time he didn’t spread the risk by taking on backers. “I just have convinced myself, because I won $50 million in heads-up poker, that I am a savant at heads-up poker,” Goldstein told me. He promptly lost $14 million to Safai, all out of his own pocket.
Nothing really to take away from this one; it’s just a really good story. Well, bad if you’re him or someone close to him, but a good story all the same. It’s a warning about getting too caught up in gambling, or thinking you’re way better than everyone else. He believed in himself, and still does by all accounts, but often that is not enough. Hedge your bets too, I guess, or at least don’t bet everything you have without hedging.
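That last point can be made concrete. As a hedged sketch (the numbers and function here are mine, not from the article), the Kelly criterion gives the fraction of a bankroll worth risking on a favourable bet, and even for a genuine savant it is never everything:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly criterion: fraction of bankroll to stake on a bet.

    p: probability of winning
    b: net odds (you win b for every 1 staked)
    Returns 0 for bets with no positive edge.
    """
    q = 1 - p  # probability of losing
    f = p - q / b
    return max(f, 0.0)

# Even with a real 60% edge at even odds, the optimal stake
# is only 20% of bankroll, nowhere near all of it:
print(kelly_fraction(0.6, 1.0))  # ≈ 0.2

# With no edge, the right stake is zero:
print(kelly_fraction(0.4, 1.0))  # 0.0
```

The formula makes the lesson explicit: overestimating `p` (believing you are a savant) inflates the stake, and betting more than the Kelly fraction raises the risk of ruin even when the edge is real.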
The Bitter Lesson
http://www.incompleteideas.net/IncIdeas/BitterLesson.html
One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.
In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that “brute force” search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.
Over and over we see brute-force scaling of compute blowing “smart” methods out of the water, relying on Moore’s Law to get there in the end. There’s not much in the piece on the opposite case though, so I wonder: are there many instances where scaling hasn’t managed to get past smarter algorithms?
LLM Predictions for 2026 and The Normalization of Deviance in AI
https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026
https://embracethered.com/blog/posts/2025/the-normalization-of-deviance-in-ai/
The Normalization of Deviance describes the phenomenon where people and organizations get used to operating in an unsafe manner because nothing bad has happened to them yet, which can result in enormous problems (like the 1986 Challenger disaster) when their luck runs out.
Despite data showing erosion in colder temperatures, the deviation from safety standards was repeatedly rationalized because previous flights had succeeded. The absence of disaster was mistaken for the presence of safety.
Two articles linking to each other, with the same message. History has shown that when risky behaviour keeps getting away with it, one way or another, we become used to it and lose sight of the actual risks, until it all blows up.
The Rise of Industrial Software
https://chrisloy.dev/post/2025/12/30/the-rise-of-industrial-software
Industrial systems reliably create economic pressure toward excess, low quality goods. This is not because producers are careless, but because once production is cheap enough, junk is what maximises volume, margin, and reach. The result is not abundance of the best things, but overproduction of the most consumable ones. And consume them we do.
A few articles this week circle the same theme: we’re seeing a shift in the value of code, and what that means for us going forward is going to make things tricky.
Steam, Steel, and Infinite Minds
https://x.com/ivanhzhao/status/2003192654545539400/?rw_tt_thread=True
At the beginning of the Industrial Revolution, early textile factories sat next to rivers and streams and were powered by waterwheels. When the steam engine arrived, factory owners initially swapped waterwheels for steam engines and kept everything else the same. Productivity gains were modest.
The real breakthrough came when factory owners realized they could decouple from water entirely. They built larger mills closer to workers, ports, and raw materials. And they redesigned their factories around steam engines (Later, when electricity came online, owners further decentralized away from a central power shaft and placed smaller engines around the factory for different machines.) Productivity exploded, and the Second Industrial Revolution really took off.
The Move Faster Manifesto
https://brianguthrie.com/p/the-move-faster-manifesto/
Moving slowly is often a choice: everyone involved has decided that speed is a subordinate requirement to talking to all the right people, writing all the right documents, and ticking all the right boxes. Sometimes that’s necessary. But the hardest part of moving fast isn’t execution; it’s deciding that it’s necessary, and then convincing people that it’s possible.
Fast, cheap, good is usually framed as pick two, but many times you can get all three; you just need to know what to look for.
21 Lessons From 14 Years at Google
https://addyosmani.com/blog/21-lessons/
Expertise comes from deliberate practice - pushing slightly beyond your current skill, reflecting, repeating. For years. There’s no condensed version.
But here’s the hopeful part: learning compounds when it creates new options, not just new trivia. Write - not for engagement, but for clarity. Build reusable primitives. Collect scar tissue into playbooks.
The engineer who treats their career as compound interest, not lottery tickets, tends to end up much further ahead.
Lots of good stuff in this post for software engineers.
The Next Two Years of Software Engineering
https://addyosmani.com/blog/next-two-years/
Programming shifts: less typing boilerplate, more reviewing AI output for logical errors, security flaws, and mismatches with requirements. Critical skills become software architecture, system design, performance tuning, and security analysis. AI can produce a web app quickly, but an expert engineer ensures the AI followed security best practices and didn’t introduce race conditions.
Narrow specialists risk finding their niche automated or obsolete. The fast-changing, AI-infused landscape rewards T-shaped engineers: broad adaptability with one or two deep skills.
Overall it is a slight shift in skills, and about knowing how to use the new tools effectively. At the minute, for example, reviewing code is as much about making sure it is nice and readable for the long term, so that we can come back and understand it: the names are good, functions aren’t too large, tests exist. However, AI code looks good right off the bat, but that does not mean it is good. The heuristic that nice readable code is good code doesn’t work anymore. It might be nice-looking but bad code. It might not do the right job, or the tests might not test for the thing we actually want. Does it follow the requirements?
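As a hedged illustration of that heuristic failing (an invented example, not from the article): this function is short, well named, documented, and comes with a passing test, yet it misses a hypothetical requirement that discounts be capped at 50%.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    # Reads cleanly, but never enforces the (hypothetical)
    # requirement that discounts are capped at 50%.
    return round(price * (1 - percent / 100), 2)

# The test passes, but it only covers the happy path,
# not the requirement we actually care about:
assert apply_discount(100.0, 20) == 80.0

# A 90% discount sails straight through a style-focused review:
print(apply_discount(100.0, 90))  # 10.0, where the spec wanted 50.0
```

Nothing about the names, length, or docstring flags the problem; only checking the output against the stated requirements does.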
We move on from being the ones who write code to orchestrating agents. It’s not the same work, but it’s not boring work either; there’s still a lot going on. It will probably also involve another layer of abstraction, where engineers move up a level, closer to where architects, strategists, and product managers operate.
What can we do about this, then? One option is to focus on broadening skills. Before, you could specialise narrowly in a specific niche. That is no longer enough on its own. Now you need to be able to tie that in with other things, so you can at the very least operate the agents working on adjacent areas rather than just shipping that work off to another team.
How to Be More Agentic
https://usefulfictions.substack.com/p/how-to-be-more-agentic
Assume everything is learnable
Most subject matter is learnable, even stuff that seems really hard. But beyond that, many (most?) traits that people treat as fixed are actually quite malleable if you (1) believe they are and (2) put the same kind of work into learning them as you would anything else.
As you might gather, I think agency itself is a good example. I learned agency late
This is a really popular word lately, but it is likely just a new take on an old topic. In any case, the core ideas behind it make sense even if they come under a new brand.
Where’s My Orbital Habitat
https://asteriskmag.com/issues/12-books/wheres-my-orbital-habitat
It might sound as if NASA was on the back foot, but historians suggest the agency was all too happy to turn space settlement research into a sacrificial lamb. Howard McCurdy, in Inside NASA, describes how post‑Apollo managers “learned to trade away blue‑sky studies to show Congress they grasped the new age of restraint.” And in The Visioneers, Patrick McCray reports that many within the organization came to associate O’Neill’s space stations with a “giggle factor” that undercut the importance of its mission.
Politics. The whole operation of NASA is focused on keeping funding, and the best way to do that has mostly been to show they’re serious engineers and scientists. They trot out these wacky concepts and demos every now and again almost to say, here’s the stupid stuff we’re not doing, and to normalise the rockets, rovers, and research they are doing.
Discarding the Shaft-and-Belt Model of Software Development
https://secondthoughts.ai/p/the-new-model-of-software-development
This awkward system was dictated by the fact that each factory could only afford one engine. Steam engines were complex, expensive machines, requiring constant attention. As a result, every power loom, lathe, or press had to receive power from a single engine. The shaft-and-belt system could not transmit power over long distances, precluding a single large floor, so factories had to be stacked into multiple smaller floors.
Small steam engines were not practical, but small electric motors were – they were cheap and low-maintenance. As a result, each machine could have its own motor. This eliminated the need for shafts and belts; power was transmitted over wires. Wires could run for a long distance, enabling flexible arrangements and allowing the factory to be laid out in a single floor.
Another post going into detail on how software is changing and how our mental models of the past no longer fit. We’re going to see more and more cheap, small software and less of the big, complex software systems of old.
In Defense of Slop
https://newsletter.rootsofprogress.org/p/in-defense-of-slop
Whenever the cost of creation in a medium falls, the volume of production greatly expands, but the average quality necessarily falls, because many of the new creations are low-quality. They are low-quality because they can be—because the cost of creation no longer prohibits them. And they are low-quality because when people aren’t spending much time or money to create something, they don’t feel the need to invest a lot in it.
Don’t fight slop because it is inevitable.
The week ahead
Of the 3 things I said I’d do last week I managed 2, so not bad. Or at least started them. Skills I’ve not tried yet, but maybe soon.
One thing is continuing to push the tools repo I started last week. Other than that, I want to start writing more during the week so I don’t get to the end of it and wonder what all these things I read were. Even just jotting thoughts down daily, rather than the mostly forgotten notes at the end.