Intro

I’ve been meaning to take up this weeknotes habit for a while. Inspired by Simon Willison’s now-defunct habit, I’m going to try to keep this up for a while. I’ll try out a few different ways of doing it, but for now the main approach will be to take what I’ve read in Readwise Reader, highlighted in Readwise, and noted down in Obsidian, and try to collate it all into one post. Rough and ready is what they’ll be until I can figure out what I’m doing.

Other than that, I’ve read lots of roundup posts this week, which cover a lot of AI stuff. I don’t understand many of the technical details, but going forward I’m going to rely more on Anki to build an understanding of some of these technical topics so I can start reading the more technical posts. Of course, understanding is nothing without using, so more doing things is on the cards. I’ve seen a few tools repos where people have collected loads of small tools for random things, and that is probably one of the best ways of using AI: get it to do what it is good at, which is a small, well-defined tool, and scale linearly rather than growing a monolith over and over. I want to think more about that, though.

Highlights of the Week

The Great Engineering Divergence

https://x.com/pauldix/status/2006423514446749965

Looking at software delivery, you could break it down into a number of things that have to happen: requirements gathering and customer feedback, writing issues, designs and specs, writing code, peer code review, performance testing and UX validation, safe production deployment, and monitoring the result. Code is only one aspect of this pipeline. If coding is 20% of the end-to-end cycle, making it 10x faster only yields ~1.25x overall speedup. To get 10x end-to-end, you have to speed up review, validation, release, and ops—not just typing.

I think this is an interesting take on how AI coding will speed up code, but code is not the whole job, nor even the majority of what a lot of software engineers do. Speeding it up a bit, or even making it 10x faster, won’t speed up the shipping of new features by the same amount, because it just removes one bottleneck while creating new ones. Sure, we might be able to take on larger features or do more, but all the other stuff now becomes important too. It resonates with me and my job: while I’m not a true software engineer and am more in an SRE-type role, code is not the difficult part, nor is it the main thing. It’s been like this for a while, so even if I don’t need to write any code, figuring out how systems are behaving, and even with code, figuring out what code to write, is so much of the actual work that speeding coding up 10x won’t change things all that much.
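The “20% of the cycle, 10x faster, only ~1.25x overall” arithmetic above is just Amdahl’s law. A quick sketch (the function name is mine, not from the post):

```python
def overall_speedup(fraction_sped_up: float, factor: float) -> float:
    """Amdahl's law: end-to-end speedup when only a fraction of the
    pipeline is accelerated by `factor`."""
    return 1.0 / ((1.0 - fraction_sped_up) + fraction_sped_up / factor)

# Coding is ~20% of the delivery cycle; make it 10x faster:
print(overall_speedup(0.2, 10))            # ~1.22x end-to-end
# Even infinitely fast coding caps out at 1 / 0.8 = 1.25x:
print(overall_speedup(0.2, float("inf")))  # 1.25x
```

The cap is the point: the untouched 80% (review, validation, release, ops) bounds the whole pipeline no matter how fast the coding step gets.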

Joy & Curiosity

https://registerspill.thorstenball.com/p/joy-and-curiosity-68

Things are changing and all the tooling around code reviews is built on the assumption that the code was written by a human, that it took a lot of time, that it took a lot of effort, that it would be painful to reorder the commits, that it would be demotivating having to redo the whole change, that the change is very valuable. But what if it wasn’t? What if it wasn’t written by a human and what if it’s just one of, say, five proposed changes that all try to do the same thing, because you started five agents and raced them against each other? What if we don’t have to worry about how often someone or something would have to redo a contribution? What if we don’t have to worry about in which order they produced which lines and can change that? We’ve always treated auto-generated code different from typed-out code, is now the time to treat agent-generated PRs and commits different? What would tooling look like then?

I’m seeing more and more of this: we’re entering a phase of software engineering where the old systems are creaking under the strain of new ways of working. GitHub and even git are no longer really up to the task, and I think we’ll soon see new ways take off. jj has been around for a while, but I think we’ll see more.

LLM Predictions for 2026

https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/

The Normalization of Deviance describes the phenomenon where people and organizations get used to operating in an unsafe manner because nothing bad has happened to them yet, which can result in enormous problems (like the 1986 Challenger disaster) when their luck runs out.


This is something to keep in mind in day-to-day life. A really interesting and true observation: we see it elsewhere too, where if we feel safe, we take more risks, making the overall situation less safe than before.

Reminds me of this from Farnam Street a long time back:

But the things we do to reduce risk don’t always make us safer. They can end up having the opposite effect. This is because we tend to change how we behave in response to our perceived safety level. When we feel safe, we take more risks. When we feel unsafe, we are more cautious.

https://fs.blog/safety-proves-dangerous/

The Bitter Lesson

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that “brute force” search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.

Time and time again in computing, just adding more compute has created more opportunity than trying to outsmart the system or recreate human reasoning or actions directly.

Keep the Robots Out of the Gym

https://danielmiessler.com/blog/keep-the-robots-out-of-the-gym

Think very carefully about where you get help from AI. I use a Job vs. Gym analogy.

  • If we’re working a manual labor job it’s fine to have AI lift heavy things for us because the actual goal is to move the thing, not to lift it.
  • This is the exact opposite of going to the gym, where the goal is to lift the weight, not to move it. In the first case we just want the output, and in the second the whole point is to do the work ourselves.

I fear it is too easy for us to just ask AI to do everything, but something to keep in mind: if you want to improve at something, treat it as exercise, like going to the gym. You’re not going to drive the 5k instead of running it, because that defeats the purpose. An interesting lens for the things I’m using AI for.

Publishing Your Work Increases Your Luck

https://github.com/readme/guides/publishing-your-work/

Start anywhere, start on anything, start something. You’ll never come up with the perfect idea for an OSS library, a business, a podcast, or an article by just thinking about it. Start on something, today. It won’t be the perfect version of the thing you have in your head, but you’ll be in motion. Motion begets motion, progress begets progress. Pick the smallest thing you can do and get started.
Doing the work is the most important part. It’s the nucleus around which everything else revolves. What that “work” looks like, though, is entirely up to you! That’s the fun part. It can take any form and be in any domain. Wherever your curiosity or expertise draw you, dive into that.

Helped inspire me to do this here. Just start with something and get going; see where it goes. It doesn’t really matter where it goes, just get going and adjust course once you’ve got momentum behind you.

2025 Letter

https://danwang.co/2025-letter/

The trouble with these calculations is that they mire us in epistemically tricky terrain. I’m bothered by how quickly the discussions of AI become utopian or apocalyptic. As Sam Altman once said (and again this is fairly humorous): “AI will be either the best or the worst thing ever.” It’s a Pascal’s Wager, in which we’re sure that the values are infinite, but we don’t know in which direction. It also forces thinking to be obsessively short term. People start losing interest in problems of the next five or ten years, because superintelligence will have already changed everything. The big political and technological questions we need to discuss are only those that matter to the speed of AI development. Furthermore, we must sprint towards a post-superintelligence world even though we have no real idea what it will bring.

The letter overall is interesting and I’ve written about it elsewhere, but I like the perspective on AI, China, the USA, and how they all intersect. It seems to be the story of the last year, and it shows no sign of slowing down into the new one.

Jevons Paradox for Knowledge Work

https://x.com/levie/status/2004654686629163154

In the 19th century, English economist William Stanley Jevons found that tech-driven efficiency improvements in coal use led to increased demand for coal across a range of industries. The paradox of course being that if you assume demand remains constant, then the volume of the underlying resource should fall if you make it more efficient. Instead, making it more efficient leads to massive growth, because there are more use-cases for the resources than previously contemplated. The paradox has proven itself repeatedly as we’ve made various aspects of the industrial world more productive or cheaper, and especially in technology itself.

This is the question of the time: will AI replace us (software engineers) entirely, or will it augment us in such a way that there’s more work needed overall? The current feeling is that it’s the second, but nobody knows yet, I think. The other pieces in this post suggest we’re heading that way too, with AI speeding up only a small fraction of the overall work. Code is becoming cheap while everything else becomes more important.
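One way to see the paradox is a toy constant-elasticity demand model (my own illustration, not from the tweet): efficiency lowers the effective price of the service, demand responds, and total resource use can go either way depending on how elastic demand is.

```python
def resource_use(efficiency: float, elasticity: float,
                 k: float = 1.0, resource_price: float = 1.0) -> float:
    """Toy Jevons model. Service demanded: Q = k * p**(-elasticity),
    where p is the effective price per unit of service
    (resource_price / efficiency). Resource consumed: R = Q / efficiency."""
    price_per_service = resource_price / efficiency
    demand = k * price_per_service ** (-elasticity)
    return demand / efficiency

# Double the efficiency and compare resource use:
for e in (0.5, 1.5):
    before = resource_use(efficiency=1.0, elasticity=e)
    after = resource_use(efficiency=2.0, elasticity=e)
    print(f"elasticity={e}: {before:.2f} -> {after:.2f}")
# Inelastic demand (e < 1): use falls, as naive intuition expects.
# Elastic demand (e > 1): use rises -- the Jevons paradox.
```

In this sketch the paradox kicks in exactly when elasticity exceeds 1: new use cases unlocked by cheapness outweigh the savings per use, which is the analogy being drawn for cheap code.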

The Massively Disruptive, Totally Plausible Scenarios That Could Reshape the World in 2026

https://www.politico.com/news/magazine/2026/01/02/black-swan-events-2026-00708074

In the late 1960s, Catholic civil rights protests in Northern Ireland, a province of the United Kingdom then run by a deeply chauvinistic Protestant majority, prompted London to mobilize British Army troops on the pretext of helping local police pre-empt wider unrest. On Jan. 30, 1972 — “Bloody Sunday” — soldiers trained in combat rather than crowd-control shot 26 unarmed civilians at a protest in Derry, resulting in the deaths of 14.
The episode helped launch a low-intensity conflict that would continue for over 20 years between Catholic insurgents who insisted on Irish reunification and a Protestant community, backed by the Crown, adamant that Northern Ireland remain British. It first verged on civil war, then settled into a kind of ritual murder. The middle class, largely unaffected by violence centered in working-class areas, accepted civil dysfunction as background noise and went about its bougie business. British civil servants were morbidly content with “an acceptable level of violence” as P.J. O’Rourke marveled at “heck’s half-acre.”

I’ve never really understood the Troubles, though they affected this county quite a lot in their time. This is an aspect I never really appreciated: they mostly affected ordinary working-class people, so those in power in London never really cared all that much, took them as normal background noise, and never wanted to fix them or do much about them.

Takeaway

  • Code is now cheap and getting cheaper
  • Jevons Paradox - what cheap code means for engineers

The week ahead

  • What are the things in software that are still, or even more than ever, important?
  • Building a tools repo, or other ways of building things that take advantage of AI. Mostly being able to work on lots of disjointed ideas in a single repo where they all get deployed and managed easily. As much an organisational tool for me as it is for AI. The purpose is to cut the time from idea to deploy.
  • Claude Skills - lots of goings-on about this, but I’ve never tried them in anger