Intro
I’m quite late with this one, but at least it’s out in some shape or form. It still covers what I read during week 6, just a few days late. A lot of AI stuff again this time. I’m not sure how interested I still am in all of it. Some of it is good, but with absolutely everyone reading and writing about the same things, a lot of it isn’t that useful or is only a slight variation on something else.
Highlights of the Week
The Only Skill That Matters Now
https://worksonmymachine.ai/p/the-only-skill-that-matters-now
Anyway, yeah, like I said earlier, Gretzky could already skate. He was incredibly agile. He could stop on a dime. Change direction mid-stride. His edges were so good he could literally dance on ice.
He didn’t become great because he predicted the puck. He became great because he could actually get to ANY position on the ice and be open. The prediction was secondary to the skating.
That’s where we are now. Except our skates are prompts. Our ice is context windows. Our edges are knowing how to talk to Claude or Gemini or whatever comes out next that makes both of them obsolete.
The only skill that matters is being able to adapt to whatever scenario comes our way. So instead of trying to predict where things are going, focus on having a good baseline and being able to adapt. Sounds good in theory, but in practice it could mean optimising for a skill that itself becomes obsolete soon.
How I Use Claude Code
https://boristane.com/blog/how-i-use-claude-code/ Hard to pull good highlights out of this one without taking them out of context, but it’s a good read, right now anyway, on how others use the tool. The plan and research modes are the most interesting to me at the moment: once you get the plan right, everything downstream becomes easier. The trick, as always, is to keep changes small so you can iterate on them much more quickly.
The Final Bottleneck
https://lucumr.pocoo.org/2026/2/13/the-final-bottleneck
I too am the bottleneck now. But you know what? Two years ago, I too was the bottleneck. I was the bottleneck all along. The machine did not really change that. And for as long as I carry responsibilities and am accountable, this will remain true. If we manage to push accountability upwards, it might change, but so far, how that would happen is not clear.
The more things change, the more they stay the same, I guess. Maybe there won’t be all that much of a speed-up from AI, since things still need humans in the loop at some stage, and that’s the part that has always been hard to make faster.
My AI Adoption Journey
https://mitchellh.com/writing/my-ai-adoption-journey This is another one with lots of good stuff about how to use these tools. The core nugget is to just start and try lots of things to see what does and does not work.
Interesting Ideas
https://www.derekthompson.org/p/the-11-most-interesting-ideas-i-read
A simple way to figure out whether to use AI at work, or in life, is to think about the difference between a gym and a job. At a gym, the point isn’t for the weight to be lifted, but for you to lift the weight. At a mere job, however, “the point is for the weight to be lifted.”
Use AI for the jobs in your life. Don’t use AI for the gyms in your life.
I think this is one of the core problems with AI. Use it to do your thinking and you’re going to be worse off. Use it to do your work and you’re going to see huge advantages.
The K-Shaped Future of Software Engineering
https://x.com/ian_dot_so/status/2013316676637294890/
Here’s what I think people miss: coding was never the hardest part.
The hard part is figuring out what to build and understanding users well enough to know what they actually need. It’s selling an idea to skeptical stakeholders. It’s making good decisions with incomplete information. It’s maintaining momentum through the long middle of a project when the initial excitement has faded and the finish line isn’t yet visible.
Staff+ engineers aren’t paid more in the company because they code faster or have a bag of programming tricks. They’re paid for judgment, for context, for the ability to see around corners. They are paid to ship, to own mistakes, to parallelize workstreams and remove risk.
Staff-and-above engineers never really wrote that much code anyway; their impact came from directing others. So this suggests how AI coding tools might impact us, or rather might not: we’re telling the tools what to do and guiding them, but we are still the ones providing the direction.
A Treatise on AI Chatbots Undermining the Enlightenment
https://maggieappleton.com/ai-enlightenment
Sycophancy, meaning insincere flattery, is a well-established problem in models that the foundation labs are actively working on. Mainly caused by reinforcement learning from human feedback (RLHF); getting humans to vote on which model responses they like better, and feeding those scores back into the model during training. Unsurprisingly, people rate responses higher when they are fawning and complimentary.
I never really thought about why they are the way they are, but this makes complete sense now. If you train them to do what humans like, then of course they’re going to tell people everything they want to hear.
How AI Assistance Impacts the Formation of Coding Skills
https://www.anthropic.com/research/AI-assistance-coding-skills
High-scoring interaction patterns: We considered high-scoring quiz patterns to be behaviors where the average quiz score was 65% or higher. Participants in these clusters used AI both for code generation and conceptual queries.
• Generation-then-comprehension (n=2): Participants in this group first generated code and then manually copied or pasted the code into their work. After their code was generated, they asked the AI assistant follow-up questions to improve understanding. These participants were not particularly fast when using AI, but showed a higher level of understanding on the quiz. Interestingly, this approach looked nearly the same as that of the AI delegation group, except for the fact that they used AI to check their own understanding.
• Hybrid code-explanation (n=3): Participants in this group composed hybrid queries in which they asked for code generation along with explanations of the generated code. Reading and understanding the explanations they asked for took more time, but helped in their comprehension.
• Conceptual inquiry (n=7): Participants in this group only asked conceptual questions and relied on their improved understanding to complete the task. Although this group encountered many errors, they also independently resolved them. On average, this mode was the fastest among high-scoring patterns and second fastest overall, after AI delegation.
Like the gym/job analogy earlier, this is a good insight into what actually works and how we might use these tools to help us rather than replace us. If you use AI to just do the work and nothing more, you’re going to see your skills atrophy. Maybe they get replaced by new, better ones, maybe not, but that isn’t clear right now, so it’s best to keep your current skills at least somewhat sharp. And the way to do that is to interrogate the AI on what it did and why things are the way they are.