Escaping Groundhog Day with Agentic Coding


The Liquid Engineer – Issue No. 42

I did a lot of coding and experiments with Claude Code in the past weeks. Once the initial thrill of the speed wears off, frustration kicks in. I felt like I was in the movie Groundhog Day. In it, Bill Murray's character wakes up in the same small-town bed and breakfast on the exact same day, over and over. The world around him resets and only he remembers yesterday. He's stuck in an endless loop, forced to live through the same day again and again.

When you start Claude Code, it doesn't know anything about you or your project. Each time you want a productive session, you have to onboard Claude to your working methods and expectations. For the first weeks this was fine, because it's what we're used to in human interaction. When a new colleague joins the office, the first weeks bring a lot of friction. After a while the new colleague understands the ways of working and the friction fades. For Claude Code, it's always day one.

But there's the Claude.md file for storing information. Claude reads it automatically at the start of every session. This should be easy. Just put all the necessary information for your new "colleague" in that file and he is automatically onboarded, right?

I started with what I'd tell a new colleague as well: please keep a diary or notebook of what's important and what you notice, and put it into the Claude.md yourself. Claude is a willing servant, so he did that. The file filled up quickly, leading to a warning from Claude Code a week later: with over 40,000 characters, my Claude.md was polluting the context window.

The Context Window Problem

Claude has a limited context window. Like a human, it can only remember so many things. The way I imagine the context window is similar to how humans work.

A productive coding session for me looks like this. I get to my desk after breakfast: my stomach is full and my brain is ready for some work. I familiarize myself with today's task to build up context. Then I have a few productive hours of work. Hopefully I have solved the task by then, because I'm getting hungry again and my brain needs a break. It's getting filled with too much detail. So I save my work, make some notes on what I learned and where to continue, and go for lunch.

LLMs work the same way. They have a limited context window. Once it fills up, they offer to compact the context, which may or may not work. For me, it rarely does. It's usually better to find a closing point with the agent before the context window is exhausted, save the work, and document the key learnings for the next session. Then start a fresh agent. Dangerously similar to how humans work, too.

Back to Claude.md

So, going back to the Claude.md. Having too much information in the file means your morning consists of three hours of reading before you can get started on your actual work. That won't be very productive: you're exhausted before you even begin. Onboarding your agents is still an unsolved craft in agentic AI.

I went from 0 to 40,000 characters and back to 0. Now my Claude.md looks like this: it establishes only the high-level, non-negotiable principles of the project. The "laws of physics" for how things work here. These are the rules that, if broken, cause the most friction and waste the most time.

  1. The Claude.md files are not to be touched by the agents unless explicitly requested.
  2. This repository has a pre-commit hook that prevents commits with linter or unit test errors. Circumventing this is forbidden. Instead, fix the tests and linting.
  3. This app doesn’t need backwards compatibility. The goal is a simple solution.
  4. Avoid defensive fixes. In this code base, there’s usually a right place for the fix. Take your time to locate it and suggest fixing it there.
  5. Add a unit test first, if applicable, to determine if your fix resolves the problem.
  6. This project supports macOS on Apple Silicon and Linux on amd64 and arm64.
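As a concrete sketch, here is roughly what such a minimal Claude.md could look like as a file. The wording is illustrative, not the literal contents of my file:

```markdown
# Project Principles (non-negotiable)

1. Never edit the Claude.md files unless explicitly asked to.
2. A pre-commit hook blocks commits with linter or unit test errors.
   Do not bypass it (e.g. with `--no-verify`); fix the errors instead.
3. No backwards compatibility required. Aim for the simplest solution.
4. Avoid defensive fixes. Find the right place for the fix and
   propose it there.
5. Where applicable, write a failing unit test first to prove the fix.
6. Supported platforms: macOS (Apple Silicon), Linux (amd64, arm64).
```

The point is that everything here is a rule the agent must never break, not a diary of observations. At well under a thousand characters, it costs almost nothing in context.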

Kent Beck put it nicely:

“The whole landscape of what’s ‘cheap’ and what’s ‘expensive’ has shifted… We just have to be trying stuff!”

Happy experimenting!

What I Learned this Week

Interview with Senior DevOps engineer 2025. This interview with a fake boomer CTO hits a nerve. 😂 VIDEO

Enshittification is my word of the year 2025. LINK

I have high hopes for Qwen3-Coder as my next favourite open source model. And I love watching images of pelicans riding bicycles. LINK

What to Print this Week

This newsletter started out on 3D printing. If you haven't had any contact with it yet, you should, it's great! Here are the most interesting and fun projects I saw last week.

I'm warming up to the idea of a book nook in my bookshelf. But needing 8 colors is just a lot.

Japanese Alleyway Book Nook

It's just... a wonky chest of drawers.

Wonky Chest Of Drawers

I really don't need another ping pong ball in my life. But this one is version 3.0 and has 28% more ping!

Airless Ping Pong Ball 3.0

Hi 👋, I'm Stefan!

This is my weekly newsletter about new technology hypes in general and AI in particular. Feel free to forward this mail to people who should read it. If this mail was forwarded to you, please subscribe here.

Stefan Munz, www.stefanmunz.com
Unsubscribe · Preferences

The Liquid Engineer from OnTree.co

Founder of OnTree.co. Helping you own your AI and escape the sticky, overpriced SaaS trap. Join the movement 🐣
