Wednesday, November 26th, 2025 at 2:32 PM
AI Coding checkin

Thinking about a couple of posts in relation to AI use.

One by Geoffrey Litt on using agents to prep a tutorial, particularly this:

Reviewing a big PR that's 80% correct is a huge pain in the ass...

but following a tutorial doc that's 80% correct feels like a fun speed boost for doing the work myself!

And one by Dan Abramov on moving away from agent use:

but to be clear, it wasn't that i didn't "understand" the code or anything. i understand it perfectly fine. but i guess that during normal development, there's a background index of the codebase being built in my brain. and trying to fix something with that index being cold is distinctly unpleasant

My use lately

I've stepped up my use of agents lately, mainly Claude Code, with some sampling of others. Partly it's because I've been working on more conventional apps (versus weird canvas-y things) where stuff is easier to compartmentalize.

I've been trying to use it to fill out some of the gaps and roadblocks I've had for other projects. For me this mostly means having it do database stuff. It also worked through some deployment issues for Tanstack Start that I might have given up on. It did auth for Cosine which I might not have done otherwise.

Despite those success stories, I identify a lot with Dan's post because it wasn't really feeling good. I tried Geoffrey's suggestion of having it build tutorials - I actually printed out the docs, which was nice for reading away from the screen.

This helped! It still has some weird timing edges. Like, if I change direction from the tutorial I had generated, do I regenerate it, wait, and read again? (More often I had the agent do the changes in the moment, which often worked but again wasn't feeling great.)

One thing the tutorial did help me re-recognize is what a herky-jerky experience working with a coding agent is. Waiting for changes and trying to decide how much you need to inspect the diffs - it's so much nicer to be working through a doc, trying slight variations as you go. I end up with a lot more loaded into my head, which also means I can think through the problem on a walk away from the screen.

It was a much better experience to work from the generated doc, but I can't say it's a solved problem - depending on the task I still go back to having the agent do it - and the degree to which it often succeeds makes me question the worth of loading it all into my head...

The level of abstraction definitely factors in here. I think there's an argument that I could be working at a higher level of abstraction and not writing much of the code at all - but the code is how it works, and without working through it I end up feeling disconnected. I think it also limits the ideas I have compared to having my brain chew on the task as I'm coding pieces of it.

There's still a lot to figure out. I also feel like there's pressure to act like you have it all figured out, but I want to work through things honestly.