Debug outtake.
Debugging the sloth with accidental triangles
Progressive resolution based on difference from average. It's not wrong but it's not quite right.
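A minimal sketch of the idea, not the actual sloth code: split a grayscale block only where pixels stray too far from the block's average, so detailed regions get finer resolution. The quadtree framing, names, and flat pixel array are my own assumptions here.

```js
// Sketch: subdivide a block when its pixels differ too much from the block average.
// `pixels` is a flat grayscale Uint8 array of width `w`; `size` is assumed a power of two.
function subdivide(pixels, w, x, y, size, threshold, out) {
  let sum = 0;
  for (let j = y; j < y + size; j++) {
    for (let i = x; i < x + size; i++) sum += pixels[j * w + i];
  }
  const avg = sum / (size * size);

  let maxDiff = 0;
  for (let j = y; j < y + size; j++) {
    for (let i = x; i < x + size; i++) {
      maxDiff = Math.max(maxDiff, Math.abs(pixels[j * w + i] - avg));
    }
  }

  if (maxDiff <= threshold || size === 1) {
    out.push({ x, y, size, value: avg }); // flat enough: keep one low-res block
    return;
  }

  const half = size / 2; // too much variation: recurse into four quadrants
  subdivide(pixels, w, x, y, half, threshold, out);
  subdivide(pixels, w, x + half, y, half, threshold, out);
  subdivide(pixels, w, x, y + half, half, threshold, out);
  subdivide(pixels, w, x + half, y + half, half, threshold, out);
}
```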
Sloth compression attempt
Thumbs up / thumbs down
Screen recording of the demo app, where it records whether I gave a thumbs up or a thumbs down
Finished listening to The Diamond Age. Pretty fun! The Primer was maybe a little different than what I expected from hearing it referenced. But man, I will always love an experience that secretly teaches you how to run it yourself. Need to read the Andy Matuschak essay now.
Redid the way I do threads on this garden. Now a separate file with newline-separated posts. Cautiously optimistic. Still more work to be done to make posting here feel easy.
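Roughly the shape of it, as an illustrative sketch rather than the real build code (the file path and function name are made up):

```js
// One thread per file, one post per line; blank lines are skipped.
import { readFileSync } from "node:fs";

function loadThread(path) {
  return readFileSync(path, "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}

// const posts = loadThread("threads/garden.txt");
```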
Enjoyed an episode of the Eggplant podcast where they interviewed sylvie, a prolific maker of games including sylvie rpg. If I understood correctly, they used multiples of 7 as a constraint, so the game world is 7x7 tiles and it runs at 21 fps.
I'm going to give a work presentation on Constraint Systems, going to start drafting out the pieces here:
Constraint Systems is a series of side projects I've been working on for about five years now.
It's a collection of 30 alternative interfaces for creating and editing images and text.
A lot of us are trying to figure out how to use LLMs as a creative tool. There's a set of projects related to creative coding that I've been thinking about lately and want to round up.
Samin recently detailed their project SerendipityLM, which focuses on "interactive evolutionary exploration of generative design models". It features a selection of generative art, mainly shader examples, generated through their process.
Turns out a self-healing image looks a lot like a fading paint brush. More to try.
GIF of an erosion experiment
GIFs are probably not going to be kind to this series.
Notebooks (Jupyter, Observable) are another related pattern here. Maybe a notebook-like linear flow but with horizontally scrolling generated functions...
This pattern is actually pretty similar to the flow for Copilot-like autocomplete. But I'd be interested in taking it out of the editor and also making the encapsulation clearer.
I guess the risk is the usual spaghetti-hairball nodes-and-wires issues. I should develop some sort of plan for avoiding that.
In recent LLM experiments I've been thinking a lot about how to get codegen into the workflow. For a lot of tasks, conventional (javascript) code would be faster, cheaper, and less prone to error than an LLM call. I'd like to have the LLM generate that code once -- then the game becomes how to encapsulate, verify, and sometimes modify the code it generates.
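Something like this shape, as a hedged sketch of the "generate once, then verify" loop; `askModelForCode` and the test-case format are hypothetical stand-ins, not an existing API:

```js
// Generate a function once, wrap it, and check it against known cases before trusting it.
async function generateVerifiedFunction(askModelForCode, spec, testCases) {
  const source = await askModelForCode(spec); // one-time codegen (returns a function expression)
  const fn = new Function(`return (${source})`)(); // encapsulate as a plain JS function

  // Verify against known input/output pairs before letting it into the workflow.
  for (const { input, expected } of testCases) {
    const actual = fn(input);
    if (JSON.stringify(actual) !== JSON.stringify(expected)) {
      throw new Error(`Generated code failed check for input ${JSON.stringify(input)}`);
    }
  }
  return fn; // from here on it's just cheap, fast, deterministic code
}
```

From there, modifying the generated code is just editing a saved function rather than re-prompting.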
It varies a lot day-to-day, but I think it's safe to say I'm mildly stressed about the overall impact of AI. I've worked with machine learning for a long time now -- falling into it with Fast Forward Labs ten years ago.
At their best, I think current developments encourage me to think about what I want -- out of work, out of computers. There are parts of some of my past work where I think the idea was part of what made it interesting to others, but a big part was also that I put in more work (as the Penn and Teller quote goes) than a reasonable person was willing to. The need to put in work in that specific way seems to be going away (the code could be generated instead).
There are still places to put in the work: carving the joints is partly me trying to work out where, figuring out how to put systems together. The experience of that past work will aid in that... but it does feel like it will be different.
Riding home on the train. The repo on my work computer isn't up-to-date, but since these are all markdown files I can write and trust it will work when I upload later.
A bit of a dip in my new writing habit, mostly because my parents were in town for the weekend. Also maybe have some sort of lingering illness. Hoping it's just a temporary blip. I'm still enjoying the idea of this blog.
Been trying to take stock of where my work and projects are at. The pace of AI experimentation feels a bit exhausting, and a lot of it feels like hype. At the same time, there do appear to be some movements toward steerability, toward interaction at a more feature-based and interpretable level, that are interesting to me. I'm still looking to carve at the joints and put the right pieces together in the right order to make something that feels empowering and extending, rather than a replacement. But it varies day-to-day how achievable and likely I think that is.
Two comedy clips I think about surprisingly often. Tim Robinson on how not everyone knows everything:
<figure><video width="640" height="480" controls src="https://grant-uploader.s3.amazonaws.com/2024-06-19-10-35-26.mp4" type="video/mp4" poster="https://grant-uploader.s3.amazonaws.com/2024-06-19-10-49-31-800.jpg"></video></figure>

Harris Wittels on not being impressed by juggling:
Now writing this on the porch with the AR glasses. I think I finally got Network Manager set up so I feel more confident I can take these and connect to wifi outside of my own. Maybe I'll take them on their first train ride tomorrow.
I'm happy I've gotten as far as I have on this blog, but I'm not quite sure how to use threads and untitled posts yet. Untitled posts feel weird even though part of the goal was to normalize that. Threads feel weird too, like I'm really declaring something important for it to be a thread. I want it to feel lightweight.
Writing this from the backyard using the AR glasses plus the mini PC in the cross-body bag. Trying to get more of a feel for it. Definitely still a lot of wires to track, and I don't trust things like the bluetooth keyboard to work all the time. The transparent mode is pretty, though.