Trying out AprilTags. Thinking about how they could complement hand tracking and vision models. This is using https://github.com/arenaxr/apriltag-js-standalone, and I know what AprilTags are because of https://folk.computer.
Trying out a color picker to get a feel for tabletop UI. I can do it all with pointer fingers (rather than pinch) if I give myself a path to reach them without colliding with other things. Kind of like a maze.
Trying out hand-tracking triggered buttons and some simple LLM calls. There are definite limits imposed by hand-tracking and projector display - but they're kind of interesting ones.
Triggering buttons on overlap
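A minimal sketch of how overlap triggering could work, assuming the hand tracker reports a fingertip point in the same coordinate space as the projected button rectangles. The names and the dwell-count debounce here are my own guesses, not the actual implementation:

```javascript
// Hypothetical sketch: fire a button when a tracked fingertip
// overlaps its projected rectangle. Assumes fingertip coordinates
// arrive in the same space as the button rects.

// Axis-aligned point-in-rect test.
function pointInRect(point, rect) {
  return (
    point.x >= rect.x &&
    point.x <= rect.x + rect.w &&
    point.y >= rect.y &&
    point.y <= rect.y + rect.h
  );
}

// Require the fingertip to dwell inside the rect for a few frames
// so jittery tracking doesn't fire the button instantly.
function makeOverlapTrigger(rect, framesRequired = 5) {
  let count = 0;
  return function update(fingertip) {
    count = pointInRect(fingertip, rect) ? count + 1 : 0;
    return count >= framesRequired; // true -> fire the button
  };
}
```

The dwell threshold is one plausible way to cope with noisy tracking; the original may handle this differently.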
Color change - this wasn't working at first because the webcam was reacting to the light change; fixed by turning on the overhead light so the lighting stays constant.
I got a projector and webcam set up over the tabletop. Experimenting with hand-tracking.
Tracking pinches
Tracking and projecting pointer fingers
Collecting some recordings of recent experiments. Taking photos of my books, using Gemini image edit generation to isolate those books and remove their backgrounds, embedding generated summaries of those books, and exploring them across various layouts.
Spines and covers
Promising result from ranking my handwritten notes using embeddings and then prompting for connections to the top result. Need to prompt it into less clumsy language though.
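The ranking step could be sketched like this (a minimal version assuming each note already has an embedding vector; the embedding model and the connection prompt are out of scope, and the function names are hypothetical):

```javascript
// Hypothetical sketch of the ranking step: score every note's
// embedding against the query embedding with cosine similarity,
// then take the top result to feed into the connection prompt.

function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank notes (each { id, embedding }) against a query embedding,
// highest similarity first.
function rankNotes(queryEmbedding, notes) {
  return notes
    .map((note) => ({
      ...note,
      score: cosineSimilarity(queryEmbedding, note.embedding),
    }))
    .sort((a, b) => b.score - a.score);
}
```

`rankNotes(...)[0]` would then be the note whose text gets handed to the LLM for the "find connections" prompt.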
I got the basics of vector embedding retrieval working for my tabletop notes. Thinking about a more intensely minimal interface that really is image only.
Doing some experiments with my overhead camera setup and the work-in-progress sampler app. Not sure where it's headed, but I do like moving between physical and digital arrangement.
I have this vague idea of using image blocks as 'light sources' that cast low-resolution light based on their content into the surrounding canvas. Haven't figured out the system yet though.
Experimenting with recording timelapses. Niri has this neat dynamic cast target feature, though right now it's triggered by a hotkey and I kept forgetting to switch it.
Capture the difference between two moments in time at https://ghost.constraint.systems/
Experimenting with playing slightly time offset stacked videos. To what purpose I'm not sure.
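One way the time-offset stack could be built, as a rough sketch: clone a source video into several layers, each seeked a little further ahead. Everything here is a guess at the setup, not the actual code:

```javascript
// Hypothetical sketch of the time-offset stack.

// Start times for each layer: 0s, 0.25s, 0.5s, ... for example.
function layerStartTimes(layerCount, offsetSeconds) {
  return Array.from({ length: layerCount }, (_, i) => i * offsetSeconds);
}

// Browser side (not exercised here): stack semi-transparent clones
// of one <video> element, each offset in time.
function buildStack(sourceVideo, layerCount, offsetSeconds) {
  return layerStartTimes(layerCount, offsetSeconds).map((start, i) => {
    const clone = sourceVideo.cloneNode(true);
    clone.style.position = 'absolute';
    clone.style.opacity = String(1 / (i + 1)); // fade later layers
    clone.currentTime = start; // seek this layer ahead
    return clone;
  });
}
```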