Doing some experiments with my overhead camera setup and the work-in-progress sampler app. Not sure where it's headed but I do like moving between physical and digital arrangement.
I have this vague idea of using image blocks as 'light sources' that cast low-resolution light, based on their content, into the surrounding canvas. Haven't figured out the system yet though.
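One very rough sketch of what that system could look like (the names and falloff here are made up, not from the app): downsample each block to a coarse color grid, then let each cell bleed attenuated color into nearby canvas cells.

```typescript
type RGB = [number, number, number];

// Hypothetical shape: a block placed on a coarse canvas grid,
// carrying a low-resolution color grid sampled from its image.
interface ImageBlock {
  x: number; // top-left position on the canvas, in cells
  y: number;
  grid: RGB[][];
}

// Fill a coarse canvas grid with "light" that falls off with distance
// from the nearest cell of each block.
function castLight(
  blocks: ImageBlock[],
  width: number,
  height: number,
  radius = 8
): RGB[][] {
  const canvas: RGB[][] = Array.from({ length: height }, () =>
    Array.from({ length: width }, () => [0, 0, 0] as RGB)
  );

  for (const block of blocks) {
    const rows = block.grid.length;
    const cols = block.grid[0]?.length ?? 0;
    if (rows === 0 || cols === 0) continue;

    for (let cy = 0; cy < height; cy++) {
      for (let cx = 0; cx < width; cx++) {
        // Closest block cell to this canvas cell
        const bx = Math.min(Math.max(cx - block.x, 0), cols - 1);
        const by = Math.min(Math.max(cy - block.y, 0), rows - 1);
        const dx = cx - (block.x + bx);
        const dy = cy - (block.y + by);
        const dist = Math.hypot(dx, dy);
        if (dist === 0 || dist > radius) continue; // inside the block or out of range

        const falloff = 1 - dist / radius; // linear falloff, purely a guess
        const [r, g, b] = block.grid[by][bx];
        canvas[cy][cx] = [
          Math.min(255, canvas[cy][cx][0] + r * falloff),
          Math.min(255, canvas[cy][cx][1] + g * falloff),
          Math.min(255, canvas[cy][cx][2] + b * falloff),
        ];
      }
    }
  }
  return canvas;
}
```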
Experimenting with recording timelapses. Niri has this neat dynamic cast target feature, though right now it's triggered by a hotkey and I kept forgetting to switch it.
Capture the difference between two moments in time at https://ghost.constraint.systems/
Experimenting with playing slightly time-offset stacked videos. To what purpose, I'm not sure.
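Something like this could do the stacking in the browser: the same clip layered as semi-transparent video elements, each nudged a little later in time. The layer count and offset here are arbitrary, not what I actually used.

```typescript
function stackOffsetVideos(src: string, layers = 4, offsetSeconds = 0.25) {
  const container = document.createElement("div");
  container.style.position = "relative";
  document.body.appendChild(container);

  for (let i = 0; i < layers; i++) {
    const video = document.createElement("video");
    video.src = src;
    video.muted = true;
    video.loop = true;
    video.style.position = "absolute";
    video.style.inset = "0";
    // Later layers get more transparent so earlier frames show through
    video.style.opacity = String(1 / (i + 1));
    video.addEventListener("loadedmetadata", () => {
      video.currentTime = i * offsetSeconds; // stagger each layer in time
      video.play();
    });
    container.appendChild(video);
  }
}

stackOffsetVideos("clip.mp4");
```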
I tested out my Sampler prototype by converting highlights on a printout into a digital collage and recorded a video documenting the process.
The setup
The result
Experimenting with putting controls and info on the canvas - with some reciprocal scaling to make it more readable. Needs some more finesse but interesting so far.
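One way to read the reciprocal scaling, as a sketch (the Control type and draw signature are placeholders, not the app's actual types): controls stay anchored in canvas space but get counter-scaled by 1 / zoom, so they keep a constant size on screen.

```typescript
interface Control {
  x: number; // world-space anchor on the canvas
  y: number;
  draw(ctx: CanvasRenderingContext2D): void; // draws around (0, 0)
}

function drawControls(
  ctx: CanvasRenderingContext2D,
  controls: Control[],
  zoom: number,
  panX: number,
  panY: number
) {
  ctx.save();
  ctx.translate(panX, panY);
  ctx.scale(zoom, zoom);
  for (const control of controls) {
    ctx.save();
    ctx.translate(control.x, control.y); // move to the anchor in world space
    ctx.scale(1 / zoom, 1 / zoom); // cancel the zoom so size stays constant
    control.draw(ctx);
    ctx.restore();
  }
  ctx.restore();
}
```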
Different keys to peel off snapshots or videos. Trying to think of the least obtrusive but clear way to mark what is a video vs what is an image.
Been thinking about the fact that the 'peel' clone interaction goes all the way back to MacPaint.
Prompting to transcribe and summarize my tabletop notes. The real question is where I put these for a single source of truth. I think a database; maybe I use the S3 image key as the id? That should be unique and could come in handy, though it may also limit uses for non-S3 images...
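Roughly the record shape I'm picturing, with the S3 key as the id. Field names are guesses, not a settled schema.

```typescript
interface NoteTranscription {
  id: string; // the S3 object key of the source image
  transcript: string; // raw transcription from the model
  summary: string; // short summary
  createdAt: string; // ISO timestamp
  source: "s3" | "other"; // leaves room for non-S3 images later
}
```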
New on constraint.systems: Image Paint
Image pixels are copied and pasted in the direction you click and drag. An attempt to make something tactile like paint, but that "goes with the grain" of its digital nature.
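A minimal sketch of the core move, not the actual Image Paint source: on each drag step, redraw the canvas onto itself offset by the drag delta, clipped to a brush around the cursor, so pixels smear in the direction you drag.

```typescript
function setupImagePaint(canvas: HTMLCanvasElement, brushRadius = 40) {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;

  let last: { x: number; y: number } | null = null;

  canvas.addEventListener("pointerdown", (e) => {
    last = { x: e.offsetX, y: e.offsetY };
  });
  canvas.addEventListener("pointerup", () => {
    last = null;
  });

  canvas.addEventListener("pointermove", (e) => {
    if (!last) return;
    const dx = e.offsetX - last.x;
    const dy = e.offsetY - last.y;
    last = { x: e.offsetX, y: e.offsetY };

    ctx.save();
    ctx.beginPath();
    ctx.arc(e.offsetX, e.offsetY, brushRadius, 0, Math.PI * 2);
    ctx.clip(); // only paint inside the brush
    // Draw the canvas onto itself, shifted by the drag delta,
    // so pixels get copied along the stroke direction.
    ctx.drawImage(canvas, dx, dy);
    ctx.restore();
  });
}
```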