Four links I'm thinking about:
The Role of the Human Brain in Programming by Steve Krouse - I am super sympathetic to all of these points and arguments. Particularly the point that working on something 'from the bottom up' informs and changes what you're thinking about. See also Max's slide: the goal informs the work, the work informs the goal.
God is a Parasite by Jason Yuan - I disagree with most (all?) of this one, which I think makes it a good counterpoint to look at. Jason has also done a lot of work that I admire, and it's clear to me he's really wrestling with this stuff. He argues that the AIs we're building are not tools but minds, and that their agency should be respected. I think they are tools - namely sophisticated interpolation and autocomplete - and that treating them as things with agency has the potential to lead us to bad places: seeing signs where there are none and following them down damaging paths.
There are ways that I like the metaphor of 'alien minds' Jason mentions - it's one of my favorite sci-fi themes (along with 'the language you speak determines what you can think'). But the line for me is still so interestingly fuzzy. Tools have always influenced the type of art we make. And because we know what training data and reinforcement feedback go into these models, they seem to me sophisticated tools we've created that transform inputs into outputs probabilistically. Maybe intelligence emerges out of that - the models have certainly done things I didn't expect them to be able to. But my model is still that it won't, or at least that agency won't. I'm being imprecise and I want to try to write more precisely later.
Malleable Software by Ink and Switch - a great survey of the dream of malleable software, the difficulties of it, and how AI could fit in and impact things. I get a real perspective shift from this: there's so much hype around the instant creation of user-prompted apps, and this points out how limiting that is if those apps can't share data. In the ideal world, the concept of apps would disappear - we could compose tool primitives on the fly. The connections to woodworking and cooking (one thing I'm good at, another I struggle with, both physical) are particularly interesting to me. I want to think more about all of this.
Responses to Malleable Software by Devine and Cristóbal - arguments (my interpretation) for more education rather than new software and systems. (I'm being imprecise here as well.) These drive at the heart of the question of abstraction and indirection. Is it important to know how something works, and why it works the way it does? I think so - but there are productive limits depending on the task.
All of these ideas swirling.