I think I've pinpointed a misgiving I have about collaborative LLM use in general: the LLM will never stop and say "I don't know," or "I don't have this mental model, can you help me build it?" That's a fundamental limitation of being trained to generate plausible (not correct) text.