Thinking about thinking about AI
I’ve been gathering string for a while about all of the machine learning / LLM / generative AI¹ excitement that’s been happening over the past year² in the hopes of having something interesting to say. Thankfully, Robin Sloan has been deep in the work and has a piece that very much mirrors a lot of where I’m coming down (or probably more accurately, I’ve rather artlessly copied his takes).
The post is fairly wide ranging, in the best way, and this bit struck me in particular:
I think it’s really challenging to find the appropriate stance towards this stuff.
On one hand, I find critical deflation, of the kind you’ll hear from Ted Chiang, Simon Willison, and Claire Leibowicz in this recent episode of KQED Forum, appropriate and useful. The hype is so powerful that any corrective is welcome.
However! On the critical side, the evaluation of what’s presently before us isn’t sufficient; not even close. If we appreciate humility from AI engineers, then we ought to have some humility ourselves. Humility — and imagination.
An important fact about these language models — one that sets them apart from, say, the personal computer, or the iPhone — is that their capabilities have been surprising, often confounding, even to their creators.
AI at this moment feels like a mash-up of programming and biology. The programming part is obvious; the biology part becomes apparent when you see AI engineers probing their own creations the way scientists might probe a mouse in a lab.
The simple fact is: even at the highest levels of theory and practice, no one knows how these language models are doing what they’re doing.
Programming and biology! In my own mental models, I’ve replaced the biology bits with something more akin to raising a child, which is fairly hackneyed at this point. The training, the frustrations, the joy at something completely unexpected happening. Child rearing, though, happens so slowly over the course of a lifetime, whereas the language model advances are more like the humble Drosophila melanogaster, that workhorse of biotechnology that somehow manages to be cheap, fast, and good.
For my own part, something as hyped as generative AI almost requires me to turn up my nose in distaste (see also: crypto, NFTs, web3). And yet! My small explorations are some of the most fun I’ve ever had with computers. I’ve got a few decades’ worth of digitized thoughts, and being able to coax some meaning out of them with a handful of Python scripts (in a thus far unsuccessful effort to post not just more but better) is deeply intriguing. The idea of a universally accessible interface to the computer — an electric bicycle for the mind, as Simon Willison put it — is exciting not just to me personally but as someone who sees the often unfulfilled promise of technology that empowers.
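I haven’t shared those scripts, but the flavor is roughly this kind of thing — a hypothetical sketch, not the actual code (the `top_terms` function and the sample notes here are invented for illustration):

```python
from collections import Counter
import re

def top_terms(notes, n=5, min_len=4):
    """Count the most frequent longer words across a pile of notes.

    A crude proxy for "what have I actually been thinking about?" —
    short words are dropped to skip most stopwords.
    """
    words = []
    for note in notes:
        # Pull out alphabetic tokens, lowercase them, keep the longer ones
        words.extend(
            w.lower()
            for w in re.findall(r"[a-zA-Z']+", note)
            if len(w) >= min_len
        )
    return Counter(words).most_common(n)

notes = [
    "Thinking about language models again today.",
    "Language models keep surprising their own creators.",
]
print(top_terms(notes, n=3))
# → [('language', 2), ('models', 2), ('thinking', 1)]
```

Nothing sophisticated — but pointed at a few decades of journal entries and drafts, even frequency counts start to surface themes you’d forgotten you had.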
I don’t love using “artificial intelligence” for any of this, even (ok, especially) the generative apps like ChatGPT. If we’re calling these things artificial intelligence now, what will we call the generation that really does understand what it’s saying? I know there’s already “Artificial General Intelligence” but that kind of proves my point. It’s like how everyone started calling inductive charging wireless charging — what happens if/when we do actually develop an ability to charge a device as easily as WiFi provides a network connection?
¹ Semantics, I know, but if there’s one place to be pedantic about the semantics, it seems like this is it! Anyway, marketing.
² It’s been a pretty busy year for me personally, which has resulted in many drafted, and few published, half-blog posts.