Living In The Future

I won’t lie; having spent a large chunk of the weekend reimplementing parts of the ReAct and Toolformer papers on an 11-billion-parameter large language model, I get where people are coming from. I’m well aware of what happened with ELIZA, and I know enough about how Transformers work¹ to know that ‘spicy autocomplete’ is not exactly wrong. And yet, after spending an afternoon wiring up prompts, helper functions, and scaffolding code to see the LLM reach out to the internet, get information, and summarize/report it in service of answering the question asked of it, well, it feels like witchcraft. Over 30 years of programming experience and you basically give a fancy graphics card a few examples and let the matrix calculations do the rest².
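For the curious: the scaffolding involved is surprisingly little code. A ReAct-style loop just alternates between asking the model for a Thought/Action and feeding the tool’s Observation back into the prompt until it produces a final answer. Here’s a toy sketch of the idea (not my actual weekend code); the `search` tool and the scripted model below are stand-ins for a real search API and a real LLM:

```python
import re

def react_loop(model, tools, question, max_steps=5):
    """Minimal ReAct-style loop: the model emits Thought/Action lines,
    we run the named tool and append the Observation to the transcript."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        output = model(transcript)
        transcript += output + "\n"
        # Look for a tool invocation like "Action: search[some query]"
        match = re.search(r"Action: (\w+)\[(.*?)\]", output)
        if match is None:
            # No tool call this turn: treat it as the final answer
            answer = re.search(r"Final Answer: (.*)", output)
            return answer.group(1) if answer else output
        name, arg = match.groups()
        observation = tools[name](arg)
        transcript += f"Observation: {observation}\n"
    return None  # gave up after max_steps

# A scripted stand-in for the LLM, just to show the control flow
def fake_model(transcript):
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: search[capital of France]"
    return "Final Answer: Paris"

tools = {"search": lambda q: "Paris is the capital of France."}
print(react_loop(fake_model, tools, "What is the capital of France?"))
```

The real thing swaps `fake_model` for a call to the LLM (with a few-shot prompt showing the Thought/Action/Observation format) and `search` for an actual web query, but the loop is the same.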

  1. I know how the toys work too! ↩︎

  2. Well, not quite; it still hallucinates a lot of nonsense, so I guess there’s still a few years left of “please write code that actually makes sense” in my career. At least until GPT-4 anyhow… ↩︎