Living In The Future
Feb 26, 2023 · 1 minute read

I won’t lie; having spent a large chunk of the weekend reimplementing parts of the ReAct and Toolformer papers on an 11bn-parameter large language model, I get where people are coming from. I’m well aware of what happened with ELIZA, and I know enough about how Transformers work1 to know that ‘spicy autocomplete’ is not exactly wrong. And yet, after spending an afternoon wiring up prompts, helper functions, and scaffolding code, seeing the LLM reach out to the internet, gather information, and summarize and report it in service of answering the question asked of it, well, it feels like witchcraft. Over 30 years of programming experience, and you basically give a fancy graphics card a few examples and let the matrix calculations do the rest2.
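For the curious, the scaffolding really is surprisingly little code. Here’s a minimal sketch of a ReAct-style loop — with a scripted stand-in for the model and a toy search tool, since the point is the wiring, not the weights. The function names and the Action/Observation prompt format here are my own shorthand, not taken verbatim from the papers:

```python
import re

def react_loop(model, tools, question, max_steps=5):
    """Minimal ReAct-style loop: the model either emits an
    'Action: tool[arg]' line (we run the tool and feed back an
    Observation) or a 'Final Answer:' line (we stop)."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = model(transcript)
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        m = re.search(r"Action: (\w+)\[(.*?)\]", reply)
        if m:
            tool, arg = m.group(1), m.group(2)
            # Run the requested tool and append its output as an Observation.
            transcript += f"Observation: {tools[tool](arg)}\n"
    return None

# Scripted stand-in for the LLM so the loop runs end to end.
def fake_model(transcript):
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: search[tallest mountain]"
    return "Thought: I have what I need.\nFinal Answer: Mount Everest"

# Toy "internet" tool; the real thing would hit a search API.
tools = {"search": lambda q: "Mount Everest is the tallest mountain."}

answer = react_loop(fake_model, tools, "What is the tallest mountain?")
```

Swap `fake_model` for an actual LLM call and `search` for a real HTTP request, and that loop is most of the witchcraft.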