Rabbiting Rabbit

I’ve been thinking about the Rabbit R1 all week. Dan Hon has already summed up a good deal of what he thought was lacking, but for me the big problem comes with the ‘flashiest’ part of the Steve Jobs-aping demo: booking a holiday in London. My first reaction was to complain bitterly about the Rabbit CEO asking to book an SUV for his trip to central London, but my real problem is that the Rabbit went ahead and just did it, saying little beyond asking for approval.

There’s no personality there whatsoever. I want an AI assistant to politely say “you want an SUV in London? Here’s a list of reasons that’s a terrible idea, hoo-man, and here’s a guide to the London Underground.” In short, I want Orac. But there’s also the contextual knowledge that Dan talks about — if I ask for a way to get back to the hotel at 3am, then maybe it’s time to get a cab rather than exploring the night transit network for the first time. So, then, I’m after Zen. Or if you’d prefer allusions that aren’t to dystopian British 1970s TV serials, I want Jeeves. It needs to have an opinion — I want it to raise an arched eyebrow at me when I make a stupid pizza order and ask whether sir would rather have a classic pepperoni instead of a feta and olive combination. Without that, what does this really give me over opening the app myself?

(And this is putting aside the idea of selling a $200 device where all the computation takes place in the cloud, with no subscription charge currently planned. How can Rabbit afford the GPU time for 30,000 users — the three tranches that have already sold through and are supposedly due for delivery in the first half of this year?)

Anyway, my secret origin is a mixture of Dynac-X, the Knowledge Navigator video, and segments of BBC2’s The Net on General Magic. Sure, getting an assistant to order a pizza is… bordering on useful, I guess, but I want to be able to say “go to arXiv and generate a review of the literature on task vectors, and choose three papers I should really read in-depth.” I want it to go off, plan out a task, query me on things that don’t make sense, engage in a back-and-forth when required, and actually provide a personality behind the model. We laugh at the people using local LLMs to build their own AI waifus, but at least they are trying to produce that spark of life in their interactions (even if perhaps for not-great reasons).

So, I probably won’t be getting the R1. But we’ll be doing some tinkering, both at Bookend and at home…