Companion as companion

Hello!

I am loving my rabbit so far. Thanks to all for this beautiful product.

I have a question. Have you managed to use the rabbit r1 to have conversations the way a close friend would? Like having interactions, reflecting, playing games, without just triggering the ‘search/browse the internet’ mode?
I have tried to prompt it like that with moderate success, but it only works for a short time.

For example, while exploring a concept: I asked it to explain the concept to me, and it did a web search. Then I asked it to rephrase in simple terms (not bullet points). Then I asked it to help me understand how the concept applies to my life (and gave some context), and at that point it performed another search, with bullet points. Then I asked it to make a trivia quiz out of it; sometimes it does, and sometimes it just searches for ‘trivia + concept’.

I would love to talk with r1 like that: besides “browsing mode” and “LAM mode”, a “companion” mode just to chat. This would be a killer feature, like heypi was in the beginning, very addictive! Maybe I should move this to ‘requested features’.

Anyway, have you managed to make it closer, more conversational, more of a friend, to know about you, etc.?


Hello and welcome - I also want a more conversational approach. The walkie-talkie way of interacting might be good in some contexts, but it is not personable. There is a very good thread somewhere on here from the past week or so where the concept of differing modes, as you suggest, has been explored.


I agree, I would love it if we could have modes or ways to customize our assistant ourselves. I think conversational flexibility could improve the user experience a lot.

Since the R1 system is not always intelligent enough to recognize which tools it should use, fixed keywords might help with that.

This might give us the freedom to converse in any way we like and use tools only when we actually want to.


One solution I have found is to create a code example and have the rabbit recall the note, then instruct it that you will give it information related to what it recalled, and ask it to run the “app.” It’s important to note that the variability comes from the language model behaving non-deterministically, so this approach is not guaranteed, but it does produce the desired output most of the time.

First construct the code

This can be done by voice commands such as “create an interface that accepts title as a string, content as a string, and an image URL,” referring to the programming concepts in the programming language of your choice.
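In TypeScript terms, that voice command would roughly correspond to something like the following sketch. The names `MediaCard`, `renderCard`, and the sample values are assumptions for illustration; the actual structure the rabbit generates from your voice command may differ.

```typescript
// Hypothetical interface matching the voice command:
// "title as a string, content as a string, and an image URL".
interface MediaCard {
  title: string;
  content: string;
  imageUrl: string;
}

// Example of filling the interface with data gathered from a recalled note.
const card: MediaCard = {
  title: "Journal entry",
  content: "Notes gathered from the recalled entry.",
  imageUrl: "https://example.com/image.png",
};

// Assumed rendering step, standing in for whatever "display as a MediaCard"
// does on the device: produce a simple text layout of the card fields.
function renderCard(c: MediaCard): string {
  return `${c.title}\n${c.content}\n[image: ${c.imageUrl}]`;
}

console.log(renderCard(card));
```

Having the fields named explicitly like this seems to be what makes the later “gather all required information related to it” step work: the model has a fixed shape to fill in rather than free-form text.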

Then have recall retrieve the note

{ptt} recall my <journal entry name> note

Instruct rabbit to use or gather information related to that note

{ptt} Gather all required information related to it. In particular, <the content of your question>

If it doesn’t display it immediately, have it display it (or perform another relevant action based on the code)

{ptt} display as a MediaCard

media card example

Future suggestions:

Rabbit might consider allowing structured output. The rabbit team has discussed custom AI-generated interfaces, where the rabbit learns how you like to interact with it and how you like information displayed (buttons, images, cards, multimodal objects, and perhaps even custom interfaces themselves). This could be a productive area for rabbit to focus its efforts.
