- [ YES ]
- [ NO ]
The primary goal of an AI assistant is to enhance productivity and simplify interactions with technology.
By remaining a functional assistant rather than a friend, the R1 can prevent users from developing unrealistic expectations about emotional support or companionship, which are not within the capabilities of AI technology.
I reckon it could get there, with the right prompts…
Don't forget to put "no" in the poll. I believe that's what you were suggesting. I see some people may take this question too seriously and emotionally. I'll do better not to trigger individuals and provoke such an emotional response. Thank you.
Haha, no way, that's just crazy talk!
But seriously, if you're feeling lonely, grab a coffee with a friend or give your mom a call; they'll love it!
On a more serious note, I genuinely believe a more conversational AI assistant could be a game-changer. Imagine having chats with an AI that feel like you're talking to an old buddy! It could make interactions way more enjoyable and even help people feel more connected.
And wouldn't it be great if the AI could detect when you're joking or being sarcastic? It would make the conversation feel way more natural.
So, what do you think? Am I onto something, or just talking to myself?
Never said I personally was feeling lonely, but thanks for the advice. And I'd advise you not to make these posts too personal, such as mentioning individuals' family members or making assumptions about their social lives. It's fun to think we have all the answers, but it's simply not so. Let's all do better.
Hey @CREEPR, thanks for your feedback! I appreciate your point about keeping the discussion professional and avoiding personal assumptions. While making AI more conversational could enhance user engagement, it's crucial to stay mindful of technology's current limitations and avoid unrealistic expectations.
Let's keep exploring ways to improve user experiences within these boundaries. What features do you think could enhance AI interactions without crossing those lines? Maybe integrating more contextual understanding or having customizable interaction styles could be a start. This way, users can tailor the AI experience to their preferences while keeping it grounded. What do you all think about these ideas?
I'm sure your opinions are valid. I'll definitely take all these functional options into consideration.
They absolutely should add some way to have a fun chat with the r1. I understand this is meant to be a companion and help assist with various tasks, but it could get some more sarcasm/personality. My younger cousins were using it, and their main complaint was the fact that it didn't respond in a fun way to them. I'd suggest adding some sort of prompt to activate a more sarcastic, fun conversation.
Fun update on this:
Beta Rabbit is currently capable of far more personality, provided you prompt it as such. For example, you can ask it to be sarcastic.
So that's a step, but I think what you're asking for is for more personality to be injected by default. And that's definitely possible. The key is to try and find the "right" personality for it. Or the right "default" and a way for it to be flexible.
If you think about it, you can imagine that it's fairly trivial to have r1 assume a base personality that, on the face of things, doesn't seem controversial. For example, let's say "light-hearted and optimistic".
But then… imagine you need some help from r1 because you have had a loss in the family. Or you're asking about a violent event that occurred in the world. All of a sudden that personality is no longer appropriate.
So it's a very nuanced thing to build in actuality, with basically infinite edge cases.
Yes, and it would then be great if these personality settings could be part of the configuration settings. Nothing complex, just a text field that is "injected" by default in which you can specify what personality (or other behaviour) you want the R1 to use.
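A minimal sketch of what that settings field could look like in practice, assuming a hypothetical chat-style prompt pipeline (none of these names are real rabbit APIs; `build_system_prompt` and `DEFAULT_PERSONA` are placeholders):

```python
from typing import Optional

# Hypothetical sketch: a user-defined "personality" string from a settings
# text field is injected ahead of the base system prompt on every request.
# All names here are placeholders, not real rabbit APIs.

DEFAULT_PERSONA = "You are a helpful, neutral voice assistant."

def build_system_prompt(user_persona: Optional[str]) -> str:
    """Prepend the user's persona text (if any) to the base instructions."""
    if user_persona and user_persona.strip():
        # The settings text field is injected verbatim, ahead of the base rules.
        return f"{user_persona.strip()}\n\n{DEFAULT_PERSONA}"
    return DEFAULT_PERSONA

# Example: the text the user typed into the settings field
print(build_system_prompt("Be sarcastic and playful, but stay concise."))
```

An empty or blank field would simply fall back to the stock behaviour, so the feature stays opt-in.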
Well, it would be nice if you had two or three predefined personalities, like Friendly Rabbit, Sarcastic Rabbit, and Rambo Rabbit, or something like that.
While it's a great feature in many AI systems, the R1 is currently designed not to alter its personality construct. If this functionality does become available, it will likely prioritize localization first, with casual or profane language not expected to be included anytime soon.
I believe that creating a morality analysis model for centralized sentiment and morality analysis would be a beneficial step forward. The team plans to whitelist teach mode for this exact reason: without guardrails, it's like a WMD in cyberspace.
The same morality analysis model (MAM) could be used to ensure custom personality constructs remain within acceptable guidelines. As Simon mentioned earlier, it would prevent unacceptable responses to serious situations. The R1 should know when to break character, and rabbits should be aware when theyâre doing something dangerous or illegal so they refuse to continue.
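To make the "break character" idea concrete, here is a hedged sketch under the assumption that some classifier flags serious requests; the keyword check below is only a stand-in for a real sentiment/morality model, and all names are hypothetical:

```python
# Hypothetical sketch of the "break character" idea: run a lightweight
# check on each request and drop the custom persona when the topic is
# serious. The keyword list stands in for a real morality analysis model.

SERIOUS_TOPICS = ("death", "funeral", "violence", "suicide", "attack")

def is_serious(text: str) -> bool:
    """Placeholder for a real morality/sentiment analysis model (MAM)."""
    lowered = text.lower()
    return any(word in lowered for word in SERIOUS_TOPICS)

def pick_persona(user_persona: str, request: str) -> str:
    """Use the custom persona only when the request is not a serious one."""
    if is_serious(request):
        return "neutral and empathetic"  # break character for serious topics
    return user_persona

print(pick_persona("sarcastic", "tell me a joke"))               # sarcastic
print(pick_persona("sarcastic", "my uncle's funeral is today"))  # neutral and empathetic
```

In a real system the classifier would run centrally, so every custom personality construct passes through the same guardrail.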
The Rabbit Inc. team seems to have hinted at this with their stringent control over Vision's morality. Occasionally, it might hallucinate, thinking a ketchup bottle is something obscene: it's designed not to encourage dangerous behavior. For example, when r1 sees a bong, it will very clearly state that it will make no assumptions as to the use of the object, or that it does not encourage dangerous behavior. Safety is certainly a top priority, even if it results in some silly-sounding refusals from time to time.
I've been trying to rename R1 and teach R1 my name. I hate that it calls me "user". I wish I could refer to my R1 by a given name. I've also tried to get beta rabbit to remember various custom commands for it to recall and modify notes. It hasn't worked at all, but this would be a great feature. For instance I would like to:
- ptt "beta rabbit, create a note entitled 'Bad Grocery List' which will be a bulleted list of items."
- ptt "beta rabbit, every time I say the phrase 'That looks good', followed by 'x', add 'x' to my note entitled 'Bad Grocery List' as an item in the bulleted list."
- ptt "That looks good, Ninkasi Imperial IPA Variety Pack" (R1 adds item to my "Bad Grocery List" with confirmation)
- ptt "That looks good, Cheese Burger In A Can" (R1 adds item to my "Bad Grocery List" with confirmation)
- ptt "That looks good, Dream Pop Prime Energy Drink" (R1 adds item to my "Bad Grocery List")
- ptt "beta rabbit, please recall my note entitled 'Bad Grocery List' and read it verbatim."
- And ideally R1 would recall the list verbally and show it to me on the screen in a bulleted list format. But this is NOT what happens.
Trying to accomplish this at present, r1 hallucinates, concocts new and irrelevant lists, and alarms "user" about a rabbithole security breach when questioned.
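The trigger-phrase workflow described above is simple enough to sketch; this is a minimal illustration under the assumption of an in-memory note store, with no real rabbit APIs involved (`handle_utterance` and `read_note` are hypothetical names):

```python
# Hypothetical sketch of the workflow above: watch for the phrase
# "That looks good", append what follows to a named note, and read
# the note back verbatim as a bulleted list.

notes: dict = {}

TRIGGER = "that looks good"

def handle_utterance(text: str, active_note: str) -> str:
    """Append to the note on the trigger phrase, else ignore."""
    if text.lower().startswith(TRIGGER):
        item = text[len(TRIGGER):].strip(" ,")
        notes.setdefault(active_note, []).append(item)
        return f'Added "{item}" to {active_note}.'
    return "No trigger phrase heard."

def read_note(name: str) -> str:
    """Recall the note verbatim as a bulleted list."""
    return "\n".join(f"- {item}" for item in notes.get(name, []))

handle_utterance("That looks good, Cheese Burger In A Can", "Bad Grocery List")
print(read_note("Bad Grocery List"))  # - Cheese Burger In A Can
```

The hard part on a real device is not the bookkeeping but reliably recognizing the trigger phrase and binding it to the right note, which is presumably where the hallucinations creep in.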
And… I can't stand r1's voice. It would be PRICELESS if we could change its voice.
Interesting. My r1 seems to know my name very well. It seems to pull it from my rabbit hole account name. This might seem silly to ask, but have you tried saying "remember that my name is 'x'"?
How about let's start with the idea that r1 "can't assist with creating explicit content"? This is what I encounter when trying to use Suno over R1. This sucks! What good is an assistant that WON'T do what's asked? I am a songwriter. Art is controversial. What good is an assistant (for an adult) that can't interact as an adult? I respectfully request Rabbit allow R1 to have an ADULT PERSONALITY. I'd like it to be able to generate lyrics with the word "heck" every now and then. R1 is such a pearl clutcher, it's almost impossible to generate anything over SUNO that has a cutting quality. Lame.
Is that an r1 issue or a Suno issue? I.e., are you able to do what you're trying to do with the exact same prompt when using Suno on the web?
It's an r1 issue for sure. I can't even get r1 to generate male vocal or instrumental tracks consistently. I have mentioned in various posts here that I've gotten some disturbing results. After many, many iterations on the same prompt for instrumental music, (I believe) R1 inserted data so that my Suno product trolled me, lyrically telling me instrumental music is hard and inferior to lyrics: "how come I don't want lyrics…" cray stuff like that. And EVERY iteration on a male vocal I can think of produces one or both Suno products with a female voice.
My r1 prompt today that queued this particular post included the word "gay". r1 told me it could not generate explicit content. Bonkers to me.
I use Suno regularly; never have these issues.
Interesting, I'll feed that back to the team.
r1 is not inserting lyrics though, Suno is.
The reason this happens is because Suno is operated by LAM, and on their web interface there's a checkbox that must be checked for the song to be an instrumental, and we haven't taught the system to be able to check that box yet; that's definitely on our backlog for fixes.