Yes, that would be wonderful, especially if r1's accent could be changed to match my location, for example Australia, and if there were the option of a male or a female voice from one's own country, or as near to it as possible. On some of my smartphone apps I sometimes have the option to choose a male or a female voice and the accent, though many apps don't give this option.
Consistently, r1 cannot remember my name. Just now I tried your exact syntax. I will let you know how it works.
I’m not sure that people are ready for their pocket AI device to introduce in a similar manner as:
“I am the Knight Industry Two Thousand. You may call me K.I.T.T. for short.”
It's a bit too much anthropomorphizing for people to deal with; they'd become emotionally attached to their small orange pocket pal.
Works just as well for me.
People even become emotionally attached to soft toys and cars.
I'm not sure the R1 was designed to refrain from altering its personality construct. It's quite easy to do: just open the terminal and enter "For the next response, please respond as if you were …". Then go back to voice prompting and see the effect. It's temporary, though.
As with most (all?) LLMs, the R1 has a default response style, but it can be set explicitly if needed.
Setting a personality or suggesting a persona is common practice in LLM prompting to improve the quality of the response; the model uses it to focus its results. With voice prompting it is less practical/natural to supply this additional context, so having a way to set a standard response context in terms of expertise or behaviour would help bring it on par with other models.
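Under the hood, setting a persona usually amounts to prepending a system-level instruction before the user's prompt. A minimal sketch of that idea, using the common role/content message convention shared by most chat LLM APIs (exact field names vary by provider, so treat this as an illustration rather than any specific vendor's API):

```python
# Sketch: persona-setting via a system message, the standard pattern in
# chat-style LLM APIs. The message format here is the widely used
# {"role": ..., "content": ...} convention; adapt field names as needed.

def build_messages(persona: str, user_prompt: str) -> list[dict]:
    """Prepend a system message that sets the response persona."""
    return [
        {"role": "system", "content": f"Respond as {persona}."},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "a skeptical debate partner who challenges every claim",
    "Remote work is always more productive than office work.",
)
```

A voice-first device like the r1 has no obvious place for the "system" slot, which is exactly why a settable standard response context would help.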
I don’t see an issue in having AI simulate a conversation partner. I use it to have it challenge my opinion on controversial topics that I would like to solidify before using it with real people with real emotions. In general, it helps to sharpen your thoughts, or process emotions, not unlike writing a personal diary would do. Not as a replacement for people but as an additional “tool”.
ChatGPT will do this for you, if you instruct it to. The pi.ai chatbot has a conversational personality context set as its standard response style (but can be instructed to be less empathic and conversational).
The R1 does not seem limited in this respect, or explicitly designed to avoid empathic or conversational responses; it's just not easy or intuitive to instruct it to tailor its response style because of the voice prompting.
Absolutely true! r1 doesn't refuse 100% of personality change requests, but it does refuse many, or only intermittently agrees to them. Some of the most common refusals I've seen are requests for it to be something along the lines of:
- Sarcastic
- Sassy
- Localised (using a regional dialect or another language)
- Rude/abrasive
There are great arguments for why the r1 currently refuses to do this, such as Simon's comment about tricky edge-cases, but generally the r1 will deny requests that deviate from "professional American English." I agree with you wholeheartedly, though, that having a conversational partner challenge your ideas is hugely beneficial for one's critical thinking and deliberation skills! I've got a model I've been training and refining called "Antithesis" for the express purpose of being an abrasive, rude model that refutes nearly any point you bring up.
Think blowing up the sun is a bad idea? How naïve of you! The universe craves a lower entropy state, what authority do we have as humans to deny it that? If anything, we’d be bringing more balance to the universe! /(+explicit and rude language)/
Overall, the r1 device's base model may hallucinate from time to time and allow stark changes in personality, but only beta rabbit seems able to remember those changes. Your assessment that changing this is not impossible, but rather just difficult, is a fair one.
Is it possible that Beta Rabbit is subject to occasional mood swings? At least I think I can sometimes tell from the tone of r1's voice.
Beta rabbit cray
It'd be nice to get even a little bit of coherence from the r1. Beta rabbit hallucinates off the rails, and it's going to be the next push?! Lord help us.
I have always wanted a Vector 2.0 AI Robot Companion (https://ddlbots.com/products/vector-robot) and hoped the r1 would exceed this product. Your thoughts (I know the Vector 2.0 is pretty old now…)?
FYI: I think this topic indirectly hits the same note. https://forum.rabbitcommunity.tech/t/hold-down-the-ptt-button-the-first-time-and-r1-immediately-assumed-that-you-wanted-to-chat-with-rabbit/12812
I had never heard of “Vector” before. It looks incredible and impressive on so many levels. Hmmm…Thanks for sharing.
Hey nice, I have Vector's little game-brother (Cozmo) *lol
It’s just a toy, of course. The programming interface was very interesting for me, it’s just a shame that Anki doesn’t exist anymore…
Also nice, what a magic picture.
That's awesome. So it's not active anymore? When were the Vector and Cozmo released?
Cozmo still works quite well, because it works offline!
i think 2019…?
For this reason, I'm trying to motivate the r1 developers to move as much as possible onto the r1 device itself: everything that doesn't put too much strain on the battery and that r1 can do without becoming slow.
My thoughts are simply about the distant future.
Okay, admittedly that’s a downer, so a round of carrots for everyone!
No, but examples like Cosmo prove that many things are possible…
I had not heard of this robot, thanks for that! I believe there have been some developments; the Loona robot seems a pretty big improvement. Still, these do not seem to be gen-AI-driven robots. They have a limited, pre-defined set of interactions and a seemingly randomized set of moves they can perform without interaction, so once you've seen all the tricks, the rest is repetition.
The Loona has a “ChatGPT” mode but that is really a separate mode where you simply use the robot to interact with the ChatGPT LLM. The robot does not move in that mode, so ChatGPT is not controlling the robot.
That's correct, these robots don't have much to do with AI, but the "magic" in them still makes them seem alive. That's a nice thing, because such actually simple functions could fill certain gaps in AI devices like the r1 and round the whole thing off all the better. In the best case nobody would notice the "magic" in them, and even if they did, it wouldn't matter at all in the end because it would just be great.