This is another one of my benchmark AI tests: the r1 keeps telling me the same old repetitive dad joke even when I tell it not to. What is your experience with this? Is it regional (I am Australian)?
I literally did a blog post about this just this morning, lol
I get the same as you in the UK.
I hope r1 is listening to our feedback. Lv1 so far; will they try for Lv2?
Asking r1 to tell me a joke it has not told me before, every day. Still getting the same 3 jokes. When I hear a new one, I'll know they are listening to owner feedback and showing improvement, IMHO.
This is not your average Turing test, lol ;>)
Every day, unfortunately. This is the benchmark test I run daily: same jokes, little momentum, Groundhog Day for r1 ;>)
Ask “tell me a joke” several times tomorrow; hope it is not the same repetition
It must be because of its instructions
Or lack of updates in the AI space
What do you mean? I’m talking about their instructions to the LLM
Not sure what you mean; we need r1 updates to improve
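For what it's worth, by “instructions” people usually mean the hidden system prompt the app sends to the model with every request. Here is a minimal sketch in an OpenAI-style chat format, purely as an illustration; rabbit has not published r1's prompt or backend, so everything below is an assumption:

```python
# Hypothetical illustration only: the wording and format of r1's actual
# prompt are not public, so these are made-up stand-ins.
messages = [
    # The fixed "instructions" sent with every single request.
    {"role": "system",
     "content": "You are r1, a helpful pocket assistant. Keep answers short."},
    # The user's turn.
    {"role": "user", "content": "Tell me a joke"},
]

# If each request is sent statelessly like this, the model starts from
# the exact same state every time, and tends to land on the same joke.
print(messages)
```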
Still getting the same 3 dad jokes, and the “Beta Rabbit” request gives you nothing?
Just redoing the joke test:
As you can see, one is basic rabbit and the other Beta Rabbit, just a couple of minutes apart.
But the result seems to be constant.
5 months later and it's still the same joke, even after updates. When will AI really kick in for this device? It should know what I have asked before (like Google does). Sigh…
I've also noticed this. I tend to add a random topic to get different jokes, e.g. “Tell me a joke involving tennis”
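That trick works because changing the prompt changes what the model samples. A minimal sketch of automating it, with the topic list and setup entirely made up for illustration:

```python
import random

# Hypothetical workaround sketch: vary the prompt so the model cannot
# fall back on its stock joke. The topic list is arbitrary.
TOPICS = ["tennis", "kangaroos", "coffee", "submarines", "cheese"]

def varied_joke_prompt() -> str:
    """Return a joke request with a randomly chosen topic baked in."""
    return f"Tell me a joke involving {random.choice(TOPICS)}"

print(varied_joke_prompt())  # e.g. "Tell me a joke involving tennis"
```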
Great reddit post on how LLM memory works, for anyone interested in what the solutions look like and why such a seemingly simple thing is a real challenge
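The short version, as I understand it: the model itself is stateless, so “memory” has to be built around it by the app, which stores past answers and injects them into each new request. A minimal sketch of that shape, with all names hypothetical:

```python
# Hypothetical sketch of app-side memory. The model never remembers;
# the app does, and re-sends the relevant history every turn.
told_jokes: list[str] = []

def build_prompt(user_request: str) -> str:
    """Wrap the request with a do-not-repeat list built from history."""
    if not told_jokes:
        return user_request
    avoid = "; ".join(told_jokes)
    return f"{user_request}. Do not repeat any of these jokes: {avoid}"

def remember(joke: str) -> None:
    # In a real system this would be a database, trimmed or summarised
    # so it keeps fitting in the model's context window.
    told_jokes.append(joke)

remember("Why don't skeletons fight each other? They don't have the guts.")
print(build_prompt("Tell me a joke"))
```

The catch is that the history grows without bound, so real systems have to trim, summarise, or selectively retrieve it, which is exactly why this “simple” feature is hard.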