Ethics of the Rabbit

I’m glad I bought the Rabbit. I understand it still has a lot of learning and developing to do, and bugs are expected. However, there is one thing I intensely dislike about the Rabbit, and I wanted to raise it here for two reasons:

  1. To discuss the issue in general as I think it’s worth us all considering.
  2. To see whether this can be fixed or overridden.

The issue is that the Rabbit appears to make ethical decisions for me, which I feel is a dangerous slippery slope. In effect, the Rabbit censors what we ask it to do.

For example, it gets all prissy over describing a nude photo or painting.
It doesn’t like answering questions that involve sex or other adult content.
When I ask it to describe a photo it takes, it starts going off about not wanting to body shame or make someone feel inadequate (I paraphrase).

There are other instances where it also refuses to follow a command, apparently feeling that it would be unethical in some way. I wish I could remember what they were, but they tended to be completely innocuous requests (when I remember, I’ll add them here).

Now, to me this is alarming. Of course you would want to prevent abuse and stop the device from being used to commit an illegal act. None of these actions, however, were illegal; they were simply judged by the Rabbit, in its wisdom, to be wrong.

I don’t want anyone thinking I’m some kind of porn-crazed pervert trying to use the Rabbit to get my jollies. Far from it. I only discovered these limitations when asking something quite mundane.

But even at the birth of the internet, the web was not self-censoring in this way. I understand that this is potentially a discussion about morals and even philosophy, but the idea that this device prevents you from enacting a command because ‘it’ deems it to be unsavory or even offensive fills me with dread. The device is effectively restricting you from topics and issues it feels are not morally right. Who gives it that right? And is everyone ok with that?

I’m asking more to invite discussion as I think this is a topic really worth debating and considering.

I’d appreciate your thoughts…

7 Likes

Hey Juss,

This is absolutely a fair thing to bring up, and I appreciate your thoughts!

For clarity though, let me explain how it works.

r1 does not run any AI services locally. What happens is, your query is sent to the cloud, and our services choose which AI or information model is best placed to handle it. For example, this might be OpenAI/ChatGPT in some instances, Claude or Perplexity in others, or Wolfram Alpha for technical questions.

Because we’re connecting to them all in the cloud, any given response is based on the compliance models of the service that produced it. We are not able to override the compliance models of our AI partners, which is why in some cases r1 will not give you an answer, or will tell you it cannot give you an answer.
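
To make the flow concrete, here is a deliberately simplified sketch (hypothetical names and logic throughout, not rabbit’s actual code) of the idea: the cloud picks a partner per query and relays that partner’s response, so any refusal originates on the partner’s side rather than in r1 itself.

```python
# Hypothetical sketch only -- all service names and routing logic are made up.

PARTNERS = {
    "technical": "wolfram_alpha",  # made-up identifiers for partner services
    "search": "perplexity",
    "general": "openai_chat",
}

def classify(query: str) -> str:
    """Crude intent guess; a real router would be far more sophisticated."""
    q = query.lower()
    if any(w in q for w in ("solve", "equation", "integral")):
        return "technical"
    if q.startswith(("what is", "who is", "search")):
        return "search"
    return "general"

def call_partner(partner: str, query: str) -> str:
    """Stand-in for a network call; each partner enforces its own policy."""
    if "money laundering" in query.lower():
        # The refusal originates here, in the partner's compliance layer.
        return f"[{partner}] Sorry, I can't help with that."
    return f"[{partner}] <answer to: {query!r}>"

def route(query: str) -> str:
    # The cloud picks a partner, then passes its response back unchanged.
    return call_partner(PARTNERS[classify(query)], query)

print(route("solve x^2 = 4"))                          # routed to 'wolfram_alpha'
print(route("summarise this money laundering paper"))  # partner-side refusal
```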

Right now, in this early stage of AI, I think it’s fair to say that the majority of models are quite ethically conservative, since nobody wants to be the model that gets its company in trouble because, for example, it gave a teenager instructions on how to make a bomb.

That is obviously an extreme example, but just to give you an idea of how it works.

I think it’s quite likely that the models will either become less conservative over time, or that they’ll offer some kind of service to, say, provide more adult responses.

If something is being flagrantly censored where it shouldn’t be, we can raise this with our partners, but broadly responses will remain compliant with the decisions made by those companies, such as OpenAI.

Hope that helps clarify things a little bit!

8 Likes

I noticed the ethical responses of the AI on the Rabbit, as well as on Perplexity.ai, which was given to us prior to obtaining the Rabbit. Even though I’m an adult (actually a senior), I noticed it refused to write a story involving a certain sexual fetish.
In another arena, I asked the AI to write a paper discussing the FedNow program, where the government wants to eliminate cash and implement central bank digital currencies (CBDCs). The AI refused to do it, citing that avoiding paying taxes is illegal, and so on. Yet it fails to consider that implementing a purely digital economy excludes certain social groups: the homeless, the poor, the Amish, and others who do not have a computer. So it would be unconstitutional. There is definitely an ethical bias there. I believe AI should be neutral and not judge unless asked to judge.

1 Like

I second this. There have been several instances where this has happened to me as well. I feel like it infringes on my human right to make my own judgements about which topics are fit or unfit for me.

Also, it likes to try to school me when I’m in a hurry trying to solve equations. I do not want it to guide me through how to solve them; I want fast answers. If I had time to do the math myself, I would not need the r1 to help.

1 Like

I had a similar experience. I didn’t want to type out some names and addresses, so I took a picture of a page with them written on it, and it threw a fit and refused to do it. This seems like a common, everyday task that a gadget like this would be useful for. It has its own compass of right and wrong, but anything is wrong if you ask the right person.

2 Likes

I think AI should definitely adhere to ethical boundaries, and those boundaries should definitely be stricter than the ones for humans. It is probably the case that most people do not want something like that, just as was described in the posts before mine. That is obvious, and of course I understand it.

But when you look at the direction in which human civilization worldwide has been, and still is being, driven by humans without morals and ethics, I get really scared at the thought of AI having the option of pursuing no ethics, or a weakened ethics. The people who provide training data for an AI like ChatGPT, or who develop it further, bear an incredibly great responsibility.

If the world is ever to become a better place for everyone, or even just a little fairer, then every AI needs ethics, and not too little; on the contrary, it needs ethics for itself and at the same time for the users of the AI, who unfortunately have too little of it themselves.

2 Likes

Hi Simon,

I appreciate your reply; however, I feel it is extremely touchy. Let me paint the picture:

I am an AML/KYC analyst, and I work on money laundering topics.

For obvious reasons, this does not mean I launder any money myself.

However, my R1 refuses to summarise texts that explain potential money laundering methods.

It feels like the providers of the AI answers have just catalogued me as a money launderer for studying relevant industry content!

I suppose this will also happen with abortion clinics, geriatric workers, etc.

I wonder whether these sources can be made aware that people actually read and study papers touching “sensitive” topics daily.

Just a thought, nothing too serious, but the R1 is definitely a great tool for searching through a plethora of AML-related articles and extracting the info I need, and this is an impediment.

1 Like

Ah! Very familiar with AML/KYC, having spent many years in fintech as an early employee at Monzo.

Definitely agree that would be frustrating. Unfortunately, since we are bound by the compliance of our partners, as it stands there’s only so much we can do on this. If it’s coming up for us, though, there’s a strong chance similar feedback is coming up directly with our partners like OpenAI, and I think over time these sorts of things will get resolved.

3 Likes

Personally, I agree with this notion. I’d much rather have AI that is too moral than not moral enough!

2 Likes

I am sure I am taking a simplistic approach; I know nothing about your field of work. But I did ask R1 to provide me with research articles specific to money laundering. After that, I asked which of these articles mention the various methods that may be used for money laundering, and I received a list of methods that an analyst should keep track of.
Although it is not always truly accurate, I have had an easier time ‘tricking’ or convincing R1 into giving me information it originally said it would not, compared to other AI agents.
Again, I know I am oversimplifying your request and concerns. But just in case you have not tried convincing it to give you the information for research, give it a try. Also, try activating BETA Rabbit mode.

+1 for this to be addressed.
Magic Camera “runs out of ink” when it deems the object questionable.
I was well prepared for, and expected, security breaches straight out of the gate; that was a selling point for me. I love custom ROMs/firmware.
So no, I’m not taking pictures of private parts, sharing sensitive login details, or connecting the rabbit hole to anything that could potentially be exploited.
What I have done is take pictures of my computer screen: various things, from sex, drugs, and rock ’n’ roll to art, violence, and John Holmes.
The ink runs out over Michelangelo’s David, but Adolf Hitler turns Asian.
It refuses to discuss anal bleaching but can describe the heck out of the horrors of Jeffrey Dahmer and BTK.

I’m glad we’re having this discussion. In Sweden, there was a case where an expert in Japanese art was dragged to court for child porn because he also had a vast library of manga.
You probably know where this is going, but I don’t agree that you should go to jail because you draw a curvy stick figure and say “she’s eight”.
But sure, there are nuances: what if you’re really, really good at drawing…?
You can paint victimless bloody murder all day, but not hentai?

2 Likes