The R1 already does this using Vision, but the times I've used it, it hasn't worked very well (very inaccurate translations). They should stop using GPT-3.5, which is quite weak compared to the current models.
I noticed that if you take a picture in Vision and ask it to translate, it will tell you that translation mode is not possible. Instead, if you ask it "what is in this picture?" or "what do you see?", it will tell you there is a message in "x language" and then translate it.
Exactly! This is one of the things I would suggest is necessary when thinking about translation and considering Rabbit as an assistant when traveling to other countries.