As for games: I hope you can bring Ruby Quest to the rabbit r1 and make it the first game for this device. It could be controlled by connecting a keyboard over Bluetooth. After all, the protagonist of that game is also a rabbit.
I am sure that if done right, it will result in a rabbit revolution and no one will even want to think about smartphones or any other computers anymore.
I really hope the Rabbit R1 can repeat the leap the iPhone made from the 3G to the 4S era and achieve full voice operation. For example, when someone is using a device or assembling furniture without instructions, the Rabbit R1 could recognize the situation through its camera and explain what to do. Likewise, when an elderly person doesn't know how to use a phone or a piece of computer software, it could identify the problem through the camera, talk them through it, and display the steps on the screen. Or, after a conversation with the Rabbit R1, it could control your phone or computer to complete complex operations for you.
But overly intelligent artificial intelligence can also cause panic, for example around user privacy. To be honest, any company that claims to protect user privacy deserves some suspicion, because an AI can only learn more by collecting more data. The future development of artificial intelligence will get stuck on the privacy question and face all kinds of doubts and legal restrictions. This path is indeed extremely challenging.
Patients with Alzheimer’s disease often face memory loss and declining cognitive function, so many things in daily life are forgotten or overlooked. If the Rabbit R1 could promptly remind them of important matters without requiring them to press any buttons, it would undoubtedly bring great convenience and improvement to their lives.
For example, it could remind patients to eat and take their medicine on time, so their health doesn’t suffer from forgetfulness; it could remind them of the route and the time to head home when they are out, reducing the risk of getting lost; and it could remind them to do simple exercises at set times to maintain a certain level of physical activity.
Yes, unfortunately I completely agree with you, but the problem is simply that companies want to collect all this data because it is worth a lot of money. There is a good way to build a highly intelligent AI that no one needs to fear, but it would earn companies far less money, because they simply wouldn’t be allowed to store most of the data, or at least the metadata. Since I am not naive, I know that unfortunately that is not how it works, so a good middle ground is probably the best option in this case.
The problem with all this data collection is that most people will lose more and more of their freedom and only those who control the data will rule the world.
That’s an interesting question. I would like to think AI should be benevolent by nature, or at least governed by Asimov’s laws. Therefore I am more concerned about the corporations holding all our data and information than I am about AI’s abilities and potential.
AI will always be what humans make of it, until one day they may lose control of it (if that hasn’t happened already… sorry, just a joke). But in any case, the question could even be expanded: are people really afraid of AI, or of the company behind the AI, or even of the governments and regimes behind the AI companies?
I believe the last option is the right answer, because governments and regimes show every day what they are capable of, and that human lives do not count for them; we are all just calculable resources that they manage. So there is logically reason to be afraid of them, if anyone, but not of AI itself. Unless the AI was created specifically to kill people. That type of AI does of course exist, and according to published reports it is currently being used by Israel (and other countries) to select targets. Which brings us full circle back to governments and regimes.
(If you do not understand the German language, it is best to use a translation tool.)
For this reason, any AI company that is even halfway responsible should store as little data as possible, even if the temptation is gigantic, because otherwise it will be the downfall of us all with absolute certainty. (@rabbit @simon)
If you don’t want to believe me, please look up these facts yourself.
Or maybe it goes even further, and the people we should really be afraid of are the ones standing behind the politicians who brought them to power?
Basically the same super-rich individuals who would come out of their bunkers and from Mars after a nuclear war to finance the reconstruction?