I remember generative UI being discussed in one of the Rabbit keynotes, and I'm very interested in the concept.
Are there any plans to implement this in the foreseeable future?
For example, responses with multiple answers could be shown as a list that expands per item. (Rabbit could then understand which item is currently open when follow-up queries are made.)
This, of course, would also mean that what is on screen would not be read out verbatim.
I agree. I appreciate the way Perplexity explains what it's doing with that information; Rabbit should passively show the same kind of information in the generative UI.
Wow, congratulations on the update today! I haven't seen the generative UI update yet, but I'm updating now. Very exciting to see what Rabbit did! And I remembered this post…
Yes, I have noticed that it seems to truncate or summarise my Perplexity-based answers. I was wondering if this was in an effort to fit the UI experience it generated. I'm sure that can be fixed, though, right? I wonder if adding it to the UI prompt would help.