Not yet
Shoot, I'm trying to use Hume AI: https://dev.hume.ai/docs/empathic-voice-interface-evi/overview
So I guess this isn't possible?
I think the idea is that the websocket is meant to be used directly from the frontend, and if you need to configure the experience you can use BuildShip to make any sensitive API calls.
For example, you wouldn't want to set the system prompt from the frontend, because the user could read it and even overwrite it.
But for the websocket itself you wouldn't want BuildShip sitting between Hume and the user, because that would add conversation latency.
Do you have an app that you want to implement Hume in?
Or are you starting a new project?
I have a Flutter app I want to implement it into.
I've currently built a speech-to-speech flow using BuildShip and various APIs, but it's not live streaming.