Returning both the OpenAI stream and the ElevenLabs TTS stream to the frontend
I am calling my endpoint with a text message. This message is used as the input for the OpenAI Assistant (Stream Response) node, and the resulting stream is then fed into the Stream TTS Audio node.
How do I return both the OpenAI stream and the ElevenLabs stream back to my frontend?
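Since an HTTP response body is a single stream, returning "both" in one call generally means either having the frontend make two separate requests (one per stream) or multiplexing the two streams into a single response and splitting them apart again client-side. A minimal sketch of the multiplexing idea, not BuildShip-specific; `textStream` and `audioStream` are hypothetical ReadableStreams standing in for the outputs of the two nodes:

```ts
// Sketch: multiplex two streams into one SSE-style response body.
// Assumes a Node 18+ backend with Web Streams; `textStream` (OpenAI tokens)
// and `audioStream` (ElevenLabs audio chunks) are hypothetical inputs.
function multiplexStreams(
  textStream: ReadableStream<Uint8Array>,
  audioStream: ReadableStream<Uint8Array>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  return new ReadableStream<Uint8Array>({
    async start(controller) {
      // Forward every chunk of a source stream as a tagged SSE event.
      const pump = async (source: ReadableStream<Uint8Array>, tag: string) => {
        const reader = source.getReader();
        for (;;) {
          const { done, value } = await reader.read();
          if (done) break;
          // Text chunks are sent as-is; audio chunks are base64-encoded
          // so they survive the text-based SSE framing.
          const payload =
            tag === "text"
              ? decoder.decode(value, { stream: true })
              : Buffer.from(value).toString("base64");
          controller.enqueue(
            encoder.encode(`event: ${tag}\ndata: ${JSON.stringify(payload)}\n\n`)
          );
        }
      };

      // Read both sources concurrently and interleave their events.
      await Promise.all([pump(textStream, "text"), pump(audioStream, "audio")]);
      controller.close();
    },
  });
}
```

The frontend would then listen for the `text` and `audio` events and route each one to the chat UI and the audio player respectively.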
AI Support Bot Information
<@112676328783802368> you can react to the relevant answer (message) with a ✅ in this thread when you think it has been solved, by the bot or by a human!
Anyone can react to this message with a ❌ if the GPT bot is unhelpful or hallucinating answers.
Please note: Team members will review and answer questions on a best-effort basis.
cc @Luis, could you please share the frontend repo link used in this streaming video (https://www.youtube.com/watch?v=FuPffezkYio)? It would be helpful for @Brian to check out.
Streaming Response from OpenAI API with No Code
Exploring how you can create a streaming response for the OpenAI Chat Completions API with BuildShip. Without using code, you will be able to build an AI workflow that interacts with the API and gets responses in streaming mode for use in your apps.
Sure, sharing repo link here: https://github.com/rodgetech/simple-oai-streaming-demo
Hi there,
sorry, I haven't had much time to check this out since I ended up implementing it differently (by capturing the text stream in the frontend and sending those chunks to ElevenLabs in another workflow). That said, I don't think this answers my initial question. I can't find anything in the repo or the video about sending two streams back (OpenAI and ElevenLabs). They say I have to set the stream as the "value" of the Return node, but how do I do that with two streams that exist in my workflow?
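In case it helps, the workaround looks roughly like this on the frontend (the endpoint paths and payload shapes are placeholders, not the real workflow URLs):

```ts
// Sketch of the workaround: read the OpenAI text stream in the browser
// and forward chunks of text to a separate TTS workflow.
// `/api/openai-stream` and `/api/tts-workflow` are hypothetical endpoints.
async function streamAndSpeak(message: string): Promise<void> {
  const response = await fetch("/api/openai-stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!response.body) throw new Error("No response stream");

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });

    // Forward text to the TTS workflow sentence by sentence rather than
    // per token, so the generated audio doesn't sound choppy.
    const sentenceEnd = buffer.lastIndexOf(". ");
    if (sentenceEnd !== -1) {
      const sentence = buffer.slice(0, sentenceEnd + 1);
      buffer = buffer.slice(sentenceEnd + 2);
      await fetch("/api/tts-workflow", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: sentence }),
      });
    }
  }

  // Flush whatever text is left once the OpenAI stream ends.
  if (buffer.trim()) {
    await fetch("/api/tts-workflow", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: buffer }),
    });
  }
}
```

This sidesteps the two-streams-in-one-Return-node problem entirely, at the cost of an extra request per chunk of text.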