Streaming AI Assistant using BuildShip is very buggy compared to the OpenAI API
Hello everyone,
I'm having a hard time with the FlutterFlow template; I find it pretty buggy with responses. Responses sometimes don't come through at all, and when they do they're often wrong, for example the assistant won't remember my name. Also, if I send a multi-line message, it isn't received. I've included a screen recording of a test from the app and another showing how it should behave with the OpenAI API directly. How can I get the app to match the OpenAI response instead of the wonky template behavior? (A rough sketch of the OpenAI-side call I'm comparing against is included below the links.)
App Test: https://www.loom.com/share/a48cbf7c024b46649fa4059e3883a4a1?sid=eca21d90-acc1-4cc8-84fe-6bae27b40c96
OpenAI API: https://www.loom.com/share/a3f9627b02ea4e7a92451757d6b5d92b?sid=65e49eab-ca42-4890-829c-977a3f6122da
If someone could help me or point me in the right direction, it would be greatly appreciated.
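For reference, this is roughly the direct OpenAI call I'm comparing against. It's a minimal sketch using the official openai Python SDK, with gpt-4o as a placeholder model: the full message history is resent on every turn (which is what lets the assistant remember the name), and the reply is streamed back chunk by chunk.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Conversation history is kept client-side and resent on every turn,
# which is what lets the model "remember" things like a name.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi, my name is Alex.\nCan you remember that?"},
]

# stream=True returns the reply incrementally instead of all at once.
stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=history,
    stream=True,
)

reply = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    reply += delta
    print(delta, end="", flush=True)

# Append the assistant's reply so the next turn keeps the full context.
history.append({"role": "assistant", "content": reply})
```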
3 Replies
It seems like you're experiencing some issues with the FlutterFlow template and its responses. For specific guidance on aligning the responses with the OpenAI API and addressing the bugs you're encountering, I recommend submitting your query via the in-app support button in BuildShip. This will allow the support team to provide tailored assistance based on your detailed description and screen recordings.
For more information on working with templates and troubleshooting, you can also refer to the BuildShip documentation.
cc @Stu, any insights here on the FlutterFlow template?
Yes, these issues should hopefully be resolved by the work @nithinrdy did on the Assistant Streaming node last week, which I have yet to test and integrate into both templates.