How does cold start work?
Check out my attached image. The purple bar is the execution time of a BuildShip workflow from start to end (I record the timings within the workflow itself). The green bars are measurements from the frontend and can be disregarded (though the BuildShip latency adds to this frontend latency). Does a workflow ALWAYS start cold, or does it only start cold after it has been idle for a certain period? Furthermore, I wonder why the workflow itself runs slower at the beginning and faster toward the end in my statistics. Since I am measuring the times directly in the workflow, I thought a cold start would only affect the startup time, not the speed of the workflow itself.
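For reference, here is a minimal sketch of the kind of in-workflow timing described above, written in TypeScript. The node functions, field names, and logging are hypothetical, not actual BuildShip node APIs:

```ts
// Hypothetical sketch: record a timestamp in the first node, another in the last,
// and log the difference. This measures only the flow body, not server boot or
// flow loading. Node structure and names are made up for illustration.

export async function firstNode(inputs: { payload: unknown }) {
  const startedAt = Date.now(); // wall-clock start of the flow body
  return { ...inputs, startedAt };
}

export async function lastNode(inputs: { startedAt: number; result: unknown }) {
  const durationMs = Date.now() - inputs.startedAt;
  console.log(`flow body took ${durationMs} ms`); // excludes startup latency
  return { result: inputs.result, durationMs };
}
```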

12 Replies
AI Support Bot Information
<@1227983829856157809> you can react to the relevant answer (message) with a ✅ in this thread when you think it has been solved, whether by the bot or by a human!
Anyone can react to this message with a ❌ if the GPT bot is unhelpful or hallucinating answers.
Please note: Team members will review and answer questions on a best-efforts basis.
AI Response
When it comes to BuildShip workflows, a "cold start" occurs on the free and starter plans: if you are making your first API request, or a request after a significant break since the last one, the response time can be longer compared to immediate, consecutive requests.
Regarding why the workflow seems to run slower at the beginning and faster at the end: this could be related to the initial setup or environment loading times that are characteristic of cold starts. Once this setup phase is complete and consecutive requests are being handled, executions tend to proceed faster until the environment goes idle again.
If you are encountering performance issues due to cold starts and you are on a paid plan, certain plans offer a "no cold start" option which can help mitigate this problem.
Unfortunately, these factors can influence overall latency and are something to consider when measuring execution times from within the workflow itself. If you need more specific optimizations, reaching out through in-app support for detailed assistance is recommended.
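For anyone wanting to observe the cold/warm difference from the outside, here is a small probe sketch, assuming a Node 18+ runtime with global fetch and a placeholder workflow URL (both are assumptions, not details from this thread):

```ts
// Call a workflow endpoint twice in a row and compare latency.
// A cold instance should make the first call noticeably slower than the second.
const WORKFLOW_URL = "https://example.com/your-buildship-workflow"; // placeholder

async function timedCall(label: string): Promise<number> {
  const t0 = performance.now();
  const res = await fetch(WORKFLOW_URL, { method: "POST", body: JSON.stringify({ ping: true }) });
  await res.text(); // wait for the full response body
  const ms = performance.now() - t0;
  console.log(`${label}: ${ms.toFixed(0)} ms (status ${res.status})`);
  return ms;
}

(async () => {
  const first = await timedCall("first request (possibly cold)");
  const second = await timedCall("second request (warm)");
  console.log(`cold-start overhead ~ ${(first - second).toFixed(0)} ms`);
})();
```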
BuildShip AI Support
@Gaurav Chadha We just started the 7-day free trial of the "no cold start" add-on. Nevertheless, we can still measure slower initial workflow runs. Is this normal? (It is better, but the initial workflow requests are still slower than the rest.)
Hi @Brian, were you able to check the performance metrics to compare request latency before and after the purchase? This should give you better insight into no cold start.
You can expect the first response time to be about 5 to ~10 seconds, with reduced latency.

Since this is an hourly-based graph that measures every single workflow, I cannot really see anything significant there. I also did not check this before we activated it. Though, what do you mean by "You can expect the first response time to be about 5 to ~10 seconds, with reduced latency."? -> Does this mean that even if "no cold start" is enabled, it will still take longer for a workflow to boot up the first time (after a certain idle time) and take 5 to ~10 seconds?
Nope, this means it should take less time than before. For example, if the startup/boot-up request was taking 20s before, it should now be around 10s - 12s.
Hi @Brian, it helps to clarify the steps taken to execute a flow:
1 - boot up a server to host the project
2 - load the flows into the server's memory
3 - run the flow code
With no cold start enabled, we always have a server instance running with the flows loaded in memory.
I have a suspicion that while step 1 is done correctly, your flows are not being loaded into memory. If you can DM me your project ID, I can take a closer look.
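A rough illustration of the caching idea behind these steps (not BuildShip's actual implementation): with "no cold start", steps 1 and 2 are already done, so only step 3 should contribute to request latency. All names below are hypothetical.

```ts
// Hypothetical model of the three steps: boot server (1), load flows into memory (2),
// run the flow code (3). A warm instance skips 1 and 2 on every request.

type Flow = (input: unknown) => Promise<unknown>;

const flowCache = new Map<string, Flow>(); // step 2 result, kept in memory

async function loadFlow(flowId: string): Promise<Flow> {
  // Placeholder for fetching and compiling the flow definition (the expensive part).
  await new Promise((resolve) => setTimeout(resolve, 2000)); // simulate a slow load
  return async (input) => ({ echoed: input });
}

export async function handleRequest(flowId: string, input: unknown) {
  let flow = flowCache.get(flowId);
  if (!flow) {
    flow = await loadFlow(flowId); // only paid on a cold instance
    flowCache.set(flowId, flow);
  }
  return flow(input); // step 3: the only work a warm instance should do
}
```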
@Brian can you please try running the test again and see if you notice any improvements?
I still do not get this. What are these numbers supposed to mean? Is this 10s - 12s a rough figure for the flow running through steps 1-3 that @Shams listed? These numbers have no meaning for me then, because step 3 varies so much with the workflow's structure.
Not sure here. The first workflow (darker purple bars on the left) seems to have quite a similar run time now. (Remember: my bars solely measure the workflow's run time from start to end, not including steps 1 and 2.) The second workflow (brighter purple bars on the right) again has a huge spike on the very first execution. I checked the workflow itself, and it seems to run very slowly: it makes a lot of "simple" executions and takes several seconds for some of them. I do have a Groq API call in there, so I also checked whether it is due to a Groq cold start, but Groq apparently does not have cold starts, and the Groq chat node executes within a second.
I just want to know if the slow first workflow run is completely removed with this "no cold start" add-on or if it is just reduced. Because if it is just reduced, it is still a cold start to me. The server should always perform the same if we pay for it, not just perform a little better on the first workflow run.

Hi @Brian, can you please share the link to this flow via in-app support? You can attach this Discord conversation link in the message for reference.
Sent it: #722461
Hey @Brian, we checked the workflow you shared for latency issues on the initial request after the no cold start purchase.
According to the 22 execution logs and metrics, all the initial startup requests are under 2-3ms (you can check the request time to call the trigger). The only exception - the request at 2025-02-26 20:09:32, which took 4.01s - also started without any cold start (under 2ms); the time was consumed by the Groq Chat (detailed) node - 2.29s - and the Get Document Query node - 1.48s.
The server performance with no cold start is consistent: initial requests have no startup latency, with request times within 2ms.
Also sent you an email.
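For completeness, a small sketch of attributing time to individual nodes (such as a Groq chat call or a document query) rather than only the whole flow; the step implementations below are placeholders, not the actual workflow's nodes:

```ts
// Wrap individual steps with a timer so slow nodes (e.g. an LLM call or a
// database query) show up separately from request/startup latency.

async function timeStep<T>(name: string, step: () => Promise<T>): Promise<T> {
  const t0 = performance.now();
  try {
    return await step();
  } finally {
    console.log(`${name}: ${(performance.now() - t0).toFixed(0)} ms`);
  }
}

async function runFlow(prompt: string) {
  const chat = await timeStep("groq chat", async () => {
    return { reply: `echo: ${prompt}` }; // stand-in for the Groq chat node
  });
  const doc = await timeStep("get document query", async () => {
    return { rows: [] as unknown[] }; // stand-in for the document query node
  });
  return { chat, doc };
}

runFlow("hello").then((out) => console.log(out));
```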