tom - Hi, I'm looking for some advice on response times for BuildShip. I'm currently on the Starter package, querying data from Firestore, and I'm surprised by how slow the response times are. To be honest, I haven't looked into the reasons too deeply, so maybe it's not BuildShip and there's some other bottleneck.
I did, however, notice the following terms on the pricing cards: "no cold start", "Fastest processing tier". Could someone help me compare the packages in terms of response/compute speeds? "No cold start" and "Fastest processing tier" don't necessarily have any concrete meaning to me, or am I just too dumb to understand it?
4 Replies
Hi @tom, you can check the workflow logs to see the response and execution time for each step, from the initial request through each node, in the cloud logs: https://docs.buildship.com/logging.
On the Pro plan (no cold start), you can expect the latency of the first response to drop by roughly 5 to 10 seconds. Please note this applies only to the first request; the time taken by the individual nodes will still depend on their own processing. We'll make this clearer on our pricing page.
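To make the cold-start idea concrete, here is a toy sketch (not BuildShip's actual internals; the class, timings, and behaviour are all hypothetical) of how a cold start adds a one-time penalty to the first request, after which the instance stays warm:

```python
class SimulatedWorkflow:
    """Toy model of a serverless workflow endpoint (illustrative only).

    cold_start_s: hypothetical one-time startup penalty in seconds.
    work_s: hypothetical per-request processing time in seconds.
    """

    def __init__(self, cold_start_s: float = 5.0, work_s: float = 0.2):
        self.cold_start_s = cold_start_s
        self.work_s = work_s
        self.warm = False  # a fresh instance starts cold

    def invoke(self) -> float:
        """Return the simulated latency of one request."""
        latency = self.work_s
        if not self.warm:
            # Only the very first request pays the cold-start penalty.
            latency += self.cold_start_s
            self.warm = True
        return latency


wf = SimulatedWorkflow()
print(wf.invoke())  # first request: cold, includes the startup penalty
print(wf.invoke())  # subsequent requests: warm, processing time only
```

On a "no cold start" tier the instance would effectively always behave as warm, so the first request skips the penalty; on lower tiers, an instance that has been idle long enough can go cold again and the next request pays the penalty once more.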
Thank you so much! Can you help me understand what "first" means here? Is it the absolute first request, after creation and first ship? Or is there a point when the node's lifecycle ends and there's a "second" first?
"First" here refers to the response time of the first step in your logs; you can see it when you open and expand a particular log entry.
I'm still a bit unclear, can we try to explain it with an example? Say I have endpoints X and Y and users 1 and 2.
When User 1 requests endpoint X, there's a 5-10 second delay before the workflow triggers? And then, when User 1 requests endpoint Y, is it the same 5-10 second delay?
And User 1 hasn't "started" the workflow for User 2? If User 2 requests the endpoints, will there be the same delay? And on the lower-tier plan, do you have to add the "cold start" on top of that delay?