Cloud Tasks

3. Pitfalls

  • With the exception of tasks scheduled to run in the future, task queues are completely agnostic about execution order: there are no guarantees, and no best-effort attempts, to execute tasks in any particular order.
  • There are no guarantees that old tasks will execute unless a queue is completely emptied. 
  • A number of common cases exist where newer tasks are executed sooner than older tasks, and the patterns surrounding this can change without notice, so handlers should be written to tolerate out-of-order delivery (a sketch of one such pattern follows this list).
  • Cloud Tasks aims for a strict "execute exactly once" semantic. 
  • In situations where a design trade-off must be made between guaranteed execution and duplicate execution, the service errs on the side of guaranteed execution. 
  • A non-zero number of duplicate executions do occur. 
  • Developers should take steps to ensure that duplicate execution is not a catastrophic event (see the deduplication sketch after this list).
  • In production, more than 99.999% of tasks are executed only once.
  • The most common source of backlogs in immediate processing queues is exhausting resources on the target instances. 
  • If a user is attempting to execute 100 tasks per second on frontend instances that can only process 10 requests per second, a backlog will build (the capacity arithmetic is worked through after this list).
  • This typically manifests in one of two ways, either of which can generally be resolved by increasing the number of instances processing requests.
  • Overloaded servers can start to return backoff errors in the form of HTTP 503 responses (a load-shedding handler sketch follows this list).
  • Cloud Tasks will react to these errors by slowing down execution until errors stop.
  • This can be observed by looking at the "enforced rate" field in the Cloud Console.
  • Overloaded servers can also respond with large increases in latency, leaving requests open for longer.
  • Because queues run with a maximum number of concurrent tasks, this can result in queues being unable to execute tasks at the expected rate.
  • Increasing max_concurrent_tasks for the affected queues can help where the value has been set too low and is imposing an artificial rate limit (see the queue-update sketch after this list), but doing so is unlikely to relieve any underlying resource pressure.
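
Because delivery order cannot be relied on, handlers can be written so that a late-arriving old task cannot clobber newer state. The sketch below is one minimal pattern, not anything prescribed by Cloud Tasks: the task payload carries a version number, and the handler drops any update at or below the version already applied. Flask, the route, the payload fields, and the save_profile helper are illustrative assumptions, and the in-memory version map stands in for a durable store.

```python
from flask import Flask, request

app = Flask(__name__)
_latest_version = {}  # entity_id -> last version applied (use a durable store in production)

def save_profile(entity_id, fields):
    print("saving", entity_id, fields)  # placeholder for the real write

@app.route("/tasks/update-profile", methods=["POST"])
def update_profile():
    payload = request.get_json()
    entity_id, version = payload["entity_id"], payload["version"]
    if version <= _latest_version.get(entity_id, -1):
        # A newer (or equal) update was already applied; drop the stale task,
        # but return 200 so Cloud Tasks does not retry it.
        return "", 200
    save_profile(entity_id, payload["fields"])
    _latest_version[entity_id] = version
    return "", 200
```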
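
To keep duplicate execution from being catastrophic, a handler can record which deliveries it has already processed and acknowledge repeats without redoing the side effect. For HTTP-target tasks, Cloud Tasks passes the task name in the X-CloudTasks-TaskName request header, which makes a convenient deduplication key. A minimal sketch, assuming Flask and a hypothetical apply_charge side effect; the in-memory set stands in for a durable store (a database row or cache entry) in practice.

```python
from flask import Flask, request

app = Flask(__name__)
_processed = set()  # task names already handled (use a durable store in production)

def apply_charge(payload):
    print("charging", payload)  # placeholder for the real, non-idempotent side effect

@app.route("/tasks/charge", methods=["POST"])
def charge():
    task_name = request.headers.get("X-CloudTasks-TaskName", "")
    if task_name and task_name in _processed:
        # Duplicate delivery: acknowledge with 200 without repeating the side effect.
        return "", 200
    apply_charge(request.get_json())
    if task_name:
        _processed.add(task_name)
    return "", 200
```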
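
The capacity arithmetic behind the 100-tasks-per-second example is simple but worth writing down: with one 10-request-per-second instance the backlog grows by 90 tasks every second, and at least ten such instances are needed just to keep pace. A throwaway calculation using only the numbers quoted above:

```python
import math

task_rate = 100.0             # tasks/second arriving at the queue
per_instance_capacity = 10.0  # requests/second one frontend instance can handle
instances = 1

backlog_growth = task_rate - per_instance_capacity * instances
instances_needed = math.ceil(task_rate / per_instance_capacity)

print(f"Backlog grows by {backlog_growth:.0f} tasks/second with {instances} instance(s)")
print(f"At least {instances_needed} instances are needed to keep up")
```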
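
A handler can also cooperate with this backoff behaviour on purpose: shedding load with a 503 when it is saturated lets Cloud Tasks slow the queue's enforced rate and retry later, instead of piling up slow in-flight requests. A minimal sketch, assuming Flask; the concurrency limit of 10 and the do_work function are illustrative stand-ins for a real load signal and real work.

```python
import threading
from flask import Flask, request

app = Flask(__name__)
_slots = threading.BoundedSemaphore(10)  # process at most 10 tasks at once

def do_work(body):
    print("processing", len(body), "bytes")  # placeholder for the real work

@app.route("/tasks/process", methods=["POST"])
def process_task():
    if not _slots.acquire(blocking=False):
        # Saturated: tell Cloud Tasks to back off. It slows the queue's
        # enforced rate and re-delivers the task later.
        return "Overloaded, try again later", 503
    try:
        do_work(request.get_data())
        return "", 200
    finally:
        _slots.release()
```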
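
In the Cloud Tasks v2 API, the concurrency cap discussed above is exposed as the queue's rate_limits.max_concurrent_dispatches field, and it can be raised without redeploying the workers. The sketch below uses the google-cloud-tasks Python client; the project, location, queue name, and the new limit of 100 are placeholders.

```python
from google.cloud import tasks_v2
from google.protobuf import field_mask_pb2

client = tasks_v2.CloudTasksClient()

# Placeholders: substitute a real project, location, and queue name.
queue = tasks_v2.Queue(
    name=client.queue_path("my-project", "us-central1", "my-queue"),
    rate_limits=tasks_v2.RateLimits(max_concurrent_dispatches=100),
)
mask = field_mask_pb2.FieldMask(paths=["rate_limits.max_concurrent_dispatches"])

updated = client.update_queue(request={"queue": queue, "update_mask": mask})
print("max concurrent dispatches is now", updated.rate_limits.max_concurrent_dispatches)
```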