Scheduling State


The life of a computation with Dask can be described in the following stages:

  1. The user authors a graph using some library, perhaps dask.delayed or dask.dataframe or the submit/map functions on the client. They submit these tasks to the scheduler.
  2. The scheduler assimilates these tasks into its graph of all tasks to track and, as their dependencies become available, asks workers to run each of these tasks in turn.
  3. The worker receives information about how to run the task, communicates with its peer workers to collect data dependencies, and then runs the relevant function on the appropriate data. It reports back to the scheduler that it has finished, keeping the result stored in the worker where it was computed.
  4. The scheduler reports back to the user that the task has completed. If the user desires, it then fetches the data from the worker through the scheduler.
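
For example, the whole cycle can be driven through the futures interface. This is a minimal sketch; the scheduler address is hypothetical:

from distributed import Client

def inc(x):
    return x + 1

client = Client('tcp://192.0.2.1:8786')   # stage 1: connect to the scheduler

future = client.submit(inc, 1)            # stage 1: submit a task

# Stages 2 and 3 happen behind the scenes: the scheduler assigns the task
# to a worker, which runs inc(1) and keeps the result in its memory.

result = future.result()                  # stage 4: fetch the result back
assert result == 2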

Most relevant logic is in tracking tasks as they evolve from newly submitted, to waiting for dependencies, to actively running on some worker, to finished in memory, to garbage collected. Tracking this process, and tracking all effects that this task has on other tasks that might depend on it, is the majority of the complexity of the dynamic task scheduler. This section describes the system used to perform this tracking.

For more abstract information about the policies used by the scheduler, see Scheduling Policies.

The scheduler keeps internal state about several kinds of entities:

  • Individual tasks known to the scheduler
  • Workers connected to the scheduler
  • Clients connected to the scheduler


Everything listed in this page is an internal detail of how Dask operates. It may change between versions and you should probably avoid relying on it in user code (including on any APIs explained here).

Task State

Internally, the scheduler moves tasks between a fixed set of states, notably released, waiting, no-worker, processing, memory, error.

Tasks flow along the following states with the following allowed transitions:

[Figure: Dask scheduler task states and their allowed transitions]
  • Released: Known but not actively computing or in memory
  • Waiting: On track to be computed, waiting on dependencies to arrive in memory
  • No-worker: Ready to be computed, but no appropriate worker exists (for example because of resource restrictions, or because no worker is connected at all).
  • Processing: Actively being computed by one or more workers
  • Memory: In memory on one or more workers
  • Erred: Task computation, or one of its dependencies, has encountered an error
  • Forgotten (not actually a state): Task is no longer needed by any client or dependent task

In addition to the literal state, though, other information needs to be kept and updated about each task. Individual task state is stored in an object named TaskState and consists of the following information:

class distributed.scheduler.TaskState(key, run_spec)[source]

A simple object holding information about a task.

key: str

The key is the unique identifier of a task, generally formed from the name of the function, followed by a hash of the function and arguments, like 'inc-ab31c010444977004d656610d2d421ec'.

prefix: str

The key prefix, used in certain calculations to get an estimate of the task’s duration based on the duration of other tasks in the same “family” (for example 'inc').
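
For illustration, a future's key exposes this structure. The sketch below assumes a running local cluster and the key_split helper from distributed.utils:

from distributed import Client
from distributed.utils import key_split

def inc(x):
    return x + 1

client = Client()                    # implicit LocalCluster, for illustration

future = client.submit(inc, 1)
print(future.key)                    # e.g. 'inc-ab31c010444977004d656610d2d421ec'
print(key_split(future.key))         # the prefix: 'inc'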

run_spec: object

A specification of how to run the task. The type and meaning of this value is opaque to the scheduler, as it is only interpreted by the worker to which the task is sent for execution.

As a special case, this attribute may also be None, in which case the task is “pure data” (for example, a piece of data loaded into the scheduler using Client.scatter()). A “pure data” task cannot be computed again if its value is lost.

priority: tuple

The priority provides each task with a relative ranking which is used to break ties when many tasks are being considered for execution.

This ranking is generally a 2-item tuple. The first (and dominant) item corresponds to when the task was submitted; generally, earlier tasks take precedence. The second item is determined by the client, and is a way to prioritize important tasks within a large graph, such as those on the critical path or those whose completion releases many dependencies. This is explained further in Scheduling Policies.
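
As a toy illustration (ignoring Dask's internal sign conventions), lexicographic tuple comparison is what makes the first item dominate and the second item break ties:

priorities = {
    'b': (0, 1),   # first submission, ranked earlier within its graph
    'a': (0, 2),   # first submission, ranked later within its graph
    'c': (1, 0),   # a later submission
}
# Tuples compare lexicographically: the submission component dominates,
# and the within-graph rank only breaks ties.
print(sorted(priorities, key=priorities.get))   # ['b', 'a', 'c']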

state: str

This task’s current state. Valid states include released, waiting, no-worker, processing, memory, erred and forgotten. If it is forgotten, the task isn’t stored in the tasks dictionary anymore and will probably disappear soon from memory.

dependencies: {TaskState}

The set of tasks this task depends on for proper execution. Only tasks still alive are listed in this set. If, for whatever reason, this task also depends on a forgotten task, the has_lost_dependencies flag is set.

A task can only be executed once all its dependencies have already been successfully executed and have their result stored on at least one worker. This is tracked by progressively draining the waiting_on set.

dependents: {TaskState}

The set of tasks which depend on this task. Only tasks still alive are listed in this set.

This is the reverse mapping of dependencies.

has_lost_dependencies: bool

Whether any of the dependencies of this task has been forgotten. For memory consumption reasons, forgotten tasks are not kept in memory even though they may have dependent tasks. When a task is forgotten, therefore, each of its dependents has their has_lost_dependencies attribute set to True.

If has_lost_dependencies is true, this task cannot go into the “processing” state anymore.

waiting_on: {TaskState}

The set of tasks this task is waiting on before it can be executed. This is always a subset of dependencies. Each time one of the dependencies has finished processing, it is removed from the waiting_on set.

Once waiting_on becomes empty, this task can move from the “waiting” state to the “processing” state (unless one of the dependencies errored out, in which case this task is instead marked “erred”).
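
The following toy model (not scheduler code) shows the bookkeeping this implies:

class ToyTask:
    def __init__(self, key, dependencies=()):
        self.key = key
        self.dependencies = set(dependencies)
        self.waiting_on = set(dependencies)   # starts as a copy of dependencies
        # simplification: a task with no dependencies is immediately runnable
        self.state = 'waiting' if dependencies else 'processing'

def dependency_finished(task, dep):
    # Called when a dependency of `task` reaches the "memory" state.
    task.waiting_on.discard(dep)
    if not task.waiting_on and task.state == 'waiting':
        task.state = 'processing'             # all inputs in memory; ready to run

x = ToyTask('x')
y = ToyTask('y', dependencies=[x])
dependency_finished(y, x)
assert y.state == 'processing'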

waiters: {TaskState}

The set of tasks which need this task to remain alive. This is always a subset of dependents. Each time one of the dependents has finished processing, it is removed from the waiters set.

Once both waiters and who_wants become empty, this task can be released (if it has a non-empty run_spec) or forgotten (otherwise) by the scheduler, and by any workers in who_has.


Counter-intuitively, waiting_on and waiters are not reverse mappings of each other.

who_wants: {ClientState}

The set of clients who want this task’s result to remain alive. This is the reverse mapping of ClientState.wants_what.

When a client submits a graph to the scheduler it also specifies which output tasks it desires, such that their results are not released from memory.

Once a task has finished executing (i.e. moves into the “memory” or “erred” state), the clients in who_wants are notified.

Once both waiters and who_wants become empty, this task can be released (if it has a non-empty run_spec) or forgotten (otherwise) by the scheduler, and by any workers in who_has.

who_has: {WorkerState}

The set of workers who have this task’s result in memory. It is non-empty iff the task is in the “memory” state. There can be more than one worker in this set if, for example, Client.scatter() or Client.replicate() was used.

This is the reverse mapping of WorkerState.has_what.
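
For example (a sketch assuming a running local cluster), scattering with broadcast=True places a copy on every worker, which is visible through the client:

from distributed import Client

client = Client()                        # implicit LocalCluster, for illustration

[future] = client.scatter([42], broadcast=True)
print(client.who_has([future]))          # {key: [addresses of every worker holding it]}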

processing_on: WorkerState (or None)

If this task is in the “processing” state, which worker is currently processing it. Otherwise this is None.

This attribute is kept in sync with WorkerState.processing.

retries: int

The number of times this task can automatically be retried in case of failure. If a task fails executing (the worker returns with an error), its retries attribute is checked. If it is equal to 0, the task is marked “erred”. If it is greater than 0, the retries attribute is decremented and execution is attempted again.
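
For example (a sketch assuming a running local cluster and a distributed version supporting the retries= keyword, as this attribute implies), a task submitted with retries=2 may fail twice before being marked “erred”:

import random
from distributed import Client

client = Client()                 # implicit LocalCluster, for illustration

def flaky():
    if random.random() < 0.5:
        raise OSError("transient failure")
    return 42

future = client.submit(flaky, retries=2)   # up to two automatic retries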

nbytes: int (or None)

The number of bytes, as determined by sizeof, of the result of a finished task. This number is used for diagnostics and to help prioritize work.

exception: object

If this task failed executing, the exception object is stored here. Otherwise this is None.

traceback: object

If this task failed executing, the traceback object is stored here. Otherwise this is None.

exception_blame: TaskState (or None)

If this task or one of its dependencies failed executing, the failed task is stored here (possibly itself). Otherwise this is None.

suspicious: int

The number of times this task has been involved in a worker death.

Some tasks may cause workers to die (such as calling os._exit(0)). When a worker dies, all of the tasks on that worker are reassigned to others. This combination of behaviors can cause a bad task to catastrophically destroy all workers on the cluster, one after another. Whenever a worker dies, we mark each task currently processing on that worker (as recorded by WorkerState.processing) as suspicious.

If a task is involved in three deaths (or some other fixed constant) then we mark the task as erred.

host_restrictions: {hostnames}

A set of hostnames where this task can be run (or None if empty). Usually this is empty unless the task has been specifically restricted to only run on certain hosts. A hostname may correspond to one or several connected workers.

worker_restrictions: {worker addresses}

A set of complete worker addresses where this task can be run (or None if empty). Usually this is empty unless the task has been specifically restricted to only run on certain workers.

Note this is tracking worker addresses, not worker states, since the specific workers may not be connected at this time.

resource_restrictions: {resource: quantity}

Resources required by this task, such as {'gpu': 1} or {'memory': 1e9} (or None if empty). These are user-defined names and are matched against the contents of each WorkerState.resources dictionary.

loose_restrictions: bool

If False, each of host_restrictions, worker_restrictions and resource_restrictions is a hard constraint: if no worker is available satisfying those restrictions, the task cannot go into the “processing” state and will instead go into the “no-worker” state.

If True, the above restrictions are mere preferences: if no worker is available satisfying those restrictions, the task can still go into the “processing” state and be sent for execution to another connected worker.
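
For example (a sketch; it assumes a worker was started with matching declared resources, e.g. dask-worker ... --resources "GPU=1", and a hypothetical scheduler address):

from distributed import Client

client = Client('tcp://192.0.2.1:8786')    # hypothetical scheduler address

def train(data):
    return data

# Only workers advertising a free 'GPU' unit may run this task; if none is
# connected, the task sits in the "no-worker" state (the restriction is hard
# unless the task was submitted with loose restrictions).
future = client.submit(train, 1, resources={'GPU': 1})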

The scheduler keeps track of all the TaskState objects (those not in the “forgotten” state) using several containers:

tasks: {str: TaskState}

A dictionary mapping task keys (usually strings) to TaskState objects. Task keys are how information about tasks is communicated between the scheduler and clients, or the scheduler and workers; this dictionary is then used to find the corresponding TaskState object.

unrunnable: {TaskState}

A set of TaskState objects in the “no-worker” state. These tasks already have all their dependencies satisfied (their waiting_on set is empty), and are waiting for an appropriate worker to join the network before computing.

Worker State

Each worker’s current state is stored in a WorkerState object. This information is involved in deciding which worker to run a task on.

class distributed.scheduler.WorkerState(worker, ncores)[source]

A simple object holding information about a worker.


worker_key: str

This worker’s unique key. This can be its connected address (such as 'tcp://') or an alias (such as 'alice').

processing: {TaskState: cost}

A dictionary of tasks that have been submitted to this worker. Each task state is associated with the expected cost in seconds of running that task, summing both the task’s expected computation time and the expected communication time of its result.

Multiple tasks may be submitted to a worker in advance and the worker will run them eventually, depending on its execution resources (but see Work Stealing).

All the tasks here are in the “processing” state.

This attribute is kept in sync with TaskState.processing_on.

has_what: {TaskState}

The set of tasks which currently reside on this worker. All the tasks here are in the “memory” state.

This is the reverse mapping of TaskState.who_has.

nbytes: int

The total memory size, in bytes, used by the tasks this worker holds in memory (i.e. the tasks in this worker’s has_what).

ncores: int

The number of CPU cores made available on this worker.

resources: {str: Number}

The available resources on this worker like {'gpu': 2}. These are abstract quantities that constrain certain tasks from running at the same time on this worker.

used_resources: {str: Number}

The sum of each resource used by all tasks allocated to this worker. The numbers in this dictionary can only be less than or equal to those in this worker’s resources.

occupancy: Number

The total expected runtime, in seconds, of all tasks currently processing on this worker. This is the sum of all the costs in this worker’s processing dictionary.
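
As a toy illustration (not scheduler code), occupancy is simply the sum of the cost estimates in the processing dictionary:

processing = {
    'inc-1': 0.5,    # stand-in for a TaskState: expected compute + comm time, seconds
    'inc-2': 0.25,
}
occupancy = sum(processing.values())
print(occupancy)     # 0.75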

In addition to individual worker state, the scheduler maintains two containers to help with scheduling tasks:

Scheduler.saturated: {WorkerState}

A set of workers whose computing power (as measured by WorkerState.ncores) is fully exploited by processing tasks, and whose current occupancy is a lot greater than the average.

Scheduler.idle: {WorkerState}

A set of workers whose computing power is not fully exploited. These workers are assumed to be able to start computing new tasks immediately.

These two sets are disjoint. Also, some workers may be neither “idle” nor “saturated”. “Idle” workers will be preferred when deciding a suitable worker to run a new task on. Conversely, “saturated” workers may see their workload lightened through Work Stealing.

Client State

Information about each individual client is kept in a ClientState object:

class distributed.scheduler.ClientState(client)[source]

A simple object holding information about a client.

client_key: str

A unique identifier for this client. This is generally an opaque string generated by the client itself.

wants_what: {TaskState}

A set of tasks this client wants kept in memory, so that it can download their results when desired. This is the reverse mapping of TaskState.who_wants.

Tasks are typically removed from this set when the corresponding object in the client’s space (for example a Future or a Dask collection) gets garbage-collected.
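
For example (a sketch assuming a running local cluster):

from distributed import Client

client = Client()                      # implicit LocalCluster, for illustration

future = client.submit(sum, [1, 2, 3])
# While `future` exists, this client appears in the task's who_wants and the
# task appears in the client's wants_what, so the result is kept in memory.

del future
# Once the Future is garbage-collected, the client releases the key; with no
# remaining waiters or interested clients, the task can be released or forgotten.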

Understanding a Task’s Flow

As seen above, there are numerous pieces of information pertaining to task and worker state, and some of them can be computed, updated or removed during a task’s transitions.

The table below shows which state variables are set, depending on the task’s state. Cells with a check mark (✓) indicate the variable must be set in the given task state; cells with a question mark (?) indicate the variable may be set in the given task state.

State variable                    Released  Waiting  No-worker  Processing  Memory  Erred
TaskState.host_restrictions          ?         ?         ?          ?          ?       ?
TaskState.worker_restrictions        ?         ?         ?          ?          ?       ?
TaskState.resource_restrictions      ?         ?         ?          ?          ?       ?
TaskState.loose_restrictions         ?         ?         ?          ?          ?       ?
TaskState.nbytes (1)                 ?         ?         ?          ?          ✓       ?
TaskState.exception (2)                                                                ?
TaskState.traceback (2)                                                                ?
TaskState.retries                    ?         ?         ?          ?          ?       ?
TaskState.suspicious                 ?         ?         ?          ?          ?       ?


  1. TaskState.nbytes: this attribute can be known once a task has been computed, even if it has later been released.
  2. TaskState.exception and TaskState.traceback should be looked up on the TaskState.exception_blame task.

The table below shows which worker state variables are updated on each task state transition.

Transition              Affected worker state
released → waiting      occupancy, idle, saturated
waiting → processing    occupancy, idle, saturated, used_resources
waiting → memory        idle, saturated, nbytes
processing → memory     occupancy, idle, saturated, used_resources, nbytes
processing → erred      occupancy, idle, saturated, used_resources
processing → released   occupancy, idle, saturated, used_resources
memory → released       nbytes
memory → forgotten      nbytes


Another way of understanding this table is to observe that entering or exiting a specific task state updates a well-defined set of worker state variables. For example, entering and exiting the “memory” state updates WorkerState.nbytes.


Every transition between states is a separate method in the scheduler. These task transition functions are prefixed with transition, followed by the names of the start and finish task states, like the following.

def transition_released_waiting(self, key):

def transition_processing_memory(self, key):

def transition_processing_erred(self, key):

These functions each have three effects.

  1. They perform the necessary transformations on the scheduler state (the 20 dicts/lists/sets) to move one key between states.
  2. They return a dictionary of recommended {key: state} transitions to enact directly afterwards on other keys. For example after we transition a key into memory we may find that many waiting keys are now ready to transition from waiting to a ready state.
  3. Optionally they include a set of validation checks that can be turned on for testing.
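
For illustration, here is a sketch (not the real implementation) of the shape of such a function, showing effects 1 and 2 using the state variables described above:

def transition_processing_memory(self, key):
    ts = self.tasks[key]
    ws = ts.processing_on

    # 1. Transform scheduler state: the task leaves "processing" and
    #    enters "memory" on the worker that computed it.
    del ws.processing[ts]
    ts.processing_on = None
    ts.who_has.add(ws)
    ws.has_what.add(ts)
    ts.state = 'memory'

    # 2. Recommend follow-up transitions for dependents that became ready.
    recommendations = {}
    for dep in ts.dependents:
        dep.waiting_on.discard(ts)
        if not dep.waiting_on and dep.state == 'waiting':
            recommendations[dep.key] = 'processing'
    return recommendations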

Rather than call these functions directly we call the central function transition:

def transition(self, key, final_state):
    """ Transition key to the suggested state """

This transition function finds the appropriate path from the current to the final state. It also serves as a central point for logging and diagnostics.

Often we want to enact several transitions at once or want to continually respond to new transitions recommended by initial transitions until we reach a steady state. For that we use the transitions function (note the plural s).

def transitions(self, recommendations):
    recommendations = recommendations.copy()
    while recommendations:
        key, finish = recommendations.popitem()
        new = self.transition(key, finish)
        recommendations.update(new)  # fold follow-up recommendations back in

This function runs transition, takes the recommendations and runs them as well, repeating until no further task-transitions are recommended.


Transitions occur from stimuli, which are state-changing messages to the scheduler from workers or clients. The scheduler responds to the following stimuli:

  • Workers
    • Task finished: A task has completed on a worker and is now in memory
    • Task erred: A task ran and erred on a worker
    • Task missing data: A task tried to run but was unable to find necessary data on other workers
    • Worker added: A new worker was added to the network
    • Worker removed: An existing worker left the network
  • Clients
    • Update graph: The client sends more tasks to the scheduler
    • Release keys: The client no longer desires the result of certain keys

Stimuli functions are prepended with the text stimulus, and take a variety of keyword arguments from the message as in the following examples:

def stimulus_task_finished(self, key=None, worker=None, nbytes=None,
                           type=None, compute_start=None, compute_stop=None,
                           transfer_start=None, transfer_stop=None):

def stimulus_task_erred(self, key=None, worker=None,
                        exception=None, traceback=None):

These functions change some non-essential administrative state and then call transition functions.

Note that there are several other non-state-changing messages that we receive from the workers and clients, such as messages requesting information about the current state of the scheduler. These are not considered stimuli.


class distributed.scheduler.Scheduler(center=None, loop=None, delete_interval=500, synchronize_worker_interval=60000, services=None, allowed_failures=3, extensions=None, validate=False, scheduler_file=None, security=None, **kwargs)[source]

Dynamic distributed task scheduler

The scheduler tracks the current state of workers, data, and computations. The scheduler listens for events and responds by controlling workers appropriately. It continuously tries to use the workers to execute an ever growing dask graph.

All events are handled quickly, in linear time with respect to their input (which is often of constant size) and generally within a millisecond. To accomplish this the scheduler tracks a lot of state. Every operation maintains the consistency of this state.

The scheduler communicates with the outside world through Comm objects. It maintains a consistent and valid view of the world even when listening to several clients at once.

A Scheduler is typically started either with the dask-scheduler executable:

$ dask-scheduler
Scheduler started at

Or it is started implicitly, within a LocalCluster, when a Client is created without connection information:

>>> c = Client()  
>>> c.cluster.scheduler  

Users typically do not interact with the scheduler directly but rather with the client object Client.


The scheduler contains the following state variables. Each variable is listed along with what it stores and a brief description.

  • tasks: {task key: TaskState}
    Tasks currently known to the scheduler
  • unrunnable: {TaskState}
    Tasks in the “no-worker” state
  • workers: {worker key: WorkerState}
    Workers currently connected to the scheduler
  • idle: {WorkerState}:
    Set of workers that are not fully utilized
  • saturated: {WorkerState}:
    Set of workers that are over-utilized
  • worker_info: {worker: {str: data}}:
    Information about each worker
  • host_info: {hostname: dict}:
    Information about each worker host
  • clients: {client key: ClientState}
    Clients currently connected to the scheduler
  • services: {str: port}:
    Other services running on this scheduler, like Bokeh
  • loop: IOLoop:
    The running Tornado IOLoop
  • client_comms: {client key: Comm}
    For each client, a Comm object used to receive task requests and report task status updates.
  • worker_comms: {worker key: Comm}
    For each worker, a Comm object from which we both accept stimuli and report results
  • task_duration: {key-prefix: time}
    Time we expect certain functions to take, e.g. {'sum': 0.25}
  • coroutines: [Futures]:
    A list of active futures that control operation
add_client(*args, **kwargs)[source]

Add client to network

We listen to all future messages from this Comm.

add_keys(comm=None, worker=None, keys=())[source]

Learn that a worker has certain keys

This should not be used in practice and is mostly here for legacy reasons. However, it is sent by workers from time to time.


add_plugin(plugin)[source]

Add external plugin to scheduler


add_worker(comm=None, address=None, keys=(), ncores=None, name=None, resolve_address=True, nbytes=None, now=None, resources=None, host_info=None, **info)[source]

Add a new worker to the cluster

broadcast(*args, **kwargs)[source]

Broadcast message to workers, return all results

cancel_key(key, client, retries=5, force=False)[source]

Cancel a particular key and all dependents

cleanup(*args, **kwargs)[source]

Clean up queues and coroutines, prepare to stop

client_releases_keys(keys=None, client=None)[source]

Remove keys from client desired list

close(*args, **kwargs)[source]

Send cleanup signal to all coroutines then wait until finished


Close all active Comms.

close_worker(*args, **kwargs)[source]

Remove a worker from the cluster

This both removes the worker from our local state and also sends a signal to the worker to shut down. This works regardless of whether or not the worker has a nanny process restarting it.

coerce_address(addr, resolve=True)[source]

Coerce possible input addresses to canonical form. resolve can be disabled for testing with fake hostnames.

Handles strings, tuples, or aliases.


coerce_hostname(host)[source]

Coerce the hostname of a worker.


decide_worker(ts)[source]

Decide on a worker for task ts. Return a WorkerState.

feed(*args, **kwargs)[source]

Provides a data Comm to external requester

Caution: this runs arbitrary Python code on the scheduler. This should eventually be phased out. It is mostly used by diagnostics.

finished(*args, **kwargs)[source]

Wait until all coroutines have ceased

gather(*args, **kwargs)[source]

Collect data from workers

get_comm_cost(ts, ws)[source]

Get the estimated communication cost (in seconds) to compute the task on the given worker.

get_task_duration(ts, default=0.5)[source]

Get the estimated computation cost of the given task (not including any communication cost).



get_worker_service_addr(worker, service_name)[source]

Get the (host, port) address of the named service on the worker. Returns None if the service doesn’t exist.

handle_client(*args, **kwargs)[source]

Listen and respond to messages from clients

This runs once per Client Comm or Queue.

See also: handle_worker, the equivalent coroutine for workers.

handle_long_running(key=None, worker=None, compute_duration=None)[source]

A task has seceded from the thread pool

We stop the task from being stolen in the future, and change task duration accounting as if the task has stopped.

handle_worker(*args, **kwargs)[source]

Listen to responses from a single worker

This is the main loop for scheduler-worker interaction

See also: handle_client, the equivalent coroutine for clients.

identity(comm=None)[source]

Basic information about ourselves and our cluster

rebalance(*args, **kwargs)[source]

Rebalance keys so that each worker stores roughly equal bytes


This orders the workers by what fraction of bytes of the existing keys they have. It walks down this list from most to least. At each worker it takes the largest results it can find and sends them to the least occupied worker, until either the sender or the recipient is at the average expected load.

reevaluate_occupancy(*args, **kwargs)[source]

Periodically reassess task duration time

The expected duration of a task can change over time. Unfortunately we don’t have a good constant-time way to propagate the effects of these changes out to the summaries that they affect, like the total expected runtime of each of the workers, or what tasks are stealable.

In this coroutine we walk through all of the workers and re-align their estimates with the current state of tasks. We do this periodically rather than at every transition, and we only do it if the scheduler process isn’t under load (using psutil.Process.cpu_percent()). This lets us avoid this fringe optimization when we have better things to think about.


remove_client(client=None)[source]

Remove client from network


remove_plugin(plugin)[source]

Remove external plugin from scheduler

remove_worker(comm=None, address=None, safe=False, close=True)[source]

Remove worker from cluster

We do this when a worker reports that it plans to leave or when it appears to be unresponsive. This may send its tasks back to a released state.

replicate(*args, **kwargs)[source]

Replicate data throughout cluster

This performs a tree copy of the data throughout the network individually on each piece of data.


keys: Iterable

list of keys to replicate

n: int

Number of replications we expect to see within the cluster

branching_factor: int, optional

The number of workers that can copy data in each generation. The larger the branching factor, the more data we copy in a single step, but the more a given worker risks being swamped by data requests.
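
For example (a sketch assuming a running local cluster):

from distributed import Client

client = Client()                      # implicit LocalCluster, for illustration

futures = client.scatter(list(range(10)))
# Ask for two copies of each piece of data, copying tree-wise with at most
# four workers sending in each generation.
client.replicate(futures, n=2, branching_factor=4)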

report(msg, ts=None, client=None)[source]

Publish updates to all listening Queues and Comms

If the message contains a key then we only send the message to those comms that care about the key.

reschedule(key=None, worker=None)[source]

Reschedule a task

Things may have shifted and this task may now be better suited to run elsewhere

restart(*args, **kwargs)[source]

Restart all workers. Reset local state.

run_function(stream, function, args=(), kwargs={})[source]

Run a function within this process

See also: Client.run_on_scheduler.


scatter(*args, **kwargs)[source]

Send data out to workers

send_task_to_worker(worker, key)[source]

Send a single computational task to a worker

start(addr_or_port=8786, start_queues=True)[source]

Clear out old state and restart all running coroutines


start_ipython(comm=None)[source]

Start an IPython kernel

Returns Jupyter connection info dictionary.

stimulus_cancel(comm, keys=None, client=None, force=False)[source]

Stop execution on a list of keys

stimulus_missing_data(cause=None, key=None, worker=None, ensure=True, **kwargs)[source]

Mark that certain keys have gone missing. Recover.

stimulus_task_erred(key=None, worker=None, exception=None, traceback=None, **kwargs)[source]

Mark that a task has erred on a particular worker

stimulus_task_finished(key=None, worker=None, **kwargs)[source]

Mark that a task has finished execution on a particular worker


story(*keys)[source]

Get all transitions that touch one of the input keys

transition(key, finish, *args, **kwargs)[source]

Transition a key from its current state to the finish state

Returns: Dictionary of recommendations for future transitions

See also: transitions, the transitive version of this function.


>>> self.transition('x', 'waiting')
{'x': 'processing'}



transitions(recommendations)[source]

Process transitions until none are left

This includes feedback from previous transitions and continues until we reach a steady state

update_data(comm=None, who_has=None, nbytes=None, client=None)[source]

Learn that new data has entered the network from an external source



update_graph(client=None, tasks=None, keys=None, dependencies=None, restrictions=None, priority=None, loose_restrictions=None, resources=None, submitting_task=None, retries=None)[source]

Add new computations to the internal dask graph

This happens whenever the Client calls submit, map, get, or compute.


valid_workers(ts)[source]

Return set of currently valid workers for key

If all workers are valid then this returns True. This checks the following tracked state:

  • worker_restrictions
  • host_restrictions
  • resource_restrictions
worker_objective(ts, ws)[source]

Objective function to determine which worker should get the task

Minimize expected start time. If there is a tie, break it using the amount of data stored on each worker.

worker_send(worker, msg)[source]

Send message to worker

This also handles connection failures by adding a callback to remove the worker on the next cycle.


workers_list(workers)[source]

List of qualifying workers

Takes a list of worker addresses or hostnames. Returns a list of all worker addresses that match.


workers_to_close(memory_factor=2)[source]

Find workers that we can close with low cost

This returns a list of workers that are good candidates to retire. These workers are not running anything and are storing relatively little data compared to their peers. If all workers are idle then we still maintain enough workers to have enough RAM to store our data, with a comfortable buffer.

This is for use with systems like distributed.deploy.adaptive.


memory_factor: Number

Amount of extra space we want to have for our stored data. Defaults to 2, meaning we want to have twice as much memory as we currently have data.


to_close: list of workers that are OK to close

distributed.scheduler.decide_worker(ts, all_workers, valid_workers, objective)[source]

Decide which worker should take task ts.

We choose the worker that has the data on which ts depends.

If several workers have dependencies then we choose the less-busy worker.

Optionally provide valid_workers of where jobs are allowed to occur (if all workers are allowed to take the task, pass True instead).

If the task requires data communication because no eligible worker has all the dependencies already, then we choose to minimize the number of bytes sent between workers. This is determined by calling the objective function.
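
An illustrative sketch (not the real code) of this logic, using toy stand-ins for the task and worker state objects:

def toy_decide_worker(ts, all_workers, valid_workers, objective):
    # Prefer workers that already hold some dependency of ts in memory.
    candidates = {ws for dep in ts.dependencies for ws in dep.who_has}
    if valid_workers is not True:
        candidates &= valid_workers                  # honor restrictions
    if not candidates:
        candidates = all_workers if valid_workers is True else valid_workers
    # Among the remaining candidates, minimize the objective function
    # (e.g. expected start time, then bytes that would need to be sent).
    return min(candidates, key=objective)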