Class Adaptive

Adaptively allocate workers based on scheduler load. Intended to be used as a
superclass rather than instantiated directly.

Declaration

class Adaptive(AdaptiveCore)

Documentation

Contains logic to dynamically resize a Dask cluster based on current use.
This class needs to be paired with a system that can create and destroy
Dask workers using a cluster resource manager. Typically it is built into
existing solutions rather than used directly by users.
It is most commonly used from the ``.adapt(...)`` method of various Dask
cluster classes.

Examples

This is commonly used from existing Dask classes, like KubeCluster

>>> from dask_kubernetes import KubeCluster
>>> cluster = KubeCluster()
>>> cluster.adapt(minimum=10, maximum=100)

Alternatively you can use it from your own Cluster class by subclassing
from Dask's Cluster superclass

>>> from distributed.deploy import Cluster
>>> class MyCluster(Cluster):
...     def scale_up(self, n):
...         """ Bring worker count up to n """
...     def scale_down(self, workers):
...         """ Remove worker addresses from cluster """

>>> cluster = MyCluster()
>>> cluster.adapt(minimum=10, maximum=100)

Notes

Subclasses can override :meth:`Adaptive.target` and
:meth:`Adaptive.workers_to_close` to control when the cluster should be
resized. The default implementation checks if there are too many tasks
per worker or too little memory available (see
:meth:`Scheduler.adaptive_target`).
The values for ``interval``, ``minimum``, ``maximum``, ``wait_count``, and
``target_duration`` can be specified in the dask config under the
``distributed.adaptive`` key.
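The resize decision described above ultimately clamps a load-derived worker
count between the configured minimum and maximum. The helper below is a
hypothetical, synchronous sketch of that clamping step; ``clamp_target`` is
not part of distributed's API, and the real :meth:`Adaptive.target` obtains
the desired count asynchronously from :meth:`Scheduler.adaptive_target`.

```python
def clamp_target(desired: int, minimum: int, maximum: int) -> int:
    """Clamp a load-derived worker count to the [minimum, maximum] range.

    Hypothetical stand-in for the decision made by Adaptive.target:
    the scheduler proposes `desired` from task load and memory use,
    and adaptive scaling keeps the result within the configured bounds.
    """
    return max(minimum, min(maximum, desired))


# With minimum=10, maximum=100, as in the .adapt() examples above:
assert clamp_target(3, 10, 100) == 10     # light load: stay at the floor
assert clamp_target(42, 10, 100) == 42    # within bounds: follow the load
assert clamp_target(500, 10, 100) == 100  # heavy load: cap at the ceiling
```

The same bounds apply however the desired count is computed, which is why
subclasses overriding :meth:`Adaptive.target` only need to produce a raw
estimate.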
