This class contains common functionality for Dask Cluster manager classes.
To implement this class, you must provide:
1. A ``scheduler_comm`` attribute, which is a connection to the scheduler
following the ``distributed.core.rpc`` API.
2. An implementation of ``scale``, which takes an integer and scales the
cluster to that many workers; if scaling is unsupported, set
``_supports_scaling`` to ``False`` instead (a sketch follows this list)
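A minimal sketch of that contract, assuming a scheduler is already running at
``scheduler_address`` and using throwaway ``dask-worker`` subprocesses as
workers (``SubprocessCluster`` is a made-up name; error handling, cleanup,
and async startup are omitted)::

    import subprocess

    from distributed.core import rpc
    from distributed.deploy.cluster import Cluster

    class SubprocessCluster(Cluster):
        def __init__(self, scheduler_address):
            self.scheduler_address = scheduler_address
            # Requirement 1: an rpc connection to the scheduler
            self.scheduler_comm = rpc(scheduler_address)
            self._worker_procs = []
            super().__init__(asynchronous=False)

        def scale(self, n):
            # Requirement 2: bring the worker count up or down to ``n``
            while len(self._worker_procs) < n:
                self._worker_procs.append(
                    subprocess.Popen(["dask-worker", self.scheduler_address])
                )
            while len(self._worker_procs) > n:
                self._worker_procs.pop().kill()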
In return, you get the following:
1. A standard ``__repr__``
2. A live IPython widget
3. Adaptive scaling
4. Integration with dask-labextension
5. A ``scheduler_info`` attribute which contains an up-to-date copy of
``Scheduler.identity()``, which is used for much of the above (see the
example after this list)
6. Methods to gather logs
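For example, once the cluster is connected you can inspect the cached
identity (a sketch; ``'workers'`` is assumed here to be one of the keys that
``Scheduler.identity()`` returns):

>>> cluster.scheduler_info['workers']  # addresses and metadata of live workers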
``adapt`` turns on adaptive scaling; for keyword arguments see
``dask.distributed.Adaptive``.

>>> cluster.adapt(minimum=0, maximum=10, interval='500ms')
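A fuller usage sketch with one concrete cluster manager (``LocalCluster``
here; the bounds are illustrative)::

    from dask.distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=0)   # begin with no workers
    cluster.adapt(minimum=0, maximum=10)  # grow and shrink with the workload
    client = Client(cluster)              # submitted work now drives scaling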
``get_logs`` returns logs for the cluster manager, the scheduler, and workers.

Parameters
----------
cluster : bool
    Whether or not to collect logs for the cluster manager
scheduler : bool
    Whether or not to collect logs for the scheduler
workers : bool or Iterable[str], optional
    A list of worker addresses to select. Defaults to all workers if
    ``True`` or no workers if ``False``

Returns
-------
logs : dict
    A dictionary of logs, with one item for the scheduler and one for
    each worker
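For example (a sketch; the exact dictionary keys depend on the cluster
manager):

>>> logs = cluster.get_logs(workers=False)  # skip the worker logs
>>> for name, text in logs.items():
...     print(name)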
``scale`` scales the cluster to ``n`` workers.

Parameters
----------
n : int
    Target number of workers
>>> cluster.scale(10) # scale cluster to ten workers