-
Controls an object on a remote worker
[distributed.actor.Actor]
-
Future to an actor's method call
[distributed.actor.ActorFuture]
-
Adaptively allocate workers based on scheduler load (a superclass)
[distributed.deploy.adaptive.Adaptive]
-
Connect to and submit computation to a Dask cluster (see the sketches after this listing)
[distributed.client.Client]
-
Distributed Centralized Event equivalent to asyncio.Event
[distributed.event.Event]
-
Deprecated: see Client
[distributed.client.Executor]
-
A remotely running computation
[distributed.client.Future]
-
Create local Scheduler and Workers
[distributed.deploy.local.LocalCluster]
-
Distributed Centralized Lock
[distributed.lock.Lock]
-
A process to manage worker processes
[distributed.nanny.Nanny]
-
A Worker Plugin to pip install a set of packages
[distributed.diagnostics.plugin.PipInstall]
-
Publish data with the Publish-Subscribe pattern
[distributed.pubsub.Pub]
-
Distributed Queue
[distributed.queues.Queue]
-
Reschedule this task
[distributed.worker.Reschedule]
-
Dynamic distributed task scheduler
[distributed.scheduler.Scheduler]
-
Interface to extend the Scheduler
[distributed.diagnostics.plugin.SchedulerPlugin]
-
Security configuration for a Dask cluster.
[distributed.security.Security]
-
Distributed semaphore to limit concurrent access to a shared resource
[distributed.semaphore.Semaphore]
-
Cluster that requires a full specification of workers
[distributed.deploy.spec.SpecCluster]
-
Subscribe to a Publish/Subscribe topic
[distributed.pubsub.Sub]
-
Distributed Global Variable
[distributed.variable.Variable]
-
Worker node in a Dask distributed cluster
[distributed.worker.Worker]
-
Interface to extend the Worker
[distributed.diagnostics.plugin.WorkerPlugin]
-
Return futures in the order in which they complete
[distributed.client.as_completed]
-
Collect task metadata within a context block
[distributed.client.get_task_metadata]
-
Collect task stream within a context block
[distributed.client.get_task_stream]
-
Gather performance report
[distributed.client.performance_report]
-
Conveniently interact with a remote server
[distributed.core.rpc]
SSHCluster(hosts: List[str] = None, connect_options: Union[List[dict], dict] = {}, worker_options: dict = {}, scheduler_options: dict = {}, worker_module: str = "distributed.cli.dask_worker", remote_python: Union[str, List[str]] = None, **kwargs)
Deploy a Dask cluster using SSH
[distributed.deploy.ssh.SSHCluster]
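A minimal sketch of the core workflow covered by Client, LocalCluster, Future, wait, and as_completed above. The worker counts and the `square` task are illustrative assumptions, not part of the listing.

```python
from distributed import Client, LocalCluster, as_completed, wait

def square(x):
    return x ** 2

if __name__ == "__main__":
    # Start an in-process scheduler and two workers for illustration
    cluster = LocalCluster(n_workers=2, threads_per_worker=1)
    client = Client(cluster)

    # submit() returns Futures immediately; the work runs on the workers
    futures = [client.submit(square, i) for i in range(10)]

    # Block until every future is finished
    wait(futures)

    # as_completed() yields futures in the order they finish
    total = sum(f.result() for f in as_completed(futures))
    print(total)  # 285

    client.close()
    cluster.close()
```

The coordination primitives (Lock, Variable, and similarly Queue, Event, Semaphore) can be used both from the client process and from inside tasks. A sketch under the assumption that tasks obtain the worker's client via get_client(); the names "counter" and "counter-lock" are arbitrary.

```python
from distributed import Client, Lock, Variable, get_client

def guarded_increment(var_name, lock_name):
    # Runs on a worker: take a cluster-wide Lock, then update a shared Variable
    client = get_client()
    var = Variable(var_name, client=client)
    with Lock(lock_name, client=client):
        var.set(var.get() + 1)

if __name__ == "__main__":
    client = Client()  # assumes a reachable scheduler, or starts a local cluster
    counter = Variable("counter", client=client)
    counter.set(0)
    # pure=False keeps the ten identical calls from being deduplicated
    futures = [
        client.submit(guarded_increment, "counter", "counter-lock", pure=False)
        for _ in range(10)
    ]
    client.gather(futures)
    print(counter.get())  # 10
    client.close()
```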
Return a client if one has started
[distributed.client.default_client]
Run tasks at least once, even if we release the futures
[distributed.client.fire_and_forget]
Future objects in a collection
[distributed.client.futures_of]
get_client(address=None, timeout=None, resolve_address=True)
Get a client while within a task (see the sketch at the end of this listing)
[distributed.worker.get_client]
Get version information or return default if unable to do so.
[distributed._version.get_versions]
Get the worker currently running this task
[distributed.worker.get_worker]
[distributed.worker_client.local_client]
progress(*futures, notebook=None, multi=True, complete=True, **kwargs)
Track progress of futures
[distributed.diagnostics.progressbar.progress]
Have this thread rejoin the ThreadPoolExecutor
[distributed.threadpoolexecutor.rejoin]
Have this task secede from the worker's thread pool
[distributed.worker.secede]
sync(loop, func, *args, callback_timeout=None, **kwargs)
Run a coroutine in an event loop that runs in a separate thread
[distributed.utils.sync]
wait(fs, timeout=None, return_when="ALL_COMPLETED")
Wait until all/any futures are finished
[distributed.client.wait]
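A sketch of launching tasks from within tasks with the worker-side helpers above: get_client obtains the client of the worker running the task, secede frees the worker thread while the task blocks on subtasks, and rejoin re-enters the thread pool. The Fibonacci task and the no-argument Client() (which starts a local cluster when no scheduler address is given) are illustrative assumptions.

```python
from distributed import Client, get_client, rejoin, secede

def fib(n):
    if n < 2:
        return n
    # Submit subtasks through the client of the worker running this task
    client = get_client()
    a = client.submit(fib, n - 1)
    b = client.submit(fib, n - 2)
    secede()   # leave the worker's thread pool while blocked on subtasks
    result = a.result() + b.result()
    rejoin()   # rejoin the pool before returning
    return result

if __name__ == "__main__":
    client = Client()
    print(client.submit(fib, 10).result())  # 55
    client.close()
```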