import ROOT
from dask.distributed import LocalCluster, Client

# Point RDataFrame to the Dask-backed distributed implementation
RDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame


def create_connection():
    """
    Set up a connection to a Dask cluster. Two ingredients are needed:
    1. Creating a cluster object that represents the computing resources. This
       can be done in various ways depending on the type of resources at your
       disposal. To use only the local machine (e.g. your laptop), a
       `LocalCluster` object can be used. This step can be skipped if you have
       access to an existing Dask cluster; in that case, the cluster
       administrator should provide you with a URL to connect to the cluster in
       step 2. More options for cluster creation can be found in the Dask docs
       at http://distributed.dask.org/en/stable/api.html#cluster .
    2. Creating a Dask client object that connects to the cluster. This accepts
       directly the object previously created. In case the cluster was set up
       externally, you need to provide an endpoint URL to the client, e.g.
       'https://myscheduler.domain:8786' (see the sketch below).
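
    If the cluster is managed externally, only the client creation in step 2 is
    needed. A minimal sketch of that case, where the scheduler address is just
    a placeholder to be replaced with the one given by your administrator:

        from dask.distributed import Client
        client = Client("https://myscheduler.domain:8786")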

    Through Dask, you can connect to various types of cluster resources. For
    example, you can connect together a set of machines through SSH and use
    them to run your computations. This is done through the `SSHCluster` class.
    For example:

        from dask.distributed import SSHCluster
        cluster = SSHCluster(
            # A list with machine host names: the first name will be used as
            # the scheduler, the following names will become workers.
            hosts=["machine1", "machine2", "machine3"],
            # A dictionary of options for each worker node; here we set the
            # number of cores to be used on each node.
            worker_options={"nprocs": 4},
        )

    Another common use case is interfacing Dask with a batch system like
    HTCondor or Slurm. A separate package called dask-jobqueue
    (https://jobqueue.dask.org) extends the available Dask cluster classes to
    enable running Dask computations as batch jobs. In this case, the cluster
    object usually receives the parameters that would be written in the job
    description file. For example:

        from dask_jobqueue import HTCondorCluster
        cluster = HTCondorCluster(
            # Resources requested by each batch job (values are illustrative).
            cores=1,
            memory="2GB",
            disk="1GB",
        )
        # Use the scale method to send as many jobs as needed
        cluster.scale(4)
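
    For Slurm, dask-jobqueue provides the analogous `SLURMCluster` class; a
    rough sketch, again with illustrative resource values:

        from dask_jobqueue import SLURMCluster
        cluster = SLURMCluster(cores=1, memory="2GB")
        cluster.scale(4)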

    In this tutorial, a cluster object is created for the local machine, using
    multiprocessing (processes=True) on 4 workers (n_workers=4), each using
    only 1 core (threads_per_worker=1).
    """
    cluster = LocalCluster(n_workers=4, threads_per_worker=1, processes=True)
    client = Client(cluster)
    return client


if __name__ == "__main__":

    # Create the connection to the Dask cluster running on the local machine
    connection = create_connection()

    # Create an RDataFrame that will use Dask as a backend for computations
    df = RDataFrame(1000, daskclient=connection)

    # Set the random seed and define two columns of the dataset with random numbers
    ROOT.gRandom.SetSeed(1)
    df_1 = df.Define("gaus", "gRandom->Gaus(10, 1)").Define("exponential", "gRandom->Exp(10)")

    # Book a histogram for each column
    h_gaus = df_1.Histo1D(("gaus", "Normal distribution", 50, 0, 30), "gaus")
    h_exp = df_1.Histo1D(("exponential", "Exponential distribution", 50, 0, 30), "exponential")

    # Draw the histograms side by side on a canvas
    c = ROOT.TCanvas("distrdf002", "distrdf002", 800, 400)
    c.Divide(2, 1)
    c.cd(1)
    h_gaus.DrawCopy()
    c.cd(2)
    h_exp.DrawCopy()

    # Save the canvas to a file and report the output location
    c.SaveAs("distrdf002_dask_connection.png")
    print("Saved figure to distrdf002_dask_connection.png")