Multi-threading

ROOT can benefit from multi-threading by doing work in parallel. If the parallelism is managed internally by ROOT, we speak of implicit multi-threading. If, on the other hand, the application itself creates threads and distributes work among them, the multi-threading is explicit. Below we describe how to use ROOT in either case.

Implicit multi-threading

Some parts of ROOT are able to spread work over multiple threads in order to exploit the multiple cores of a machine. This happens under the hood, and you benefit from the speedup “for free”.

By default, ROOT runs in sequential mode; in order to activate implicit multi-threading (IMT), one must call ROOT::EnableImplicitMT(). ROOT::DisableImplicitMT() disables IMT, i.e. it goes back to sequential mode.
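
For example, here is a minimal sketch (to be run at the ROOT prompt or in a macro) of choosing the pool size explicitly and querying the IMT state; the value 4 is just an illustration:

// Request a pool of 4 worker threads; EnableImplicitMT() with no argument uses all cores
ROOT::EnableImplicitMT(4);

// Query the current IMT state
bool imtEnabled       = ROOT::IsImplicitMTEnabled(); // true
unsigned int poolSize = ROOT::GetThreadPoolSize();   // 4 in this example

// ... IMT-enabled work goes here ...

// Go back to sequential mode
ROOT::DisableImplicitMT();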

The most prominent IMT example in ROOT is RDataFrame, which is able to automatically run its computation on all cores:

C++:

// This activates implicit multi-threading
ROOT::EnableImplicitMT();

// The analysis below runs in parallel
ROOT::RDataFrame rdf("mytree", "myfile.root");
auto h = rdf.Filter("x > 0").Histo1D("x");
h->Draw();

Python:

import ROOT

# This activates implicit multi-threading
ROOT.EnableImplicitMT()

# The analysis below runs in parallel
rdf = ROOT.RDataFrame("mytree", "myfile.root")
h = rdf.Filter("x > 0").Histo1D("x")
h.Draw()

For further information about RDataFrame, see the RDataFrame manual.

In addition to RDataFrame, the methods and classes below also implement IMT in ROOT; a short sketch for one of them follows the list:

  • TTree::GetEntry(): Reads and decompresses multiple branches in parallel.

  • TTree::Fill(): Fills and compresses the branches of a tree in parallel, possibly flushing their content to disk.

  • TH1::Fit(): Performs in parallel the evaluation of the objective function over the data.

  • TMVA::DNN: Trains a deep neural network in parallel.

  • TMVA::BDT: Trains a classifier in parallel and evaluates multi-class BDTs in parallel.
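
For instance, once IMT is enabled, a fit of a large histogram picks up the parallel evaluation of the objective function automatically. Here is a minimal sketch (histogram name, binning and sample size are illustrative):

// Enable IMT first; TH1::Fit then evaluates its objective function in parallel
ROOT::EnableImplicitMT();

TH1D h("h", "Gaussian sample", 200, -5., 5.);
h.FillRandom("gaus", 10000000); // large sample, so the parallel evaluation pays off
h.Fit("gaus");                  // the chi2/likelihood evaluation runs on the IMT pool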

Explicit multi-threading

You can also use ROOT classes in an application that manages multiple threads explicitly. In such a scenario, you need to be aware of the thread safety level of each interface you use. Below are guidelines for each level; a sketch combining them follows further below:

  • Thread unsafe: make sure that any use of thread unsafe objects is serialized, for instance with locks. Thread unsafe objects must not be used concurrently by multiple threads, even if every thread uses its own copy of an object, and even if the threads use objects of different (thread unsafe) types, since they may directly or indirectly share state in a thread unsafe manner.

  • Conditionally thread safe: multiple threads can use conditionally safe objects concurrently as long as they do not share the same objects (i.e. every thread uses its own local instances). In addition, the same conditionally safe object can be shared among threads as long as all threads call only const methods on it. Instances of different conditionally safe types can be freely used concurrently in different threads.

  • Thread safe: you can freely use thread safe objects concurrently in multiple threads.

In multi-threaded applications, you should call ROOT::EnableThreadSafety(). Otherwise, all ROOT objects must be considered thread unsafe.

With ROOT::EnableThreadSafety(), types whose names start with an R (e.g. RDataFrame) are generally conditionally thread safe. Most of the core, math and I/O-related classes (e.g. TTree, TDirectory, TFile, TH*, TMinuit) are conditionally thread safe. Most of the general infrastructure classes (e.g. TROOT, TClass) are thread safe.
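
The standalone sketch below combines these guidelines, assuming a program compiled against ROOT (file names, histogram binning and the thread count are illustrative): each thread works on its own TFile and TH1D instance, while a std::mutex serializes the only piece of shared mutable state.

#include "TFile.h"
#include "TH1D.h"
#include "TROOT.h"
#include "TRandom3.h"
#include <mutex>
#include <string>
#include <thread>
#include <vector>

int main() {
   ROOT::EnableThreadSafety();  // call this before using ROOT from several threads
   TH1::AddDirectory(false);    // keep histograms detached from any TFile/TDirectory

   std::mutex mtx;              // serializes access to the shared total below
   double totalEntries = 0.;

   auto work = [&](int id) {
      // Conditionally thread safe classes (TFile, TH1): one local instance per thread
      TFile f(("worker_" + std::to_string(id) + ".root").c_str(), "RECREATE");
      TH1D h("h", "per-thread histogram", 100, -5., 5.);
      TRandom3 rng(1234 + id);  // per-thread random number generator
      for (int i = 0; i < 100000; ++i)
         h.Fill(rng.Gaus(0., 1.));
      h.Write();                // writes into this thread's own file
      f.Close();

      // Shared mutable state is not covered by any of the guarantees above: lock it
      std::lock_guard<std::mutex> lock(mtx);
      totalEntries += h.GetEntries();
   };

   std::vector<std::thread> workers;
   for (int i = 0; i < 4; ++i)
      workers.emplace_back(work, i);
   for (auto &w : workers)
      w.join();

   // totalEntries now holds the sum of entries filled by all threads
   return 0;
}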

For more detail see the individual class documentation.