RDataFrame.cxx
// Author: Enrico Guiraud, Danilo Piparo CERN 12/2016

/*************************************************************************
 * Copyright (C) 1995-2018, Rene Brun and Fons Rademakers.               *
 * All rights reserved.                                                  *
 *                                                                       *
 * For the licensing terms see $ROOTSYS/LICENSE.                         *
 * For the list of contributors see $ROOTSYS/README/CREDITS.             *
 *************************************************************************/

#include "ROOT/RDataFrame.hxx"
#include "ROOT/RDataSource.hxx"
#include "ROOT/RTTreeDS.hxx"
#include "ROOT/RDF/Utils.hxx"
#include <string_view>
#include "TChain.h"
#include "TDirectory.h"
#include "RtypesCore.h" // for ULong64_t
#include "TTree.h"

#include <fstream> // std::ifstream
#include <nlohmann/json.hpp> // nlohmann::json::parse
#include <memory> // for make_shared, allocator, shared_ptr
#include <ostream> // ostringstream
#include <stdexcept>
#include <string>
#include <vector>

// clang-format off
/**
* \class ROOT::RDataFrame
* \ingroup dataframe
* \brief ROOT's RDataFrame offers a modern, high-level interface for analysis of data stored in TTree, CSV and other data formats, in C++ or Python.

In addition, multi-threading and other low-level optimisations allow users to exploit all the resources available
on their machines completely transparently.<br>
Skip to the [class reference](#reference) or keep reading for the user guide.

In a nutshell:
~~~{.cpp}
ROOT::EnableImplicitMT(); // Tell ROOT you want to go parallel
ROOT::RDataFrame d("myTree", "file_*.root"); // Interface to TTree and TChain
auto myHisto = d.Histo1D("Branch_A"); // This books the (lazy) filling of a histogram
myHisto->Draw(); // Event loop is run here, upon first access to a result
~~~

Calculations are expressed in terms of a type-safe *functional chain of actions and transformations*; RDataFrame takes
care of their execution. The implementation automatically puts in place several low-level optimisations such as
multi-thread parallelization and caching.

\htmlonly
<a href="https://doi.org/10.5281/zenodo.260230"><img src="https://zenodo.org/badge/DOI/10.5281/zenodo.260230.svg"
alt="DOI"></a>
\endhtmlonly

## For the impatient user
You can directly see RDataFrame in action in our [tutorials](https://root.cern/doc/master/group__tutorial__dataframe.html), in C++ or Python.

## Table of Contents
- [Cheat sheet](\ref cheatsheet)
- [Introduction](\ref rdf_intro)
- [Crash course](\ref crash-course)
- [Working with collections](\ref working_with_collections)
- [Transformations: manipulating data](\ref transformations)
- [Actions: getting results](\ref actions)
- [Distributed execution in Python](\ref rdf_distrdf)
- [Performance tips and parallel execution](\ref parallel-execution)
- [More features](\ref more-features)
  - [Systematic variations](\ref systematics)
  - [RDataFrame objects as function arguments and return values](\ref rnode)
  - [Storing RDataFrame objects in collections](\ref RDFCollections)
  - [Executing callbacks every N events](\ref callbacks)
  - [Default column lists](\ref default-branches)
  - [Special helper columns: `rdfentry_` and `rdfslot_`](\ref helper-cols)
  - [Just-in-time compilation: column type inference and explicit declaration of column types](\ref jitting)
  - [User-defined custom actions](\ref generic-actions)
  - [Dataset joins with friend trees](\ref friends)
  - [Reading data formats other than ROOT trees](\ref other-file-formats)
  - [Computation graphs (storing and reusing sets of transformations)](\ref callgraphs)
  - [Visualizing the computation graph](\ref representgraph)
  - [Activating RDataFrame execution logs](\ref rdf-logging)
  - [Creating an RDataFrame from a dataset specification file](\ref rdf-from-spec)
  - [Adding a progress bar](\ref progressbar)
  - [Working with missing values in the dataset](\ref missing-values)
  - [Dealing with NaN or Inf values in the dataset](\ref special-values)
  - [Translating TTree::Draw to RDataFrame](\ref rosetta-stone)
- [Python interface](classROOT_1_1RDataFrame.html#python)
- <a class="el" href="classROOT_1_1RDataFrame.html#reference" onclick="javascript:toggleInherit('pub_methods_classROOT_1_1RDF_1_1RInterface')">Class reference</a>

\anchor cheatsheet
## Cheat sheet
These are the operations that can be performed with RDataFrame.

### Transformations
Transformations are a way to manipulate the data.

| **Transformation** | **Description** |
|------------------|--------------------|
| Alias() | Introduce an alias for a particular column name. |
| DefaultValueFor() | If the value of the input column is missing, provide a default value instead. |
| Define() | Create a new column in the dataset. Example usages include adding a column that contains the invariant mass of a particle, or a selection of elements of an array (e.g. only the `pt`s of "good" muons). |
| DefinePerSample() | Define a new column that is updated when the input sample changes, e.g. when switching the tree being processed in a chain. |
| DefineSlot() | Same as Define(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, in the range [0, nThreads-1], for each thread of execution. This is meant as a helper in writing thread-safe Define() transformations when using RDataFrame after ROOT::EnableImplicitMT(). DefineSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
| DefineSlotEntry() | Same as DefineSlot(), but the entry number is passed in addition to the slot number. This is meant as a helper in case the expression depends on the entry number. For details about entry numbers in multi-threaded runs, see [here](\ref helper-cols). |
| Filter() | Filter rows based on user-defined conditions. |
| FilterAvailable() | Specialized Filter. If the value of the input column is available, keep the entry, otherwise discard it. |
| FilterMissing() | Specialized Filter. If the value of the input column is missing, keep the entry, otherwise discard it. |
| Range() | Filter rows based on entry number (single-thread only). |
| Redefine() | Overwrite the value and/or type of an existing column. See Define() for more information. |
| RedefineSlot() | Overwrite the value and/or type of an existing column. See DefineSlot() for more information. |
| RedefineSlotEntry() | Overwrite the value and/or type of an existing column. See DefineSlotEntry() for more information. |
| Vary() | Register systematic variations for an existing column. Varied results are then extracted via VariationsFor(). |


### Actions
Actions aggregate data into a result. Each one is described in more detail in the reference guide.

In the following, whenever we say an action "returns" something, we always mean it returns a smart pointer to it. Actions only act on events that pass all preceding filters.

Lazy actions only trigger the event loop when one of the results is accessed for the first time, making it easy to
produce many different results in one event loop. Instant actions trigger the event loop instantly.


| **Lazy action** | **Description** |
|------------------|-----------------|
| Aggregate() | Execute a user-defined accumulation operation on the processed column values. |
| Book() | Book execution of a custom action using a user-defined helper object. |
| Cache() | Cache column values in memory. Custom columns can be cached as well; filtered entries are not cached. Users can specify which columns to save (default is all). |
| Count() | Return the number of events processed. Useful e.g. to get a quick count of the number of events passing a Filter. |
| Display() | Provides a printable representation of the dataset contents. The method returns a ROOT::RDF::RDisplay() instance which can print a tabular representation of the data or return it as a string. |
| Fill() | Fill a user-defined object with the values of the specified columns, as if by calling `Obj.Fill(col1, col2, ...)`. |
| Graph() | Fills a TGraph with the two columns provided. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
| GraphAsymmErrors() | Fills a TGraphAsymmErrors. Should be used for any type of graph with errors, including cases with errors on one of the axes only. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
| Histo1D(), Histo2D(), Histo3D() | Fill a one-, two-, three-dimensional histogram with the processed column values. |
| HistoND() | Fill an N-dimensional histogram with the processed column values. |
| HistoNSparseD() | Fill an N-dimensional sparse histogram with the processed column values. Memory is allocated only for non-empty bins. |
| Max() | Return the maximum of processed column values. If the type of the column is inferred, the return type is `double`; otherwise it is the type of the column. |
| Mean() | Return the mean of processed column values. |
| Min() | Return the minimum of processed column values. If the type of the column is inferred, the return type is `double`; otherwise it is the type of the column. |
| Profile1D(), Profile2D() | Fill a one- or two-dimensional profile with the column values that passed all filters. |
| Reduce() | Reduce (e.g. sum, merge) entries using the function (lambda, functor...) passed as argument. The function must have signature `T(T,T)` where `T` is the type of the column. Return the final result of the reduction operation. An optional parameter allows initialization of the result object to non-default values. |
| Report() | Obtain statistics on how many entries have been accepted and rejected by the filters. See the section on [named filters](#named-filters-and-cutflow-reports) for a more detailed explanation. The method returns a ROOT::RDF::RCutFlowReport instance which can be queried programmatically to get information about the effects of the individual cuts. |
| Stats() | Return a TStatistic object filled with the input columns. |
| StdDev() | Return the unbiased standard deviation of the processed column values. |
| Sum() | Return the sum of the values in the column. If the type of the column is inferred, the return type is `double`; otherwise it is the type of the column. |
| Take() | Extract a column from the dataset as a collection of values, e.g. a `std::vector<float>` for a column of type `float`. |

| **Instant action** | **Description** |
|---------------------|-----------------|
| Foreach() | Execute a user-defined function on each entry. Users are responsible for the thread-safety of this callable when executing with implicit multi-threading enabled. |
| ForeachSlot() | Same as Foreach(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, `0` to `nThreads - 1`, for each thread of execution. This is meant as a helper in writing thread-safe Foreach() actions when using RDataFrame after ROOT::EnableImplicitMT(). ForeachSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
| Snapshot() | Write the processed dataset to disk, in a new TTree or RNTuple and TFile. Custom columns can be saved as well; filtered entries are not saved. Users can specify which columns to save (default is all nominal columns). Columns resulting from Vary() can be included by setting the corresponding flag in \ref RSnapshotOptions. Snapshot, by default, overwrites the output file if it already exists. Snapshot() can be made *lazy* by setting the appropriate flag in the snapshot options. |

### Queries

These operations do not modify the dataframe or book computations but simply return information on the RDataFrame object.

| **Operation** | **Description** |
|---------------------|-----------------|
| Describe() | Get useful information describing the dataframe, e.g. columns and their types. |
| GetColumnNames() | Get the names of all the available columns of the dataset. |
| GetColumnType() | Return the type of a given column as a string. |
| GetColumnTypeNamesList() | Return the list of type names of columns in the dataset. |
| GetDefinedColumnNames() | Get the names of all the defined columns. |
| GetFilterNames() | Return the names of all filters in the computation graph. |
| GetNRuns() | Return the number of event loops run by this RDataFrame instance so far. |
| GetNSlots() | Return the number of processing slots that RDataFrame will use during the event loop (i.e. the concurrency level). |
| ROOT::RDF::SaveGraph() | Store the computation graph of an RDataFrame in [DOT format (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) for easy inspection. See the [relevant section](\ref representgraph) for details. |

\anchor rdf_intro
## Introduction
Users define their analysis as a sequence of operations to be performed on the dataframe object; the framework
takes care of the management of the loop over entries as well as low-level details such as I/O and parallelization.
RDataFrame provides methods to perform most common operations required by ROOT analyses;
at the same time, users can just as easily specify custom code that will be executed in the event loop.

RDataFrame is built with a *modular* and *flexible* workflow in mind, summarised as follows:

1. Construct a dataframe object by specifying a dataset. RDataFrame supports TTree as well as TChain, [CSV files](https://root.cern/doc/master/df014__CSVDataSource_8C.html), [SQLite files](https://root.cern/doc/master/df027__SQliteDependencyOverVersion_8C.html), [RNTuples](https://root.cern/doc/master/structROOT_1_1Experimental_1_1RNTuple.html), and it can be extended to custom data formats. From Python, [NumPy arrays can be imported into RDataFrame](https://root.cern/doc/master/df032__MakeNumpyDataFrame_8py.html) as well.

2. Transform the dataframe by:

   - [Applying filters](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). This selects only specific rows of the dataset.

   - [Creating custom columns](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). Custom columns can, for example, contain the results of a computation that must be performed for every row of the dataset.

3. [Produce results](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#actions). *Actions* are used to aggregate data into results. Most actions are *lazy*, i.e. they are not executed on the spot, but registered with RDataFrame and executed only when a result is accessed for the first time.

Make sure to book all transformations and actions before you access the contents of any of the results. This lets RDataFrame accumulate work and then produce all results at the same time, upon first access to any of them.

The following table shows how analyses based on TTreeReader and TTree::Draw() translate to RDataFrame. Follow the
[crash course](#crash-course) to discover more idiomatic and flexible ways to express analyses with RDataFrame.
<table>
<tr>
   <td>
      <b>TTreeReader</b>
   </td>
   <td>
      <b>ROOT::RDataFrame</b>
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
TTreeReader reader("myTree", file);
TTreeReaderValue<A_t> a(reader, "A");
TTreeReaderValue<B_t> b(reader, "B");
TTreeReaderValue<C_t> c(reader, "C");
while(reader.Next()) {
   if(IsGoodEvent(*a, *b, *c))
      DoStuff(*a, *b, *c);
}
~~~
   </td>
   <td>
~~~{.cpp}
ROOT::RDataFrame d("myTree", file, {"A", "B", "C"});
d.Filter(IsGoodEvent).Foreach(DoStuff);
~~~
   </td>
</tr>
<tr>
   <td>
      <b>TTree::Draw</b>
   </td>
   <td>
      <b>ROOT::RDataFrame</b>
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
auto *tree = file->Get<TTree>("myTree");
tree->Draw("x", "y > 2");
~~~
   </td>
   <td>
~~~{.cpp}
ROOT::RDataFrame df("myTree", file);
auto h = df.Filter("y > 2").Histo1D("x");
h->Draw();
~~~
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
tree->Draw("jet_eta", "weight*(event == 1)");
~~~
   </td>
   <td>
~~~{.cpp}
df.Filter("event == 1").Histo1D("jet_eta", "weight");
// or the fully compiled version:
df.Filter([] (ULong64_t e) { return e == 1; }, {"event"}).Histo1D<RVec<float>>("jet_eta", "weight");
~~~
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
// object selection: for each event, fill histogram with array of selected pts
tree->Draw("Muon_pt", "Muon_pt > 100");
~~~
   </td>
   <td>
~~~{.cpp}
// with RDF, arrays are read as ROOT::VecOps::RVec objects
df.Define("good_pt", "Muon_pt[Muon_pt > 100]").Histo1D("good_pt");
~~~
   </td>
</tr>
</table>

\anchor crash-course
## Crash course
All snippets of code presented in the crash course can be executed in the ROOT interpreter. Simply precede them with
~~~{.cpp}
using namespace ROOT; // RDataFrame's namespace
~~~
which is omitted for brevity. The terms "column" and "branch" are used interchangeably.

### Creating an RDataFrame
RDataFrame's constructor is where the user specifies the dataset and, optionally, a default set of columns that
operations should work with. Here are the most common methods to construct an RDataFrame object:
~~~{.cpp}
// single file -- all constructors are equivalent
TFile *f = TFile::Open("file.root");
TTree *t = f->Get<TTree>("treeName");

ROOT::RDataFrame d1("treeName", "file.root");
ROOT::RDataFrame d2("treeName", f); // same as TTreeReader
ROOT::RDataFrame d3(*t);

// multiple files -- all constructors are equivalent
TChain chain("myTree");
chain.Add("file1.root");
chain.Add("file2.root");

ROOT::RDataFrame d4("myTree", {"file1.root", "file2.root"});
std::vector<std::string> files = {"file1.root", "file2.root"};
ROOT::RDataFrame d5("myTree", files);
ROOT::RDataFrame d6("myTree", "file*.root"); // the glob is passed as-is to TChain's constructor
ROOT::RDataFrame d7(chain);
~~~
Additionally, users can construct an RDataFrame with no data source by passing an integer number. This is the number of rows that
will be generated by this RDataFrame.
~~~{.cpp}
ROOT::RDataFrame d(10); // an RDF with 10 entries (and no columns/branches, for now)
d.Foreach([] { static int i = 0; std::cout << i++ << std::endl; }); // silly example usage: count to ten
~~~
This is useful to generate simple datasets on the fly: the contents of each event can be specified with Define() (explained below). For example, we have used this method to generate [Pythia](https://pythia.org/) events and write them to disk in parallel (with the Snapshot action).

For data sources other than TTrees and TChains, RDataFrame objects are constructed using ad-hoc factory functions (see e.g. FromCSV(), FromSqlite(), FromArrow()):

~~~{.cpp}
auto df = ROOT::RDF::FromCSV("input.csv");
// use df as usual
~~~

### Filling a histogram
Let's now tackle a very common task, filling a histogram:
~~~{.cpp}
// Fill a TH1D with the "MET" branch
ROOT::RDataFrame d("myTree", "file.root");
auto h = d.Histo1D("MET");
h->Draw();
~~~
The first line creates an RDataFrame associated to the TTree "myTree". This tree has a branch named "MET".

Histo1D() is an *action*; it returns a smart pointer (a ROOT::RDF::RResultPtr, to be precise) to a TH1D histogram filled
with the `MET` of all events. If the quantity stored in the column is a collection (e.g. a vector or an array), the
histogram is filled with all vector elements for each event.

You can use the objects returned by actions as if they were pointers to the desired results. There are many other
possible [actions](\ref cheatsheet), and all their results are wrapped in smart pointers; we'll see why in a minute.
### Applying a filter
Let's say we want to cut over the value of branch "MET" and count how many events pass this cut. This is one way to do it:
~~~{.cpp}
ROOT::RDataFrame d("myTree", "file.root");
auto c = d.Filter("MET > 4.").Count(); // computations booked, not run
std::cout << *c << std::endl; // computations run here, upon first access to the result
~~~
The filter string (which must contain a valid C++ expression) is applied to the specified columns for each event;
the names and types of the columns are inferred automatically. The string expression is required to return a `bool`
which signals whether the event passes the filter (`true`) or not (`false`).

You can think of your data as "flowing" through the chain of calls, being transformed, filtered and finally used to
perform actions. Multiple Filter() calls can be chained one after another.

Using string filters is nice for simple things, but they are limited to specifying the equivalent of a single return
statement or the body of a lambda, so it's cumbersome to use strings with more complex filters. They also add a small
runtime overhead, as ROOT needs to just-in-time compile the string into C++ code. When more freedom is required or
runtime performance is very important, a C++ callable can be specified instead (a lambda in the following snippet,
but it can be any kind of function or even a functor class), together with a list of column names.
This snippet is analogous to the one above:
~~~{.cpp}
ROOT::RDataFrame d("myTree", "file.root");
auto metCut = [](double x) { return x > 4.; }; // a C++11 lambda function checking "x > 4"
auto c = d.Filter(metCut, {"MET"}).Count();
std::cout << *c << std::endl;
~~~

An example of a more complex filter expressed as a string containing C++ code is shown below

~~~{.cpp}
ROOT::RDataFrame d("myTree", "file.root");
auto df = d.Define("p", "std::array<double, 3> p{px, py, pz}; return p;")
           .Filter("double p2 = 0.0; for (auto&& x : p) p2 += x*x; return sqrt(p2) < 10.0;");
~~~

The code snippet above defines a column `p` that is a fixed-size array built from the component column names and then
filters on its magnitude by looping over its elements. It must be noted that the usage of strings to define columns
like the one above is currently the only possibility when using PyROOT. When writing expressions as such, only constants
and data coming from other columns in the dataset can be involved in the code passed as a string. Local variables and
functions cannot be used, since the interpreter will not know how to find them. When capturing local state is necessary,
it must first be declared to the ROOT C++ interpreter.

More information on filters and how to use them to automatically generate cutflow reports can be found [below](#Filters).

### Defining custom columns
Let's now consider the case in which "myTree" contains two quantities "x" and "y", but our analysis relies on a derived
quantity `z = sqrt(x*x + y*y)`. Using the Define() transformation, we can create a new column in the dataset containing
the variable "z":
~~~{.cpp}
ROOT::RDataFrame d("myTree", "file.root");
auto sqrtSum = [](double x, double y) { return sqrt(x*x + y*y); };
auto zMean = d.Define("z", sqrtSum, {"x","y"}).Mean("z");
std::cout << *zMean << std::endl;
~~~
Define() creates the variable "z" by applying `sqrtSum` to "x" and "y". Later in the chain of calls we refer to
variables created with Define() as if they were actual tree branches/columns, but they are evaluated on demand, at most
once per event. As with filters, Define() calls can be chained with other transformations to create multiple custom
columns. Define() and Filter() transformations can be concatenated and intermixed at will.

As with filters, it is possible to specify new columns as string expressions. This snippet is analogous to the one above:
~~~{.cpp}
ROOT::RDataFrame d("myTree", "file.root");
auto zMean = d.Define("z", "sqrt(x*x + y*y)").Mean("z");
std::cout << *zMean << std::endl;
~~~

Again the names of the columns used in the expression and their types are inferred automatically. The string must be
valid C++ and it is just-in-time compiled. The process has a small runtime overhead and, like with filters, it is currently the only possible approach when using PyROOT.

Previously, when showing the different ways an RDataFrame can be created, we showed a constructor that takes a
number of entries as a parameter. In the following example we show how to combine such an "empty" RDataFrame with Define()
transformations to create a dataset on the fly. We then save the generated data on disk using the Snapshot() action.
~~~{.cpp}
ROOT::RDataFrame d(100); // an RDF that will generate 100 entries (currently empty)
int x = -1;
auto d_with_columns = d.Define("x", [&x]() -> int { return ++x; })
                       .Define("xx", [&x]() -> int { return x*x; });
d_with_columns.Snapshot("myNewTree", "newfile.root");
~~~
This example is slightly more advanced than what we have seen so far. First, it makes use of lambda captures (a
simple way to make external variables available inside the body of C++ lambdas) to act on the same variable `x` from
both Define() transformations. Second, we have *stored* the transformed dataframe in a variable. This is always
possible, since at each point of the transformation chain users can store the status of the dataframe for further use (more
on this [below](#callgraphs)).

You can read more about defining new columns [here](#custom-columns).

\image html RDF_Graph.png "A graph composed of two branches, one starting with a filter and one with a define. The end point of a branch is always an action."

### Running on a range of entries
It is sometimes necessary to limit the processing of the dataset to a range of entries. For this reason, the RDataFrame
offers the concept of ranges as a node of the RDataFrame chain of transformations; this means that filters, columns and
actions can be concatenated to and intermixed with Range()s. If a range is specified after a filter, the range will act
exclusively on the entries passing the filter -- it will not even count the other entries! The same goes for a Range()
hanging from another Range(). Here are some commented examples:
~~~{.cpp}
ROOT::RDataFrame d("myTree", "file.root");
// Here we store a dataframe that loops over only the first 30 entries in a variable
auto d30 = d.Range(30);
// This is how you pick all entries from 15 onwards
auto d15on = d.Range(15, 0);
// We can specify a stride too, in this case we pick an event every 3
auto d15each3 = d.Range(0, 15, 3);
~~~
Note that ranges are not available when multi-threading is enabled. More information on ranges is available
[here](#ranges).

### Executing multiple actions in the same event loop
As a final example let us apply two different cuts on branch "MET" and fill two different histograms with the "pt_v" of
the filtered events.
By now, you should be able to easily understand what is happening:
~~~{.cpp}
RDataFrame d("treeName", "file.root");
auto h1 = d.Filter("MET > 10").Histo1D("pt_v");
auto h2 = d.Histo1D("pt_v");
h1->Draw(); // event loop is run once here
h2->Draw("SAME"); // no need to run the event loop again
~~~
RDataFrame executes all above actions by **running the event-loop only once**. The trick is that actions are not
executed at the moment they are called, but they are **lazy**, i.e. delayed until the moment one of their results is
accessed through the smart pointer. At that time, the event loop is triggered and *all* results are produced
simultaneously.

### Properly exploiting RDataFrame laziness

For yet another example of the difference between the correct and incorrect running of the event-loop, see the following
two code snippets. We assume our ROOT file has branches a, b and c.

The correct way - the dataset is only processed once.
~~~{.py}
df_correct = ROOT.RDataFrame(treename, filename)

h_a = df_correct.Histo1D("a")
h_b = df_correct.Histo1D("b")
h_c = df_correct.Histo1D("c")

h_a_val = h_a.GetValue()
h_b_val = h_b.GetValue()
h_c_val = h_c.GetValue()

print(f"How many times was the data set processed? {df_correct.GetNRuns()} time.") # The answer will be 1 time.
~~~

An incorrect way - the dataset is processed three times.
~~~{.py}
df_incorrect = ROOT.RDataFrame(treename, filename)

h_a = df_incorrect.Histo1D("a")
h_a_val = h_a.GetValue()

h_b = df_incorrect.Histo1D("b")
h_b_val = h_b.GetValue()

h_c = df_incorrect.Histo1D("c")
h_c_val = h_c.GetValue()

print(f"How many times was the data set processed? {df_incorrect.GetNRuns()} times.") # The answer will be 3 times.
~~~

It is therefore good practice to declare all your transformations and actions *before* accessing their results, allowing
RDataFrame to run the loop once and produce all results in one go.

### Going parallel
Let's say we would like to run the previous examples in parallel on several cores, dividing events fairly between cores.
The only modification required to the snippets would be the addition of this line *before* constructing the main
dataframe object:
~~~{.cpp}
ROOT::EnableImplicitMT();
~~~
Simple as that. More details are given [below](#parallel-execution).

\anchor working_with_collections
## Working with collections and object selections

RDataFrame reads collections as the special type [ROOT::RVec](https://root.cern/doc/master/classROOT_1_1VecOps_1_1RVec.html): for example, a column containing an array of floating point numbers can be read as a ROOT::RVecF. C-style arrays (with variable or static size), STL vectors and most other collection types can be read this way.

RVec is a container similar to std::vector (and can be used just like a std::vector) but it also offers a rich interface to operate on the array elements in a vectorised fashion, similarly to Python's NumPy arrays.

For example, to fill a histogram with the "pt" of selected particles for each event, Define() can be used to create a column that contains the desired array elements as follows:

~~~{.cpp}
// h is filled with all the elements of `good_pts`, for each event
auto h = df.Define("good_pts", [](const ROOT::RVecF &pt) { return pt[pt > 0]; }, {"pt"})
           .Histo1D("good_pts");
~~~

And in Python:

~~~{.py}
h = df.Define("good_pts", "pt[pt > 0]").Histo1D("good_pts")
~~~

Learn more at ROOT::VecOps::RVec.

ROOT provides convenience utility functions to work with RVec, which are available in the [ROOT::VecOps namespace](\ref vecops).

539\anchor transformations
540## Transformations: manipulating data
541\anchor Filters
542### Filters
543A filter is created through a call to `Filter(f, columnList)` or `Filter(filterString)`. In the first overload, `f` can
544be a function, a lambda expression, a functor class, or any other callable object. It must return a `bool` signalling
545whether the event has passed the selection (`true`) or not (`false`). It should perform "read-only" operations on the
546columns, and should not have side-effects (e.g. modification of an external or static variable) to ensure correctness
547when implicit multi-threading is active. The second overload takes a string with a valid C++ expression in which column
548names are used as variable names (e.g. `Filter("x[0] + x[1] > 0")`). This is a convenience feature that comes with a
549certain runtime overhead: C++ code has to be generated on the fly from this expression before using it in the event
550loop. See the paragraph about "Just-in-time compilation" below for more information.
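
As a minimal sketch (assuming the dataset has a float column "x"), the two overloads look like:

~~~{.cpp}
// compiled callable: checked at compile time, no code generation at runtime
auto f1 = df.Filter([](float x) { return x > 0.f; }, {"x"});
// string expression: convenient, just-in-time compiled
auto f2 = df.Filter("x > 0");
~~~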
551
552RDataFrame only evaluates filters when necessary: if multiple filters are chained one after another, they are executed
553in order and the first one returning `false` causes the event to be discarded and triggers the processing of the next
554entry. If multiple actions or transformations depend on the same filter, that filter is not executed multiple times for
555each entry: after the first access it simply serves a cached result.
556
557\anchor named-filters-and-cutflow-reports
558#### Named filters and cutflow reports
559An optional string parameter `name` can be passed to the Filter() method to create a **named filter**. Named filters
560work as usual, but also keep track of how many entries they accept and reject.
561
562Statistics are retrieved through a call to the Report() method:
563
564- when Report() is called on the main RDataFrame object, it returns a ROOT::RDF::RResultPtr<RCutFlowReport> relative to all
565named filters declared up to that point
- when called on a specific node (e.g. the result of a Define() or Filter()), it returns a ROOT::RDF::RResultPtr<RCutFlowReport>
relative to all named filters in the section of the chain between the main RDataFrame and that node (included).
568
569Stats are stored in the same order as named filters have been added to the graph, and *refer to the latest event-loop*
570that has been run using the relevant RDataFrame.
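
For example (the columns "pt" and "eta" below are illustrative):

~~~{.cpp}
auto df2 = df.Filter("pt > 20", "pt cut")
             .Filter("std::abs(eta) < 2.4", "eta cut");
auto report = df2.Report();
// prints the number of entries accepted and rejected by each named filter
report->Print();
~~~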
571
572\anchor ranges
573### Ranges
574When RDataFrame is not being used in a multi-thread environment (i.e. no call to EnableImplicitMT() was made),
575Range() transformations are available. These act very much like filters but instead of basing their decision on
576a filter expression, they rely on `begin`,`end` and `stride` parameters.
577
578- `begin`: initial entry number considered for this range.
579- `end`: final entry number (excluded) considered for this range. 0 means that the range goes until the end of the dataset.
580- `stride`: process one entry of the [begin, end) range every `stride` entries. Must be strictly greater than 0.
581
The actual number of entries processed downstream of a Range() node will be `(end - begin)/stride` (or fewer if fewer
entries than that are available).
584
585Note that ranges act "locally", not based on the global entry count: `Range(10,50)` means "skip the first 10 entries
586*that reach this node*, let the next 40 entries pass, then stop processing". If a range node hangs from a filter node,
587and the range has a `begin` parameter of 10, that means the range will skip the first 10 entries *that pass the
588preceding filter*.
589
Ranges allow "early quitting": once all branches of execution of a functional graph have reached their `end` value of
processed entries, the event loop is immediately interrupted. This is useful for debugging and quick data explorations.
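
A minimal sketch (assuming a column "x"):

~~~{.cpp}
// process 40 entries starting from entry 10, one every 2
auto h1 = df.Range(10, 50, 2).Histo1D("x");
// process only the first 100 entries that pass the preceding filter
auto h2 = df.Filter("x > 0").Range(100).Histo1D("x");
~~~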
592
593\anchor custom-columns
594### Custom columns
595Custom columns are created by invoking `Define(name, f, columnList)`. As usual, `f` can be any callable object
596(function, lambda expression, functor class...); it takes the values of the columns listed in `columnList` (a list of
597strings) as parameters, in the same order as they are listed in `columnList`. `f` must return the value that will be
598assigned to the temporary column.
599
A new column called `name` is created, and is accessible from subsequent transformations/actions as if it were
contained in the dataset.
602
603Use cases include:
604- caching the results of complex calculations for easy and efficient multiple access
605- extraction of quantities of interest from complex objects
606- branch aliasing, i.e. changing the name of a branch
607
608An exception is thrown if the `name` of the new column/branch is already in use for another branch in the TTree.
609
610It is also possible to specify the quantity to be stored in the new temporary column as a C++ expression with the method
`Define(name, expression)`. For example, this invocation
612
613~~~{.cpp}
614df.Define("pt", "sqrt(px*px + py*py)");
615~~~
616
will create a new column called "pt", the value of which is calculated starting from the columns px and py. The system
618builds a just-in-time compiled function starting from the expression after having deduced the list of necessary branches
619from the names of the variables specified by the user.
620
621#### Custom columns as function of slot and entry number
622
Custom columns can also be created as a function of the processing slot and entry number. The methods that can
be invoked are:
625- `DefineSlot(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, T1, T2, ...)`: the
626first parameter is the slot number which ranges from 0 to ROOT::GetThreadPoolSize() - 1.
627- `DefineSlotEntry(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, ULong64_t,
628T1, T2, ...)`: the first parameter is the slot number while the second one the number of the entry being processed.
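
As a sketch, a per-slot random number generator avoids sharing state between threads (the column "pt" and the smearing applied here are purely illustrative):

~~~{.cpp}
ROOT::RDataFrame df(100);
// one generator per processing slot: no data races in multi-thread runs
std::vector<TRandom3> rngs(df.GetNSlots());
auto smeared = df.Define("pt", [] { return 30.f; })
                 .DefineSlot("smeared_pt",
                             [&rngs](unsigned int slot, float pt) { return pt * rngs[slot].Gaus(1., 0.1); },
                             {"pt"});
~~~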
629
630\anchor actions
631## Actions: getting results
632### Instant and lazy actions
633Actions can be **instant** or **lazy**. Instant actions are executed as soon as they are called, while lazy actions are
executed whenever the object they return is accessed for the first time. As a rule of thumb, actions with a return value
are lazy; the others are instant.
636
637### Return type of a lazy action
638
639When a lazy action is called, it returns a \link ROOT::RDF::RResultPtr ROOT::RDF::RResultPtr<T>\endlink, where T is the
640type of the result of the action. The final result will be stored in the `RResultPtr`, and can be retrieved by
641dereferencing it or via its `GetValue` method. Retrieving the result also starts the event loop if the result
642hasn't been produced yet.
643
The RResultPtr shares ownership of the result object. To access the result directly, use:
645~~~{.cpp}
646ROOT::RDF::RResultPtr<TH1D> histo = rdf.Histo1D(...);
647histo->Draw(); // Starts running the event loop
648~~~
649
650To return results from functions, a copy of the underlying shared_ptr can be obtained:
651~~~{.cpp}
std::shared_ptr<TH1D> ProduceResult(const char *columnname) {
   ROOT::RDF::RResultPtr<TH1D> histo = rdf.Histo1D(columnname);
   return histo.GetSharedPtr(); // Runs the event loop
}
656~~~
If the result were returned by reference or bare pointer, it would be destroyed
when the function exits.
659
660To share ownership but not produce the result ("keep it lazy"), copy the RResultPtr:
661~~~{.cpp}
std::vector<ROOT::RDF::RResultPtr<TH1D>> allHistograms;
ROOT::RDF::RResultPtr<TH1D> BookHistogram(const char *columnname) {
   ROOT::RDF::RResultPtr<TH1D> histo = rdf.Histo1D(columnname);
   allHistograms.push_back(histo); // Will not produce the result yet
   return histo;
}
668~~~
669
670
671### Actions that return collections
672
673If the type of the return value of an action is a collection, e.g. `std::vector<int>`, you can iterate its elements
674directly through the wrapping `RResultPtr`:
675
676~~~{.cpp}
677ROOT::RDataFrame df{5};
678auto df1 = df.Define("x", []{ return 42; });
for (const auto &el : df1.Take<int>("x")) {
   std::cout << "Element: " << el << "\n";
}
682~~~
683
684~~~{.py}
685df = ROOT.RDataFrame(5).Define("x", "42")
for el in df.Take[int]("x"):
    print(f"Element: {el}")
688~~~
689
690### Actions and readers
691
An action that needs values for its computations will request them from a reader, e.g. a column created via `Define` or
693available from the input dataset. The action will request values from each column of the list of input columns (either
694inferred or specified by the user), in order. For example:
695
696~~~{.cpp}
697ROOT::RDataFrame df{1};
698auto df1 = df.Define("x", []{ return 11; });
699auto df2 = df1.Define("y", []{ return 22; });
700auto graph = df2.Graph<int, int>("x","y");
701~~~
702
703The `Graph` action is going to request first the value from column "x", then that of column "y". Specifically, the order
704of execution of the operations of nodes in this branch of the computation graph is guaranteed to be top to bottom.
705
706\anchor rdf_distrdf
707## Distributed execution
708
709RDataFrame applications can be executed in parallel through distributed computing frameworks on a set of remote machines
thanks to the Python package `ROOT.RDF.Distributed`. This **Python-only** package makes it possible to scale the
optimized performance RDataFrame can achieve on a single machine to multiple nodes at the same time. It is designed so
712that different backends can be easily plugged in, currently supporting [Apache Spark](http://spark.apache.org/) and
713[Dask](https://dask.org/). Here is a minimal example usage of distributed RDataFrame:
714
715~~~{.py}
716import ROOT
717from distributed import Client
718# It still accepts the same constructor arguments as traditional RDataFrame
719# but needs a client object which allows connecting to one of the supported
720# schedulers (read more info below)
721client = Client(...)
722df = ROOT.RDataFrame("mytree", "myfile.root", executor=client)
723
724# Continue the application with the traditional RDataFrame API
725sum = df.Filter("x > 10").Sum("y")
726h = df.Histo1D(("name", "title", 10, 0, 10), "x")
727
728print(sum.GetValue())
729h.Draw()
730~~~
731
The main goal of this package is to support running any RDataFrame application in a distributed fashion. Nonetheless, not all
parts of the RDataFrame API currently work with this package. The subset that is currently available is:
734- Alias
735- AsNumpy
736- Count
737- DefaultValueFor
738- Define
739- DefinePerSample
740- Filter
741- FilterAvailable
742- FilterMissing
743- Graph
744- Histo[1,2,3]D
745- HistoND, HistoNSparseD
746- Max
747- Mean
748- Min
749- Profile[1,2,3]D
750- Redefine
751- Snapshot
752- Stats
753- StdDev
754- Sum
755- Systematic variations: Vary and [VariationsFor](\ref ROOT::RDF::Experimental::VariationsFor).
756- Parallel submission of distributed graphs: [RunGraphs](\ref ROOT::RDF::RunGraphs).
757- Information about the dataframe: GetColumnNames.
758
with support for more operations coming in the future. The currently supported data sources are TTree, TChain, RNTuple and RDatasetSpec.
760
761### Connecting to a Spark cluster
762
763In order to distribute the RDataFrame workload, you can connect to a Spark cluster you have access to through the
764official [Spark API](https://spark.apache.org/docs/latest/rdd-programming-guide.html#initializing-spark), then hook the
765connection instance to the distributed `RDataFrame` object like so:
766
767~~~{.py}
from pyspark import SparkConf, SparkContext
769import ROOT
770
771# Create a SparkContext object with the right configuration for your Spark cluster
772conf = SparkConf().setAppName(appName).setMaster(master)
773sc = SparkContext(conf=conf)
774
# The distributed RDataFrame constructor accepts the SparkContext object
# as the optional "executor" parameter and will distribute the application
# to the connected cluster
777df = ROOT.RDataFrame("mytree", "myfile.root", executor = sc)
778~~~
779
Note that with the usage above, the case of `executor = None` is not supported. Alternatively, one can explicitly
create a `ROOT.RDF.Distributed.Spark.RDataFrame` object, which creates a default instance of
[SparkContext](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.SparkContext.html)
if none is provided as an argument.
785
786### Connecting to a Dask cluster
787
788Similarly, you can connect to a Dask cluster by creating your own connection object which internally operates with one
789of the cluster schedulers supported by Dask (more information in the
790[Dask distributed docs](http://distributed.dask.org/en/stable/)):
791
792~~~{.py}
793import ROOT
794from dask.distributed import Client
# In a Python script, the Dask client needs to be initialized inside the
# main guard; Jupyter notebooks / interactive Python sessions don't need this
if __name__ == "__main__":
    # With an already setup cluster that exposes a Dask scheduler endpoint
    client = Client("dask_scheduler.domain.com:8786")

    # The Dask RDataFrame constructor accepts the Dask Client object as an optional argument
    df = ROOT.RDataFrame("mytree", "myfile.root", executor=client)
    # Proceed as usual
    df.Define("x", "someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
805~~~
806
Note that with the usage above, the case of `executor = None` is not supported. Alternatively, one can explicitly
create a `ROOT.RDF.Distributed.Dask.RDataFrame` object, which creates a default instance of
[distributed.Client](http://distributed.dask.org/en/stable/api.html#distributed.Client)
if none is provided as an argument. This default client will run multiple processes
on the local machine using all available cores.
813
814### Choosing the number of distributed tasks
815
A distributed RDataFrame has internal logic to decide how many chunks the input dataset will be split into before sending
tasks to the distributed backend. Each task reads and processes one of said chunks. The logic is backend-dependent, but
818generically tries to infer how many cores are available in the cluster through the connection object. The number of
819tasks will be equal to the inferred number of cores. There are cases where the connection object of the chosen backend
820doesn't have information about the actual resources of the cluster. An example of this is when using Dask to connect to
821a batch system. The client object created at the beginning of the application does not automatically know how many cores
822will be available during distributed execution, since the jobs are submitted to the batch system after the creation of
the connection. In such cases, the logic defaults to processing the whole dataset in 2 tasks.
824
The number of tasks submitted for distributed execution can also be set programmatically, by providing the optional
keyword argument `npartitions` when creating the RDataFrame object. This parameter is accepted irrespective of the
827backend used:
828
829~~~{.py}
830import ROOT
831
if __name__ == "__main__":
    # The `npartitions` optional argument tells the RDataFrame how many tasks are desired
    df = ROOT.RDataFrame("mytree", "myfile.root", executor=SupportedExecutor(...), npartitions=NPARTITIONS)
    # Proceed as usual
    df.Define("x", "someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
837~~~
838
839Note that when processing a TTree or TChain dataset, the `npartitions` value should not exceed the number of clusters in
840the dataset. The number of clusters in a TTree can be retrieved by typing `rootls -lt myfile.root` at a command line.
841
842### Distributed FromSpec
843
RDataFrame can also be built from a JSON sample specification file using the FromSpec function. In distributed mode, two arguments need to be provided: the path to the specification
JSON file (same as in the local RDF case) and an additional `executor` argument, which, as for the RDataFrame constructors above, can be either a Spark connection or a Dask client.
If no second argument is given, the local version of FromSpec is run. Below is an example of FromSpec usage in distributed RDF using either the Spark or the Dask backend.
For more information on the FromSpec functionality itself, please refer to the [FromSpec](\ref rdf-from-spec) documentation. Note that adding metadata and friend information is supported,
but a global range will not be respected in distributed execution.
849
Using Spark:
851~~~{.py}
from pyspark import SparkConf, SparkContext
import ROOT
854
855conf = SparkConf().setAppName(appName).setMaster(master)
856sc = SparkContext(conf=conf)
857
# The FromSpec function accepts the SparkContext object via the optional
# "executor" parameter and will distribute the application to the connected cluster
860df_fromspec = ROOT.RDF.Experimental.FromSpec("myspec.json", executor = sc)
861# Proceed as usual
862df_fromspec.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
863~~~
864
Using Dask:
866~~~{.py}
867import ROOT
868from dask.distributed import Client
869
if __name__ == "__main__":
    client = Client("dask_scheduler.domain.com:8786")

    # The FromSpec function accepts the Dask Client object as an optional argument
    df_fromspec = ROOT.RDF.Experimental.FromSpec("myspec.json", executor=client)
    # Proceed as usual
    df_fromspec.Define("x", "someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
877~~~
878
879### Distributed Snapshot
880
The Snapshot operation behaves slightly differently when executed in distributed mode. First off, it requires the path
882supplied to the Snapshot call to be accessible from any worker of the cluster and from the client machine (in general
883it should be provided as an absolute path). Another important difference is that `n` separate files will be produced,
884where `n` is the number of dataset partitions. As with local RDataFrame, the result of a Snapshot on a distributed
885RDataFrame is another distributed RDataFrame on which we can define a new computation graph and run more distributed
886computations.
887
888### Distributed RunGraphs
889
890Submitting multiple distributed RDataFrame executions is supported through the RunGraphs function. Similarly to its
891local counterpart, the function expects an iterable of objects representing an RDataFrame action. Each action will be
892triggered concurrently to send multiple computation graphs to a distributed cluster at the same time:
893
894~~~{.py}
895import ROOT
896
# Create four different dataframes and book a histogram on each one
histoproxies = [
    ROOT.RDataFrame(100, executor=SupportedExecutor(...))
        .Define("x", "rdfentry_")
        .Histo1D(("name", "title", 10, 0, 100), "x")
    for _ in range(4)
]
904
# Execute the four computation graphs
906ROOT.RDF.RunGraphs(histoproxies)
907# Retrieve all the histograms in one go
908histos = [histoproxy.GetValue() for histoproxy in histoproxies]
909~~~
910
Every distributed backend supports this feature, and graphs belonging to different backends can still be triggered with
a single call to RunGraphs (e.g. it is possible to send a Spark job and a Dask job at the same time).
913
914### Histogram models in distributed mode
915
916When calling a Histo*D operation in distributed mode, remember to pass to the function the model of the histogram to be
917computed, e.g. the axis range and the number of bins:
918
919~~~{.py}
920import ROOT
921
if __name__ == "__main__":
    df = ROOT.RDataFrame("mytree", "myfile.root", executor=SupportedExecutor(...)).Define("x", "someoperation")
    # The model can be passed either as a tuple with the arguments in the correct order
    df.Histo1D(("name", "title", 10, 0, 10), "x")
    # Or by creating the specific struct
    model = ROOT.RDF.TH1DModel("name", "title", 10, 0, 10)
    df.Histo1D(model, "x")
929~~~
930
Without this, partial histograms resulting from different distributed tasks would have incompatible binning, thus leading
to errors when merging them. Failing to pass a histogram model raises an error on the client side, before the
distributed execution starts.
934
935### Live visualization in distributed mode with dask
936
937The live visualization feature allows real-time data representation of plots generated during the execution
938of a distributed RDataFrame application.
939It enables visualizing intermediate results as they are computed across multiple nodes of a Dask cluster
940by creating a canvas and continuously updating it as partial results become available.
941
942The LiveVisualize() function can be imported from the Python package **ROOT.RDF.Distributed**:
943
944~~~{.py}
945import ROOT
946
947LiveVisualize = ROOT.RDF.Distributed.LiveVisualize
948~~~
949
The function takes drawable objects (e.g. histograms) and optional callback functions as arguments. It accepts four different input formats:
951
952- Passing a list or tuple of drawables:
953You can pass a list or tuple containing the plots you want to visualize. For example:
954
955~~~{.py}
956LiveVisualize([h_gaus, h_exp, h_random])
957~~~
958
959- Passing a list or tuple of drawables with a global callback function:
960You can also include a global callback function that will be applied to all plots. For example:
961
962~~~{.py}
def set_fill_color(hist):
    hist.SetFillColor(ROOT.kBlue)
965
966LiveVisualize([h_gaus, h_exp, h_random], set_fill_color)
967~~~
968
969- Passing a Dictionary of drawables and callback functions:
970For more control, you can create a dictionary where keys are plots and values are corresponding (optional) callback functions. For example:
971
972~~~{.py}
plot_callback_dict = {
    graph: set_marker,
    h_exp: fit_exp,
    tprofile_2d: None
}
978
979LiveVisualize(plot_callback_dict)
980~~~
981
982- Passing a Dictionary of drawables and callback functions with a global callback function:
983You can also combine a dictionary of plots and callbacks with a global callback function:
984
985~~~{.py}
986LiveVisualize(plot_callback_dict, write_to_tfile)
987~~~
988
989\note The allowed operations to pass to LiveVisualize are:
990 - Histo1D(), Histo2D(), Histo3D()
991 - Graph()
992 - Profile1D(), Profile2D()
993
994\warning The Live Visualization feature is only supported for the Dask backend.
995
996### Injecting C++ code and using external files into distributed RDF script
997
Distributed RDF provides an interface for users who want to inject C++ code (via header files, shared libraries or direct declarations)
into their distributed RDF application, or whose application needs information from external files that should be distributed
to the workers (for example, a JSON or a txt file with necessary parameter information).
1001
The examples below show the usage of these interface functions: first, how this is done in a local Python
RDF application and second, how it is done in distributed mode.
1004
#### Include and distribute header files
1006
1007~~~{.py}
1008# Local RDataFrame script
ROOT.gInterpreter.Declare('#include "myheader.hxx"')
1010df.Define(...)
1011
1012# Distributed RDF script
1013ROOT.RDF.Distributed.DistributeHeaders("myheader.hxx")
1014df.Define(...)
1015~~~
1016
1017#### Load and distribute shared libraries
1018
1019~~~{.py}
1020# Local RDataFrame script
1021ROOT.gSystem.Load("my_library.so")
1022df.Define(...)
1023
1024# Distributed RDF script
1025ROOT.RDF.Distributed.DistributeSharedLibs("my_library.so")
1026df.Define(...)
1027~~~
1028
1029#### Declare and distribute the cpp code
1030
The declared C++ code is always available to all dataframes in the application.
1032
1033~~~{.py}
1034# Local RDataFrame script
1035ROOT.gInterpreter.Declare("my_code")
1036df.Define(...)
1037
1038# Distributed RDF script
1039ROOT.RDF.Distributed.DistributeCppCode("my_code")
1040df.Define(...)
1041~~~
1042
#### Distribute additional files (other than headers or shared libraries)
1044
1045~~~{.py}
# A local RDataFrame script is not applicable here, as a local RDF application can simply access the external files it needs.
1047
1048# Distributed RDF script
1049ROOT.RDF.Distributed.DistributeFiles("my_file")
1050df.Define(...)
1051~~~
1052
1053\anchor parallel-execution
1054## Performance tips and parallel execution
1055As pointed out before in this document, RDataFrame can transparently perform multi-threaded event loops to speed up
1056the execution of its actions. Users have to call ROOT::EnableImplicitMT() *before* constructing the RDataFrame
1057object to indicate that it should take advantage of a pool of worker threads. **Each worker thread processes a distinct
1058subset of entries**, and their partial results are merged before returning the final values to the user.
1059
By default, RDataFrame will use as many threads as the hardware supports, using up **all** the resources on
a machine. This might be undesirable on shared computing resources such as a batch cluster. In that case, use
1062~~~{.cpp}
ROOT::EnableImplicitMT(numThreads);
1064~~~
1065or export an environment variable:
1066~~~{.sh}
1067export ROOT_MAX_THREADS=numThreads
1068root.exe rdfAnalysis.cxx
1069# or
1070ROOT_MAX_THREADS=4 python rdfAnalysis.py
1071~~~
1072replacing `numThreads` with the number of CPUs/slots that were allocated for this job.
1073
1074\warning There are no guarantees on the order in which threads will process the batches of
1075entries. In particular, note that this means that, for multi-thread event loops, there is no
1076guarantee on the order in which Snapshot() will _write_ entries: they could be scrambled with respect to the input
1077dataset. The values of the special `rdfentry_` column will also not correspond to the entry numbers in the input dataset (e.g. TChain) in multi-threaded
1078runs. Likewise, Take(), AsNumpy(), ... do not preserve the original ordering.
1079
1080### Thread-safety of user-defined expressions
1081RDataFrame operations such as Histo1D() or Snapshot() are guaranteed to work correctly in multi-thread event loops.
User-defined expressions, such as strings or lambdas passed to Filter(), Define(), Foreach(), Reduce() or Aggregate(),
must be thread-safe, i.e. it must be possible to call them concurrently from different threads.
1084
1085Note that simple Filter() and Define() transformations will inherently satisfy this requirement: Filter() / Define()
1086expressions will often be *pure* in the functional programming sense (no side-effects, no dependency on external state),
1087which eliminates all risks of race conditions.
1088
1089In order to facilitate writing of thread-safe operations, some RDataFrame features such as Foreach(), Define() or \link ROOT::RDF::RResultPtr::OnPartialResult OnPartialResult()\endlink
1090offer thread-aware counterparts (ForeachSlot(), DefineSlot(), \link ROOT::RDF::RResultPtr::OnPartialResultSlot OnPartialResultSlot()\endlink): their only difference is that they
1091will pass an extra `slot` argument (an unsigned integer) to the user-defined expression. When calling user-defined code
1092concurrently, RDataFrame guarantees that different threads will see different values of the `slot` parameter,
1093where `slot` will be a number between 0 and `GetNSlots() - 1`. Note that not all slot numbers may be reached, or some slots may be reached more often depending on how computation tasks are scheduled.
Within a single slot, computations run sequentially and events are processed sequentially.
1095Note that the same slot might be associated to different threads over the course of a single event loop, but two threads
1096will never receive the same slot at the same time.
1097This extra parameter might facilitate writing safe parallel code by having each thread write/modify a different
1098processing slot, e.g. a different element of a list. See [here](#generic-actions) for an example usage of ForeachSlot().
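
As a minimal sketch (assuming a float column "x"), each thread can accumulate into its own slot of a vector:

~~~{.cpp}
std::vector<double> sums(df.GetNSlots(), 0.);
// no race conditions: a given slot index is used by at most one thread at a time
df.ForeachSlot([&sums](unsigned int slot, float x) { sums[slot] += x; }, {"x"});
double total = std::accumulate(sums.begin(), sums.end(), 0.);
~~~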
1099
1100### Parallel execution of multiple RDataFrame event loops
1101A complex analysis may require multiple separate RDataFrame computation graphs to produce all desired results. This poses the challenge that the
1102event loops of each computation graph can be parallelized, but the different loops run sequentially, one after the other.
1103On many-core architectures it might be desirable to run different event loops concurrently to improve resource usage.
1104ROOT::RDF::RunGraphs() allows running multiple RDataFrame event loops concurrently:
1105~~~{.cpp}
1106ROOT::EnableImplicitMT();
1107ROOT::RDataFrame df1("tree1", "f1.root");
1108ROOT::RDataFrame df2("tree2", "f2.root");
1109auto histo1 = df1.Histo1D("x");
1110auto histo2 = df2.Histo1D("y");
1111
1112// just accessing result pointers, the event loops of separate RDataFrames run one after the other
1113histo1->Draw(); // runs first multi-thread event loop
1114histo2->Draw(); // runs second multi-thread event loop
1115
1116// alternatively, with ROOT::RDF::RunGraphs, event loops for separate computation graphs can run concurrently
1117ROOT::RDF::RunGraphs({histo1, histo2});
1118histo1->Draw(); // results can then be used as usual
1119~~~
1120
1121### Performance considerations
1122
1123To obtain the maximum performance out of RDataFrame, make sure to avoid just-in-time compiled versions of transformations and actions if at all possible.
1124For instance, `Filter("x > 0")` requires just-in-time compilation of the corresponding C++ logic, while the equivalent `Filter([](float x) { return x > 0.; }, {"x"})` does not.
1125Similarly, `Histo1D("x")` requires just-in-time compilation after the type of `x` is retrieved from the dataset, while `Histo1D<float>("x")` does not; the latter spelling
1126should be preferred for performance-critical applications.
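
As a sketch (assuming a float column "x"), the two spellings compare as follows:

~~~{.cpp}
// just-in-time compiled: convenient, but code generation happens at runtime
auto h1 = df.Filter("x > 0").Histo1D("x");
// fully compiled: types and logic are known at compile time
auto h2 = df.Filter([](float x) { return x > 0.f; }, {"x"}).Histo1D<float>("x");
~~~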
1127
1128Python applications cannot easily specify template parameters or pass C++ callables to RDataFrame.
1129See [Python interface](classROOT_1_1RDataFrame.html#python) for possible ways to speed up hot paths in this case.
1130
1131Just-in-time compilation happens once, right before starting an event loop. To reduce the runtime cost of this step, make sure to book all operations *for all RDataFrame computation graphs*
1132before the first event loop is triggered: just-in-time compilation will happen once for all code required to be generated up to that point, also across different computation graphs.
1133
1134Also make sure not to count the just-in-time compilation time (which happens once before the event loop and does not depend on the size of the dataset) as part of the event loop runtime (which scales with the size of the dataset). RDataFrame has an experimental logging feature that simplifies measuring the time spent in just-in-time compilation and in the event loop (as well as providing some more interesting information). See [Activating RDataFrame execution logs](\ref rdf-logging).
1135
1136### Memory usage
1137
1138There are two reasons why RDataFrame may consume more memory than expected.
1139
1140#### 1. Histograms in multi-threaded mode
In multithreaded runs, each worker thread creates a local copy of the histograms, which, e.g. in the case of many (possibly multi-dimensional) histograms with fine binning, can result in significant memory consumption during the event loop.
1142The thread-local copies of the results are destroyed when the final result is produced. Reducing the number of threads or using coarser binning will reduce the memory usage.
1143For three-dimensional histograms, the number of clones can be reduced using ROOT::RDF::Experimental::ThreadsPerTH3().
1144~~~{.cpp}
1145#include "ROOT/RDFHelpers.hxx"
1146
1147// Make four threads share a TH3 instance:
1148ROOT::RDF::Experimental::ThreadsPerTH3(4);
1149ROOT::RDataFrame rdf(...);
1150~~~
1151
When TH3s are shared among threads, they will either be filled under a lock (slowing down the execution) or using atomics if C++20 is available. The latter is significantly faster.
1153The best value for `ThreadsPerTH3` depends on the computation graph that runs. Use lower numbers such as 4 for speed and higher memory consumption, and higher numbers such as 16 for
1154slower execution and memory savings.
1155
1156#### 2. Just-in-time compilation
1157Secondly, just-in-time compilation of string expressions or non-templated actions (see the previous paragraph) causes Cling, ROOT's C++ interpreter, to allocate some memory for the generated code that is only released at the end of the application. This commonly results in memory usage creep in long-running applications that create many RDataFrames one after the other. Possible mitigations include creating and running each RDataFrame event loop in a sub-process, or booking all operations for all different RDataFrame computation graphs before the first event loop is triggered, so that the interpreter is invoked only once for all computation graphs:
1158
1159~~~{.cpp}
1160// assuming df1 and df2 are separate computation graphs, do:
1161auto h1 = df1.Histo1D("x");
1162auto h2 = df2.Histo1D("y");
1163h1->Draw(); // we just-in-time compile everything needed by df1 and df2 here
1164h2->Draw("SAME");
1165
1166// do not:
1167auto h1 = df1.Histo1D("x");
1168h1->Draw(); // we just-in-time compile here
1169auto h2 = df2.Histo1D("y");
1170h2->Draw("SAME"); // we just-in-time compile again here, as the second Histo1D call is new
1171~~~
1172
1173\anchor more-features
1174## More features
1175Here is a list of the most important features that have been omitted from the "Crash course" for brevity.
1176You don't need to read all of these to start using RDataFrame, but they can save you typing time and runtime.
1177
1178\anchor systematics
1179### Systematic variations
1180
1181Starting from ROOT v6.26, RDataFrame provides a flexible syntax to define systematic variations.
1182This is done in two steps: a) register variations for one or more existing columns using Vary() and b) extract variations
1183of normal RDataFrame results using \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()". In between these steps, no other change
1184to the analysis code is required: the presence of systematic variations for certain columns is automatically propagated
1185through filters, defines and actions, and RDataFrame will take these dependencies into account when producing varied
1186results. \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" is included in the header `ROOT/RDFHelpers.hxx`. Compiled C++ programs must include this header
1187explicitly; this is not required for ROOT macros.
1188
1189An example usage of Vary() and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in C++:
1190
1191~~~{.cpp}
1192auto nominal_hx =
1193 df.Vary("pt", "ROOT::RVecD{pt*0.9f, pt*1.1f}", {"down", "up"})
1194 .Filter("pt > pt_cut")
1195 .Define("x", someFunc, {"pt"})
1196 .Histo1D<float>("x");
1197
1198// request the generation of varied results from the nominal_hx
1199ROOT::RDF::Experimental::RResultMap<TH1D> hx = ROOT::RDF::Experimental::VariationsFor(nominal_hx);
1200
1201// the event loop runs here, upon first access to any of the results or varied results:
1202hx["nominal"].Draw(); // same effect as nominal_hx->Draw()
1203hx["pt:down"].Draw("SAME");
1204hx["pt:up"].Draw("SAME");
1205~~~
1206
1207A shorter expression syntax is allowed for convenience (see the docs of the Vary overloads for more details):
1208
1209~~~{.cpp}
1210auto nominal_hx =
1211 df.Vary("pt", "{pt*0.9f, pt*1.1f}", {"down", "up"})
1212// The rest is the same as above
1213~~~
1214
1215A list of variation "tags" is passed as the last argument to Vary(). The tags give names to the varied values that are returned
1216as elements of an RVec of the appropriate C++ type. The number of variation tags must correspond to the number of elements of
1217this RVec (2 in the example above: the first element will correspond to the tag "down", the second
1218to the tag "up"). The _full_ variation name will be composed of the varied column name and the variation tags (e.g.
1219"pt:down", "pt:up" in this example). Python usage looks similar.
1220
1221Note how we use the "pt" column as usual in the Filter() and Define() calls and we simply use "x" as the value to fill
1222the resulting histogram. To produce the varied results, RDataFrame will automatically execute the Filter and Define
1223calls for each variation and fill the histogram with values and cuts that depend on the variation.
1224
1225There is no limitation to the complexity of a Vary() expression. Just like for the Define() and Filter() calls, users are
1226not limited to string expressions but they can also pass any valid C++ callable, including lambda functions and
1227complex functors. The callable can be applied to zero or more existing columns and it will always receive their
1228_nominal_ value in input.
1229
1230\note - Currently, VariationsFor() and RResultMap are in the `ROOT::RDF::Experimental` namespace, to indicate that these
1231 interfaces can still evolve and improve based on user feedback. Please send feedback on https://github.com/root-project/root.
1232
1233\note - The results of Display() cannot be varied (i.e. it is not possible to call \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()").
1234
1235See the Vary() method for more information and [this tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8C.html)
1236for an example usage of Vary and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in the analysis.
1237
1238
1239\anchor snapshot-with-variations
1240#### Snapshot with Variations
1241To combine Vary() and Snapshot(), the \ref ROOT::RDF::RSnapshotOptions::fIncludeVariations "fIncludeVariations" flag in \ref ROOT::RDF::RSnapshotOptions
1242"RSnapshotOptions" has to be set, as demonstrated in the following example:
1243
1244~~~{.cpp}
1245 ROOT::RDF::RSnapshotOptions options;
1246 options.fIncludeVariations = true;
1247
1248 auto s = ROOT::RDataFrame(N)
1249 .Define("x", [](ULong64_t e) -> float { return 10.f * e; }, {"rdfentry_"})
1250 .Vary(
1251 "x", [](float x) { return ROOT::RVecF{x - 0.5f, x + 0.5f}; }, {"x"}, 2, "xVar")
1252 .Define("y", [](float x) -> double { return -1. * x; }, {"x"})
1253              .Filter(cuts, {"x", "y"}) // "cuts" is some user-defined callable taking (float, double)
1254 .Snapshot("t", filename, {"x", "y"}, options);
1255~~~
1256
1257This snapshot action will create an output file with a dataset that includes the columns requested in the Snapshot() call
1258and their corresponding variations:
1259
1260| Row | `x` | `y` | `x__xVar_0` | `y__xVar_0` | `x__xVar_1` | `y__xVar_1` | `R_rdf_mask_t_0`|
1261|------|----------------:|----------------:|----------------:|----------------:|----------------:|----------------:|----------------:|
1262| 0 | 0 | -0 | -0.5 | 0.5 | 0.5 | -0.5 | 111 |
1263| 1 | 10 | -10 | 9.5 | -9.5 | 10.5 | -10.5 | 111 |
1264| 2 | 20 | -20 | 19.5 | -19.5 | 20.5 | -20.5 | 111 |
1265| 3 | 30 | -30 | 29.5 | -29.5 | 30.5 | -30.5 | 111 |
1266| 4 | 40 | -40 | 39.5 | -39.5 | 40.5 | -40.5 | 111 |
1267| 5 | 0 | 0 | 49.5 | -49.5 | 0 | 0 | 010 |
1268| 6 |`* Not written *`| | | | | | |
1269| 7 | 0 | 0 | 0 | 0 | 70.5 | -70.5 | 100 |
1270
1271"x" and "y" are the nominal values, and their variations are snapshot as `<ColumnName>__<VariationName>_<VariationTag>`.
1272
1273When Filters are employed, some variations might not pass the selection cuts (like rows 5 and 7 in the table above).
1274In that case, RDataFrame will snapshot the filtered columns in a memory-efficient way, writing zero into the memory of fundamental types or a
1275default-constructed object in the case of classes. If no variation passes the filters, as in row 6, the entire event is omitted from the snapshot.
1276
1277To tell apart a genuine `0` (like `x` in row 0) from a case where the nominal value or a variation didn't pass a selection,
1278RDataFrame writes a bitmask for each event, see the last column of the table above. Every bit indicates whether its
1279associated column is valid. The bitmask is implemented as a 64-bit `std::bitset` in memory, written to the output
1280dataset as a `std::uint64_t`. For every 64 columns, a new bitmask column is added to the output dataset.
1281
1282For each column that gets varied, the nominal and all variation columns are each assigned a bit to denote whether their
1283entries are valid. A mapping of column names to the corresponding bitmask is placed in the same file as the output
1284dataset, with a name that follows the pattern `"R_rdf_column_to_bitmask_mapping_<NAME_OF_THE_DATASET>"`. It is of type
1285`std::unordered_map<std::string, std::pair<std::string, unsigned int>>`, and maps a column name to the name of the
1286bitmask column and the index of the relevant bit. For example, in the same file as the dataset "Events" there would be
1287an object named `R_rdf_column_to_bitmask_mapping_Events`, which could for example describe a connection such as:
1288
1289~~~
1290muon_pt --> (R_rdf_mask_Events_0, 42)
1291~~~
1292
1293which means that the validity of the entries in `muon_pt` is established by the bit `42` in the bitmask found in the
1294column `R_rdf_mask_Events_0`.
1295
1296When RDataFrame opens a file, it checks for the existence of this mapping between columns and bitmasks, and loads it automatically if found. As such,
1297RDataFrame makes the treatment of these bitmasks completely transparent to the user.
1298
1299If certain values are labeled invalid by the corresponding bit, reading them yields a missing value. The semantics of such a scenario follow the
1300rules described in the \ref missing-values "section on dealing with missing values" and can be dealt with accordingly.
1301
1302\note Snapshot with variations is currently restricted to single-threaded TTree snapshots.
1303
1304
1305#### Varying multiple columns in lockstep
1306
1307In the following Python snippet we use the Vary() signature that allows varying multiple columns simultaneously or
1308"in lockstep":
1309
1310~~~{.python}
1311df.Vary(["pt", "eta"],
1312 "RVec<RVecF>{{pt*0.9, pt*1.1}, {eta*0.9, eta*1.1}}",
1313 variationTags=["down", "up"],
1314 variationName="ptAndEta")
1315~~~
1316
1317The expression returns an RVec of two RVecs: each inner vector contains the varied values for one column. The
1318inner vectors follow the same ordering as the column names passed as the first argument. Besides the variation tags, in
1319this case we also have to pass the variation name explicitly (here: "ptAndEta"), as there is no single varied column whose name could serve as a default.
1320
1321The above call will produce variations "ptAndEta:down" and "ptAndEta:up".
1322
1323#### Combining multiple variations
1324
1325Even if a result depends on multiple variations, only one variation is applied at a time, i.e. there will be no result produced
1326by applying multiple systematic variations at the same time.
1327For example, in the following example snippet, the RResultMap instance `all_h` will contain keys "nominal", "pt:down",
1328"pt:up", "eta:0", "eta:1", but no "pt:up&&eta:0" or similar:
1329
1330~~~{.cpp}
1331auto df = _df.Vary("pt",
1332 "ROOT::RVecD{pt*0.9, pt*1.1}",
1333 {"down", "up"})
1334 .Vary("eta",
1335 [](float eta) { return RVecF{eta*0.9f, eta*1.1f}; },
1336 {"eta"},
1337 2);
1338
1339auto nom_h = df.Histo2D(histoModel, "pt", "eta");
1340auto all_hs = VariationsFor(nom_h);
1341all_hs.GetKeys(); // returns {"nominal", "pt:down", "pt:up", "eta:0", "eta:1"}
1342~~~
1343
1344Note how we passed the integer `2` instead of a list of variation tags to the second Vary() invocation: this is a
1345shorthand that automatically generates tags 0 to N-1 (in this case 0 and 1).
1346
1347
1348\anchor rnode
1349### RDataFrame objects as function arguments and return values
1350RDataFrame variables/nodes are relatively cheap to copy and it's possible to both pass them to (or move them into)
1351functions and to return them from functions. However, in general each dataframe node will have a different C++ type,
1352which includes all available compile-time information about what that node does. One way to cope with this complication
1353is to use template functions and/or C++14 auto return types:
1354~~~{.cpp}
1355template <typename RDF>
1356auto ApplySomeFilters(RDF df)
1357{
1358 return df.Filter("x > 0").Filter([](int y) { return y < 0; }, {"y"});
1359}
1360~~~
1361
1362A possibly simpler, C++11-compatible alternative is to take advantage of the fact that any dataframe node can be
1363converted (implicitly or via an explicit cast) to the common type ROOT::RDF::RNode:
1364~~~{.cpp}
1365// a function that conditionally adds a Range to an RDataFrame node.
1366RNode MaybeAddRange(RNode df, bool mustAddRange)
1367{
1368 return mustAddRange ? df.Range(1) : df;
1369}
1370// use as :
1371ROOT::RDataFrame df(10);
1372auto maybeRangedDF = MaybeAddRange(df, true);
1373~~~
1374
1375The conversion to ROOT::RDF::RNode is cheap, but it will introduce an extra virtual call during the RDataFrame event
1376loop (in most cases, the resulting performance impact should be negligible). Python users can perform the conversion with the helper function `ROOT.RDF.AsRNode`.
1377
1378\anchor RDFCollections
1379### Storing RDataFrame objects in collections
1380
1381ROOT::RDF::RNode also makes it simple to store RDataFrame nodes in collections, e.g. a `std::vector<RNode>` or a `std::map<std::string, RNode>`:
1382
1383~~~{.cpp}
1384std::vector<ROOT::RDF::RNode> dfs;
1385dfs.emplace_back(ROOT::RDataFrame(10));
1386dfs.emplace_back(dfs[0].Define("x", "42.f"));
1387~~~
1388
1389\anchor callbacks
1390### Executing callbacks every N events
1391It's possible to schedule execution of arbitrary functions (callbacks) during the event loop.
1392Callbacks can be used e.g. to inspect partial results of the analysis while the event loop is running,
1393drawing a partially-filled histogram every time a certain number of new entries is processed, or
1394displaying a progress bar while the event loop runs.
1395
1396For example one can draw an up-to-date version of a result histogram every 100 entries like this:
1397~~~{.cpp}
1398auto h = df.Histo1D("x");
1399TCanvas c("c","x hist");
1400h.OnPartialResult(100, [&c](TH1D &h_) { c.cd(); h_.Draw(); c.Update(); });
1401// event loop runs here, this final `Draw` is executed after the event loop is finished
1402h->Draw();
1403~~~
1404
1405Callbacks are registered to a ROOT::RDF::RResultPtr and must be callables that take a reference to the result type as argument
1406and return nothing. RDataFrame will invoke registered callbacks passing partial action results as arguments to them
1407(e.g. a histogram filled with a part of the selected events).
1408
1409Read more on ROOT::RDF::RResultPtr::OnPartialResult() and ROOT::RDF::RResultPtr::OnPartialResultSlot().
1410
1411\anchor default-branches
1412### Default column lists
1413When constructing an RDataFrame object, it is possible to specify a **default column list** for your analysis, in the
1414usual form of a list of strings representing branch/column names. The default column list will be used as a fallback
1415whenever a list specific to the transformation/action is not present. RDataFrame will take as many of these columns as
1416needed, ignoring trailing extra names if present.
1417~~~{.cpp}
1418// use "b1" and "b2" as default columns
1419ROOT::RDataFrame d1("myTree", "file.root", {"b1","b2"});
1420auto h = d1.Filter([](int b1, int b2) { return b1 > b2; }) // will act on "b1" and "b2"
1421 .Histo1D(); // will act on "b1"
1422
1423// just one default column this time
1424ROOT::RDataFrame d2("myTree", "file.root", {"b1"});
1425auto d2f = d2.Filter([](double b2) { return b2 > 0; }, {"b2"}); // we can still specify non-default column lists
1426auto min = d2f.Min(); // returns the minimum value of "b1" for the filtered entries
1427auto vals = d2f.Take<double>(); // return the values for all entries passing the selection as a vector
1428~~~
1429
1430\anchor helper-cols
1431### Special helper columns: rdfentry_ and rdfslot_
1432Every instance of RDataFrame is created with two special columns called `rdfentry_` and `rdfslot_`. The `rdfentry_`
1433column is of type `ULong64_t` and it holds the current entry number while `rdfslot_` is an `unsigned int`
1434holding the index of the current data processing slot.
1435For backwards compatibility reasons, the names `tdfentry_` and `tdfslot_` are also accepted.
1436These columns are ignored by operations such as [Cache](classROOT_1_1RDF_1_1RInterface.html#aaaa0a7bb8eb21315d8daa08c3e25f6c9)
1437or [Snapshot](classROOT_1_1RDF_1_1RInterface.html#a233b7723e498967f4340705d2c4db7f8).
1438
1439\warning Note that in multi-thread event loops the values of `rdfentry_` _do not_ correspond to what would be the entry numbers
1440of a TChain constructed over the same set of ROOT files, as the entries are processed in an unspecified order.
1441
1442\anchor jitting
1443### Just-in-time compilation: column type inference and explicit declaration of column types
1444C++ is a statically typed language: all types must be known at compile-time. This includes the types of the TTree
1445branches we want to work on. For filters, defined columns and some of the actions, **column types are deduced from the
1446signature** of the relevant filter function/temporary column expression/action function:
1447~~~{.cpp}
1448// here b1 is deduced to be `int` and b2 to be `double`
1449df.Filter([](int x, double y) { return x > 0 && y < 0.; }, {"b1", "b2"});
1450~~~
1451If we specify an incorrect type for one of the columns, an exception with an informative message will be thrown at
1452runtime, when the column value is actually read from the dataset: RDataFrame detects type mismatches. The same would
1453happen if we swapped the order of "b1" and "b2" in the column list passed to Filter().
1454
1455Certain actions, on the other hand, do not take a function as argument (e.g. Histo1D()), so we cannot deduce the type of
1456the column at compile-time. In this case **RDataFrame infers the type of the column** from the TTree itself. This
1457is why we never needed to specify the column types for all actions in the above snippets.
1458
1459When the column type is not a common one such as `int`, `double`, `char` or `float` it is nonetheless good practice to
1460specify it as a template parameter to the action itself, like this:
1461~~~{.cpp}
1462df.Histo1D("b1"); // OK, the type of "b1" is deduced at runtime
1463df.Min<MyNumber_t>("myObject"); // OK, we explicitly specify that "myObject" is of type `MyNumber_t`
1464~~~
1465
1466Deducing types at runtime requires the just-in-time compilation of the relevant actions, which has a small runtime
1467overhead, so specifying the type of the columns as template parameters to the action is good practice when performance is a goal.
1468
1469When strings are passed as expressions to Filter() or Define(), fundamental types are passed as constants. This avoids certain common mistakes such as typing `x = 0` rather than `x == 0`:
1470
1471~~~{.cpp}
1472// this throws an error (note the typo)
1473df.Define("x", "0").Filter("x = 0");
1474~~~
1475
1476\anchor generic-actions
1477### User-defined custom actions
1478RDataFrame strives to offer a comprehensive set of standard actions that can be performed on each event. At the same
1479time, it allows users to inject their own action code to perform arbitrarily complex data reductions.
1480
1481#### Implementing custom actions with Book()
1482
1483Through the Book() method, users can implement a custom action and have access to the same features
1484that built-in RDataFrame actions have, e.g. hooks to events related to the start, end and execution of the
1485event loop, or the possibility to return a lazy RResultPtr to an arbitrary type of result:
1486
1487~~~{.cpp}
1488#include <ROOT/RDataFrame.hxx>
1489#include <memory>
#include <iostream> // std::cout
#include <numeric>  // std::accumulate
1490
1491class MyCounter : public ROOT::Detail::RDF::RActionImpl<MyCounter> {
1492 std::shared_ptr<int> fFinalResult = std::make_shared<int>(0);
1493 std::vector<int> fPerThreadResults;
1494
1495public:
1496 // We use a public type alias to advertise the type of the result of this action
1497 using Result_t = int;
1498
1499 MyCounter(unsigned int nSlots) : fPerThreadResults(nSlots) {}
1500
1501 // Called before the event loop to retrieve the address of the result that will be filled/generated.
1502 std::shared_ptr<int> GetResultPtr() const { return fFinalResult; }
1503
1504 // Called at the beginning of the event loop.
1505 void Initialize() {}
1506
1507 // Called at the beginning of each processing task.
1508 void InitTask(TTreeReader *, unsigned int) {}
1509
1510 /// Called at every entry.
1511 void Exec(unsigned int slot)
1512 {
1513 fPerThreadResults[slot]++;
1514 }
1515
1516 // Called at the end of the event loop.
1517 void Finalize()
1518 {
1519 *fFinalResult = std::accumulate(fPerThreadResults.begin(), fPerThreadResults.end(), 0);
1520 }
1521
1522 // Called by RDataFrame to retrieve the name of this action.
1523 std::string GetActionName() const { return "MyCounter"; }
1524};
1525
1526int main() {
1527 ROOT::RDataFrame df(10);
1528 ROOT::RDF::RResultPtr<int> resultPtr = df.Book<>(MyCounter{df.GetNSlots()}, {});
1529 // The GetValue call triggers the event loop
1530 std::cout << "Number of processed entries: " << resultPtr.GetValue() << std::endl;
1531}
1532~~~
1533
1534See the Book() method for more information and [this tutorial](https://root.cern/doc/master/df018__customActions_8C.html)
1535for a more complete example.
1536
1537#### Injecting arbitrary code in the event loop with Foreach() and ForeachSlot()
1538
1539Foreach() takes a callable (lambda expression, free function, functor...) and a list of columns and
1540executes the callable on the values of those columns for each event that passes all upstream selections.
1541It can be used to perform actions that are not already available in the interface. For example, the following snippet
1542evaluates the root mean square of column "x":
1543~~~{.cpp}
1544// Single-thread evaluation of RMS of column "x" using Foreach
1545double sumSq = 0.;
1546unsigned int n = 0;
1547df.Foreach([&sumSq, &n](double x) { ++n; sumSq += x*x; }, {"x"});
1548std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1549~~~
1550In multi-thread runs, users are responsible for the thread-safety of the expression passed to Foreach():
1551each thread will execute the expression concurrently.
1552The code above would need to employ some resource protection mechanism to ensure non-concurrent writing of `sumSq` and `n`; but
1553this is probably too much head-scratch for such a simple operation.
1554
1555ForeachSlot() can help in this situation. It is an alternative version of Foreach() for which the function takes an
1556additional "processing slot" parameter besides the columns it should be applied to. RDataFrame
1557guarantees that ForeachSlot() will invoke the user expression with different `slot` parameters for different concurrent
1558executions (see [Special helper columns: rdfentry_ and rdfslot_](\ref helper-cols) for more information on the slot parameter).
1559We can take advantage of ForeachSlot() to evaluate a thread-safe root mean square of column "x":
1560~~~{.cpp}
1561// Thread-safe evaluation of RMS of column "x" using ForeachSlot
1562ROOT::EnableImplicitMT();
1563const unsigned int nSlots = df.GetNSlots();
1564std::vector<double> sumSqs(nSlots, 0.);
1565std::vector<unsigned int> ns(nSlots, 0);
1566
1567df.ForeachSlot([&sumSqs, &ns](unsigned int slot, double x) { sumSqs[slot] += x*x; ns[slot] += 1; }, {"x"});
1568double sumSq = std::accumulate(sumSqs.begin(), sumSqs.end(), 0.); // sum all squares
1569unsigned int n = std::accumulate(ns.begin(), ns.end(), 0); // sum all counts
1570std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1571~~~
1572Notice how we created one accumulator variable for each processing slot and later merged the partial results via `std::accumulate`.
1573
1574
1575\anchor friends
1576### Dataset joins with friend trees
1577
1578Vertically concatenating multiple trees that have the same columns (creating a logical dataset with the same columns and
1579more rows) is trivial in RDataFrame: just pass the tree name and a list of file names to RDataFrame's constructor, or create a TChain
1580out of the desired trees and pass that to RDataFrame.
1581
1582Horizontal concatenation of trees or chains (creating a logical dataset with the same number of rows and the union of the
1583columns of multiple trees) leverages TTree's "friend" mechanism.
1584
1585Simple joins of trees that do not have the same number of rows are also possible with indexed friend trees (see below).
1586
1587To use friend trees in RDataFrame, set up trees with the appropriate relationships and then instantiate an RDataFrame
1588with the main tree:
1589
1590~~~{.cpp}
1591TTree mainTree([...]);
1592TTree friendTree([...]);
1593mainTree.AddFriend(&friendTree, "myFriend");
1594
1595ROOT::RDataFrame df(mainTree);
1596auto df2 = df.Filter("myFriend.MyCol == 42");
1597~~~
1598
1599The same applies for TChains. Columns coming from the friend trees can be referred to by their full name, like in the example above,
1600or the friend tree name can be omitted in case the column name is not ambiguous (e.g. "MyCol" could be used instead of
1601"myFriend.MyCol" in the example above if there is no column "MyCol" in the main tree).
1602
1603\note A common source of confusion is that trees that are written out from a multi-thread Snapshot() call will have their
1604 entries (block-wise) shuffled with respect to the original tree. Such trees cannot be used as friends of the original
1605 one: rows will be mismatched.
1606
1607Indexed friend trees provide a way to perform simple joins of multiple trees over a common column.
1608When a certain entry in the main tree (or chain) is loaded, the friend trees (or chains) will load the entry whose
1609"index" columns have the same value as in the main tree. For example, in Python:
1610
1611~~~{.py}
1612main_tree = ...
1613aux_tree = ...
1614
1615# If a friend tree has an index on `commonColumn`, when the main tree loads
1616# a given row, it also loads the row of the friend tree that has the same
1617# value of `commonColumn`
1618aux_tree.BuildIndex("commonColumn")
1619
1620main_tree.AddFriend(aux_tree)
1621
1622df = ROOT.RDataFrame(main_tree)
1623~~~
1624
1625RDataFrame supports indexed friend TTrees from ROOT v6.24 in single-thread mode and from v6.28/02 in multi-thread mode.
1626
1627\anchor other-file-formats
1628### Reading data formats other than ROOT trees
1629RDataFrame can be interfaced with RDataSources. The ROOT::RDF::RDataSource interface defines an API that RDataFrame can use to read arbitrary columnar data formats.
1630
1631RDataFrame calls into concrete RDataSource implementations to retrieve information about the data, retrieve (thread-local) readers or "cursors" for selected columns
1632and to advance the readers to the desired data entry.
1633Some predefined RDataSources are natively provided by ROOT, such as ROOT::RDF::RCsvDS, which allows reading comma-separated values (CSV) files:
1634~~~{.cpp}
1635auto tdf = ROOT::RDF::FromCSV("MuRun2010B.csv");
1636auto filteredEvents =
1637 tdf.Filter("Q1 * Q2 == -1")
1638 .Define("m", "sqrt(pow(E1 + E2, 2) - (pow(px1 + px2, 2) + pow(py1 + py2, 2) + pow(pz1 + pz2, 2)))");
1639auto h = filteredEvents.Histo1D("m");
1640h->Draw();
1641~~~
1642
1643See also FromNumpy (Python-only), FromRNTuple(), FromArrow(), FromSqlite().
1644
1645\anchor callgraphs
1646### Computation graphs (storing and reusing sets of transformations)
1647
1648As we saw, transformed dataframes can be stored as variables and reused multiple times to create modified versions of the dataset. This implicitly defines a **computation graph** in which
1649several paths of filtering/creation of columns are executed simultaneously, and finally aggregated results are produced.
1650
1651RDataFrame detects when several actions use the same filter or the same defined column, and **only evaluates each
1652filter or defined column once per event**, regardless of how many times that result is used down the computation graph.
1653Objects read from each column are **built once and never copied**, for maximum efficiency.
1654When "upstream" filters are not passed, subsequent filters, temporary column expressions and actions are not evaluated,
1655so it might be advisable to put the strictest filters first in the graph.
1656
1657\anchor representgraph
1658### Visualizing the computation graph
1659It is possible to print the computation graph from any node to obtain a [DOT (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) representation either on the standard output
1660or in a file.
1661
1662Invoking ROOT::RDF::SaveGraph() on any node that is not the head node prints the computation graph of the branch
1663that node belongs to. Invoking it on the head node prints the entire computation graph.
1664
1665Here is an example of its usage:
1666~~~{.cpp}
1667// First, a sample computational graph is built
1668ROOT::RDataFrame df("tree", "f.root");
1669
1670auto df2 = df.Define("x", []() { return 1; })
1671 .Filter("col0 % 1 == col0")
1672             .Filter([](int b1) { return b1 < 2; }, {"cut1"})
1673 .Define("y", []() { return 1; });
1674
1675auto count = df2.Count();
1676
1677// Prints the graph to the mydot.dot file in the current directory
1678ROOT::RDF::SaveGraph(df, "./mydot.dot");
1679// Prints the graph to standard output
1680ROOT::RDF::SaveGraph(df);
1681~~~
1682
1683The generated graph can be rendered using one of the graphviz filters, e.g. `dot`. For instance, the image below can be generated with the following command:
1684~~~{.sh}
1685$ dot -Tpng computation_graph.dot -ocomputation_graph.png
1686~~~
1687
1688\image html RDF_Graph2.png
1689
1690\anchor rdf-logging
1691### Activating RDataFrame execution logs
1692
1693RDataFrame has experimental support for verbose logging of the event loop runtimes and other interesting related information. It is activated as follows:
1694~~~{.cpp}
1695#include <ROOT/RLogger.hxx>
1696
1697// this increases RDF's verbosity level as long as the `verbosity` variable is in scope
1698auto verbosity = ROOT::RLogScopedVerbosity(ROOT::Detail::RDF::RDFLogChannel(), ROOT::ELogLevel::kInfo);
1699~~~
1700
1701or in Python:
1702~~~{.python}
1703import ROOT
1704
1705verbosity = ROOT.RLogScopedVerbosity(ROOT.Detail.RDF.RDFLogChannel(), ROOT.ELogLevel.kInfo)
1706~~~
1707
1708More information (e.g. start and end of each multi-thread task) is printed using `ELogLevel.kDebug` and even more
1709(e.g. a full dump of the generated code that RDataFrame just-in-time-compiles) using `ELogLevel.kDebug+10`.
1710
1711\anchor rdf-from-spec
1712### Creating an RDataFrame from a dataset specification file
1713
1714RDataFrame can be created using a dataset specification JSON file:
1715
1716~~~{.python}
1717import ROOT
1718
1719df = ROOT.RDF.Experimental.FromSpec("spec.json")
1720~~~
1721
1722The input dataset specification JSON file needs to be provided by the user and describes all necessary samples and
1723their associated metadata. The main required key is "samples" (at least one sample is needed) and the
1724required sub-keys for each sample are "trees" and "files". Additionally, one can specify a metadata dictionary for each
1725sample in the "metadata" key.
1726
1727A simple example for the formatting of the specification in the JSON file is the following:
1728
1729~~~{.cpp}
1730{
1731 "samples": {
1732 "sampleA": {
1733 "trees": ["tree1", "tree2"],
1734 "files": ["file1.root", "file2.root"],
1735 "metadata": {
1736 "lumi": 10000.0,
1737 "xsec": 1.0,
1738        "sample_category": "data"
1739 }
1740 },
1741 "sampleB": {
1742 "trees": ["tree3", "tree4"],
1743 "files": ["file3.root", "file4.root"],
1744 "metadata": {
1745 "lumi": 0.5,
1746 "xsec": 1.5,
1747        "sample_category": "MC_background"
1748 }
1749 }
1750 }
1751}
1752~~~
1753
1754The metadata information from the specification file can then be accessed using the DefinePerSample() function.
1755For example, to access luminosity information (stored as a double):
1756
1757~~~{.python}
1758df.DefinePerSample("lumi", 'rdfsampleinfo_.GetD("lumi")')
1759~~~
1760
1761or sample_category information (stored as a string):
1762
1763~~~{.python}
1764df.DefinePerSample("sample_category", 'rdfsampleinfo_.GetS("sample_category")')
1765~~~
1766
1767or the sample name directly:
1768
1769~~~{.python}
1770df.DefinePerSample("name", "rdfsampleinfo_.GetSampleName()")
1771~~~
1772
1773An example usage of FromSpec() is available in the tutorial df106_HiggsToFourLeptons.py, which also
1774provides a corresponding example JSON file for the dataset specification.
1775
1776\anchor progressbar
1777### Adding a progress bar
1778
1779A progress bar showing the processed event statistics can be added to any RDataFrame program.
1780The event statistics include the elapsed time, the currently processed file, the number of processed events, the event processing rate
1781and an estimated remaining time (per file being processed). They are recorded and printed to the terminal every m events and every
1782n seconds (by default m = 1000 and n = 1). The ProgressBar can also be added when multi-threading (MT) is enabled.
1783
1784ProgressBar is added after creating the dataframe object (df):
1785~~~{.cpp}
1786ROOT::RDataFrame df("tree", "file.root");
1787ROOT::RDF::Experimental::AddProgressBar(df);
1788~~~
1789
1790Alternatively, RDataFrame can be cast to an RNode first, giving the user more flexibility.
1791For example, AddProgressBar can then be called on any computational node, such as a Filter or Define, not only on the head node,
1792with no change to the function itself (please see the [Python interface](classROOT_1_1RDataFrame.html#python)
1793section for appropriate usage in Python):
1794~~~{.cpp}
1795ROOT::RDataFrame df("tree", "file.root");
1796auto df_1 = ROOT::RDF::RNode(df.Filter("x>1"));
1797ROOT::RDF::Experimental::AddProgressBar(df_1);
1798~~~
1799Examples of implemented progress bars can be seen by running [Higgs to Four Lepton tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8py_source.html) and [Dimuon tutorial](https://root.cern/doc/master/df102__NanoAODDimuonAnalysis_8C.html).
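The "every m events and every n seconds" throttling described above can be sketched generically (plain Python, illustrative only, not ROOT's actual ProgressBar implementation):

```python
import time

# Generic sketch of event/time throttling: emit a report every `m` events
# or every `n` seconds, whichever happens first.
def throttled_reports(total_events, m=1000, n=1.0):
    reports = []
    last_event = 0
    last_time = time.monotonic()
    for processed in range(1, total_events + 1):
        now = time.monotonic()
        if processed - last_event >= m or now - last_time >= n:
            reports.append(processed)  # ROOT would print rate, ETA, etc. here
            last_event, last_time = processed, now
    return reports

# With a very large n the time trigger never fires, so only the event
# counter drives the reports:
print(throttled_reports(5000, m=1000, n=3600.0))  # [1000, 2000, 3000, 4000, 5000]
```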
1800
1801\anchor missing-values
1802### Working with missing values in the dataset
1803
1804In certain situations a dataset might be missing one or more values at one or
1805more of its entries. For example:
1806
1807- If the dataset is composed of multiple files and one or more files are
1808 missing one or more columns required by the analysis.
1809- When joining different datasets horizontally according to some index value
1810 (e.g. the event number), if the index does not find a match in one or more
1811 other datasets for a certain entry.
1812- If, for a certain event, a column is invalid because it results from a Snapshot
1813 with systematic variations, and that variation didn't pass its filters. For
1814 more details, see \ref snapshot-with-variations.
1815
1816For example, suppose that column "y" does not have a value for entry 42:
1817
1818\code{.unparsed}
1819+-------+---+---+
1820| Entry | x | y |
1821+-------+---+---+
1822| 42 | 1 | |
1823+-------+---+---+
1824\endcode
1825
1826If the RDataFrame application reads that column, for example because a Take() action
1827was requested, the default behaviour is to throw an exception indicating
1828that the column has no value for that entry.
1829
1830The following paragraphs discuss the functionalities provided by RDataFrame to
1831work with missing values in the dataset.
1832
1833#### FilterAvailable and FilterMissing
1834
1835FilterAvailable and FilterMissing are specialized RDataFrame Filter operations.
1836They take as input argument the name of a column of the dataset to watch for
1837missing values. Like Filter, they will either keep or discard an entire entry
1838based on whether a condition returns true or false. Specifically:
1839
1840- FilterAvailable: the condition is whether the value of the column is present.
1841 If so, the entry is kept. Otherwise if the value is missing the entry is
1842 discarded.
1843- FilterMissing: the condition is whether the value of the column is missing. If
1844 so, the entry is kept. Otherwise if the value is present the entry is
1845 discarded.
1846
1847\code{.py}
1848df = ROOT.RDataFrame(dataset)
1849
1850# Anytime an entry from "col" is missing, the entire entry will be filtered out
1851df_available = df.FilterAvailable("col")
1852df_available = df_available.Define("twice", "col * 2")
1853
1854# Conversely, if we want to select the entries for which the column has missing
1855# values, we do the following
1856df_missingcol = df.FilterMissing("col")
1857# Following operations in the same branch of the computation graph clearly
1858# cannot access that same column, since there would be no value to read
1859df_missingcol = df_missingcol.Define("observable", "othercolumn * 2")
1860\endcode
1861
1862\code{.cpp}
1863ROOT::RDataFrame df{dataset};
1864
1865// Anytime an entry from "col" is missing, the entire entry will be filtered out
1866auto df_available = df.FilterAvailable("col");
1867auto df_twicecol = df_available.Define("twice", "col * 2");
1868
1869// Conversely, if we want to select the entries for which the column has missing
1870// values, we do the following
1871auto df_missingcol = df.FilterMissing("col");
1872// Following operations in the same branch of the computation graph clearly
1873// cannot access that same column, since there would be no value to read
1874auto df_observable = df_missingcol.Define("observable", "othercolumn * 2");
1875\endcode
1876
1877#### DefaultValueFor
1878
1879DefaultValueFor creates a node of the computation graph that forwards the
1880values of the input column to downstream nodes whenever they are
1881available. Whenever a value of the input column is not
1882available, the node provides the default value passed to this function call
1883instead. Example:
1884
1885\code{.py}
1886df = ROOT.RDataFrame(dataset)
1887# Anytime an entry from "col" is missing, the value will be the default one
1888default_value = ... # Some sensible default value here
1889df = df.DefaultValueFor("col", default_value)
1890df = df.Define("twice", "col * 2")
1891\endcode
1892
1893\code{.cpp}
1894ROOT::RDataFrame df{dataset};
1895// Anytime an entry from "col" is missing, the value will be the default one
1896constexpr auto default_value = ... // Some sensible default value here
1897auto df_default = df.DefaultValueFor("col", default_value);
1898auto df_col = df_default.Define("twice", "col * 2");
1899\endcode
1900
1901#### Mixing different strategies to work with missing values in the same RDataFrame
1902
1903All the operations presented above only act on the particular branch of the
1904computation graph where they are called, so that different results can be
1905obtained by mixing and matching the filtering and default-value
1906strategies:
1907
1908\code{.py}
1909df = ROOT.RDataFrame(dataset)
1910# Anytime an entry from "col" is missing, the value will be the default one
1911default_value = ... # Some sensible default value here
1912df_default = df.DefaultValueFor("col", default_value).Define("twice", "col * 2")
1913df_filtered = df.FilterAvailable("col").Define("twice", "col * 2")
1914
1915# Same number of total entries as the input dataset, with defaulted values
1916df_default.Display(["twice"]).Print()
1917# Only keep the entries where "col" has values
1918df_filtered.Display(["twice"]).Print()
1919\endcode
1920
1921\code{.cpp}
1922ROOT::RDataFrame df{dataset};
1923
1924// Anytime an entry from "col" is missing, the value will be the default one
1925constexpr auto default_value = ... // Some sensible default value here
1926auto df_default = df.DefaultValueFor("col", default_value).Define("twice", "col * 2");
1927auto df_filtered = df.FilterAvailable("col").Define("twice", "col * 2");
1928
1929// Same number of total entries as the input dataset, with defaulted values
1930df_default.Display({"twice"})->Print();
1931// Only keep the entries where "col" has values
1932df_filtered.Display({"twice"})->Print();
1933\endcode
1934
1935#### Further considerations
1936
1937Note that working with missing values is currently only supported with TTree-based
1938data sources. Support of this functionality for other data sources may come in
1939the future.
1940
1941\anchor special-values
1942### Dealing with NaN or Inf values in the dataset
1943
1944RDataFrame does not treat NaNs or infinities beyond what the floating-point standards require, i.e. they will
1945propagate to the final result.
1946Non-finite numbers can be suppressed using Filter(), e.g.:
1947
1948\code{.py}
1949df.Filter("std::isfinite(x)").Mean("x")
1950\endcode
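The propagation behaviour can be demonstrated with plain floating-point arithmetic (a generic Python sketch, independent of RDataFrame):

```python
import math

values = [1.0, 2.0, float("nan"), 4.0]

# A NaN contaminates any aggregate it enters, as the floating-point
# standard requires:
naive_mean = sum(values) / len(values)
print(math.isnan(naive_mean))  # True

# Filtering out non-finite entries first, mirroring
# df.Filter("std::isfinite(x)").Mean("x"):
finite = [v for v in values if math.isfinite(v)]
print(sum(finite) / len(finite))  # 7.0 / 3
```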
1951
1952\anchor rosetta-stone
1953### Translating TTree commands to RDataFrame
1954
1955<table>
1956<tr>
1957 <td>
1958 <b>TTree::Draw</b>
1959 </td>
1960 <td>
1961 <b>ROOT::RDataFrame</b>
1962 </td>
1963</tr>
1964<tr>
1965 <td>
1966~~~{.cpp}
1967// Get the tree and Draw a histogram of x for selected y values
1968auto *tree = file->Get<TTree>("myTree");
1969tree->Draw("x", "y > 2");
1970~~~
1971 </td>
1972 <td>
1973~~~{.cpp}
1974ROOT::RDataFrame df("myTree", file);
1975df.Filter("y > 2").Histo1D("x")->Draw();
1976~~~
1977 </td>
1978</tr>
1979<tr>
1980 <td>
1981~~~{.cpp}
1982// Draw a histogram of "jet_eta" with the desired weight
1983tree->Draw("jet_eta", "weight*(event == 1)");
1984~~~
1985 </td>
1986 <td>
1987~~~{.cpp}
1988df.Filter("event == 1").Histo1D("jet_eta", "weight")->Draw();
1989~~~
1990 </td>
1991</tr>
1992<tr>
1993 <td>
1994~~~{.cpp}
1995// Draw a histogram filled with values resulting from calling a method of the class of the `event` branch in the TTree.
1996tree->Draw("event.GetNtrack()");
1997
1998~~~
1999 </td>
2000 <td>
2001~~~{.cpp}
2002df.Define("NTrack","event.GetNtrack()").Histo1D("NTrack")->Draw();
2003~~~
2004 </td>
2005</tr>
2006<tr>
2007 <td>
2008~~~{.cpp}
2009// Draw only every 10th event
2010tree->Draw("fNtrack","fEvtHdr.fEvtNum%10 == 0");
2011~~~
2012 </td>
2013 <td>
2014~~~{.cpp}
2015// Use the Filter operation together with the special RDF column: `rdfentry_`
2016df.Filter("rdfentry_ % 10 == 0").Histo1D("fNtrack")->Draw();
2017~~~
2018 </td>
2019</tr>
2020<tr>
2021 <td>
2022~~~{.cpp}
2023// object selection: for each event, fill histogram with array of selected pts
2024tree->Draw("Muon_pt", "Muon_pt > 100");
2025~~~
2026 </td>
2027 <td>
2028~~~{.cpp}
2029// with RDF, arrays are read as ROOT::VecOps::RVec objects
2030df.Define("good_pt", "Muon_pt[Muon_pt > 100]").Histo1D("good_pt")->Draw();
2031~~~
2032 </td>
2033</tr>
2035<tr>
2036 <td>
2037~~~{.cpp}
2038// Draw the histogram and fill hnew with it
2039tree->Draw("sqrt(x)>>hnew","y>0");
2040
2041// Retrieve hnew from the current directory
2042auto hnew = gDirectory->Get<TH1F>("hnew");
2043~~~
2044 </td>
2045 <td>
2046~~~{.cpp}
2047// We pass histogram constructor arguments to the Histo1D operation, to easily give the histogram a name
2048auto hist = df.Define("sqrt_x", "sqrt(x)").Filter("y>0").Histo1D({"hnew","hnew", 10, 0, 10}, "sqrt_x");
2049~~~
2050 </td>
2051</tr>
2052<tr>
2053 <td>
2054~~~{.cpp}
2055// Draw a 1D Profile histogram instead of TH2F
2056tree->Draw("y:x","","prof");
2057
2058// Draw a 2D Profile histogram instead of TH3F
2059tree->Draw("z:y:x","","prof");
2060~~~
2061 </td>
2062 <td>
2063~~~{.cpp}
2064
2065// Draw a 1D Profile histogram
2066df.Profile1D("x", "y")->Draw();
2067
2068// Draw a 2D Profile histogram
2069df.Profile2D("x", "y", "z")->Draw();
2070~~~
2071 </td>
2072</tr>
2073<tr>
2074 <td>
2075~~~{.cpp}
2076// This command draws 2 entries starting with entry 5
2077tree->Draw("x", "","", 2, 5);
2078~~~
2079 </td>
2080 <td>
2081~~~{.cpp}
2082// Range function with arguments begin, end
2083df.Range(5,7).Histo1D("x")->Draw();
2084~~~
2085 </td>
2086</tr>
2087<tr>
2088 <td>
2089~~~{.cpp}
2090// Draw the X() component of the
2091// ROOT::Math::DisplacementVector3D in vec_list
2092tree->Draw("vec_list.X()");
2093~~~
2094 </td>
2095 <td>
2096~~~{.cpp}
2097df.Define("x", "ROOT::RVecD out; for(const auto &el: vec_list) out.push_back(el.X()); return out;").Histo1D("x")->Draw();
2098~~~
2099 </td>
2100</tr>
2101<tr>
2102 <td>
2103~~~{.cpp}
2104// Gather all values from a branch holding a collection per event, `pt`,
2105// and fill a histogram so that we can count the total number of values across all events
2106tree->Draw("pt>>histo");
2107auto histo = gDirectory->Get<TH1D>("histo");
2108histo->GetEntries();
2109~~~
2110 </td>
2111 <td>
2112~~~{.cpp}
2113df.Histo1D("pt")->GetEntries();
2114~~~
2115 </td>
2116</tr>
2117<tr>
2118 <td>
2119 <b>TTree::Scan()</b>
2120 </td>
2121 <td>
2122 <b>ROOT::RDataFrame</b>
2123 </td>
2124</tr>
2125<tr>
2126 <td>
2127~~~{.cpp}
2128// Print a table of the first 10 entries for all variables in the Tree
2129// if the first entry in the Muon_pt collection is > 10.
2130tree->Scan("*", "Muon_pt[0] > 10.", "", 10);
2131~~~
2132 </td>
2133 <td>
2134~~~{.cpp}
2135// Selecting columns using a regular expression
2136df.Filter("Muon_pt[0] > 10.").Display(".*", 10)->Print();
2137~~~
2138 </td>
2139</tr>
2140<tr>
2141 <td>
2142~~~{.cpp}
2143// For 10 events, print Muon_pt and Muon_eta, starting at entry 100
2144tree->Scan("Muon_pt:Muon_eta", "", "", 10, 100);
2145~~~
2146 </td>
2147 <td>
2148~~~{.cpp}
2149// Selecting columns using a collection of names
2150df.Range(100, 0).Display({"Muon_pt", "Muon_eta"}, 10)->Print();
2151~~~
2152 </td>
2153</tr>
2154</table>
2155
2156*/
2157// clang-format on
2158
2159namespace ROOT {
2160
2162using ColumnNamesPtr_t = std::shared_ptr<const ColumnNames_t>;
2163
2164////////////////////////////////////////////////////////////////////////////
2165/// \brief Build the dataframe.
2166/// \param[in] treeName Name of the tree contained in the directory
2167/// \param[in] dirPtr TDirectory where the tree is stored, e.g. a TFile.
2168/// \param[in] defaultColumns Collection of default columns.
2169///
2170/// The default columns are looked at in case no column is specified in the
2171/// booking of actions or transformations.
2172/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2173RDataFrame::RDataFrame(std::string_view treeName, TDirectory *dirPtr, const ColumnNames_t &defaultColumns)
2174 : RInterface(std::make_shared<RDFDetail::RLoopManager>(
2175 std::make_unique<ROOT::Internal::RDF::RTTreeDS>(treeName, dirPtr), defaultColumns))
2176{
2177}
2178
2179////////////////////////////////////////////////////////////////////////////
2180/// \brief Build the dataframe.
2181/// \param[in] treeName Name of the tree contained in the directory
2182/// \param[in] fileNameGlob TDirectory where the tree is stored, e.g. a TFile.
2183/// \param[in] defaultColumns Collection of default columns.
2184///
2185/// The filename glob supports the same type of expressions as TChain::Add(), and it is passed as-is to TChain's
2186/// constructor.
2187///
2188/// The default columns are looked at in case no column is specified in the
2189/// booking of actions or transformations.
2190/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2191RDataFrame::RDataFrame(std::string_view treeName, std::string_view fileNameGlob, const ColumnNames_t &defaultColumns)
2192 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(treeName, fileNameGlob, defaultColumns))
2193{
2194}
2195
2196////////////////////////////////////////////////////////////////////////////
2197/// \brief Build the dataframe.
2198/// \param[in] datasetName Name of the dataset contained in the directory
2199/// \param[in] fileNameGlobs Collection of file names of filename globs
2200/// \param[in] defaultColumns Collection of default columns.
2201///
2202/// The filename globs support the same type of expressions as TChain::Add(), and each glob is passed as-is
2203/// to TChain's constructor.
2204///
2205/// The default columns are looked at in case no column is specified in the booking of actions or transformations.
2206/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2207RDataFrame::RDataFrame(std::string_view datasetName, const std::vector<std::string> &fileNameGlobs,
2208 const ColumnNames_t &defaultColumns)
2209 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(datasetName, fileNameGlobs, defaultColumns))
2210{
2211}
2212
2213////////////////////////////////////////////////////////////////////////////
2214/// \brief Build the dataframe.
2215/// \param[in] tree The tree or chain to be studied.
2216/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
2217///
2218/// The default columns are looked at in case no column is specified in the
2219/// booking of actions or transformations.
2220/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2225
2226//////////////////////////////////////////////////////////////////////////
2227/// \brief Build a dataframe that generates numEntries entries.
2228/// \param[in] numEntries The number of entries to generate.
2229///
2230/// An empty-source dataframe constructed with a number of entries will
2231/// generate those entries on the fly when some action is triggered,
2232/// and it will do so for all the previously-defined columns.
2233/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2234RDataFrame::RDataFrame(ULong64_t numEntries)
2235 : RInterface(std::make_shared<RDFDetail::RLoopManager>(numEntries))
2236
2237{
2238}
2239
2240//////////////////////////////////////////////////////////////////////////
2241/// \brief Build dataframe associated to data source.
2242/// \param[in] ds The data source object.
2243/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
2244///
2245/// A dataframe associated to a data source will query it to access column values.
2246/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2247RDataFrame::RDataFrame(std::unique_ptr<ROOT::RDF::RDataSource> ds, const ColumnNames_t &defaultColumns)
2248 : RInterface(std::make_shared<RDFDetail::RLoopManager>(std::move(ds), defaultColumns))
2249{
2250}
2251
2252//////////////////////////////////////////////////////////////////////////
2253/// \brief Build dataframe from an RDatasetSpec object.
2254/// \param[in] spec The dataset specification object.
2255///
2256/// A dataset specification includes trees and file names,
2257/// as well as an optional friend list and/or entry range.
2258///
2259/// ### Example usage from Python:
2260/// ~~~{.py}
2261/// spec = (
2262/// ROOT.RDF.Experimental.RDatasetSpec()
2263/// .AddSample(("data", "tree", "file.root"))
2264/// .WithGlobalFriends("friendTree", "friend.root", "alias")
2265/// .WithGlobalRange((100, 200))
2266/// )
2267/// df = ROOT.RDataFrame(spec)
2268/// ~~~
2269///
2270/// See also ROOT::RDataFrame::FromSpec().
2275
2276namespace RDF {
2277namespace Experimental {
2278
2279////////////////////////////////////////////////////////////////////////////
2280/// \brief Create the RDataFrame from the dataset specification file.
2281/// \param[in] jsonFile Path to the dataset specification JSON file.
2282///
2283/// The input dataset specification JSON file must include a number of keys that
2284/// describe all the necessary samples and their associated metadata information.
2285/// The main key, "samples", is required and at least one sample is needed. Each
2286/// sample must have at least one key "trees" and at least one key "files" from
2287/// which the data is read. Optionally, metadata information can be
2288/// added, as well as friend list information.
2289///
2290/// ### Example specification file JSON:
2291/// The following is an example of the dataset specification JSON file formatting:
2292///~~~{.cpp}
2293/// {
2294/// "samples": {
2295/// "sampleA": {
2296/// "trees": ["tree1", "tree2"],
2297/// "files": ["file1.root", "file2.root"],
2298/// "metadata": {"lumi": 1.0}
2299/// },
2300/// "sampleB": {
2301/// "trees": ["tree3", "tree4"],
2302/// "files": ["file3.root", "file4.root"],
2303/// "metadata": {"lumi": 0.5}
2304/// },
2305/// ...
2306/// }
2307/// }
2308///~~~
2313
2314} // namespace Experimental
2315} // namespace RDF
2316
2317} // namespace ROOT
2318
2319namespace cling {
2320//////////////////////////////////////////////////////////////////////////
2321/// Print an RDataFrame at the prompt
2322std::string printValue(ROOT::RDataFrame *df)
2323{
2324 // The loop manager is never null, except when its construction failed.
2325 // This can happen e.g. if the constructor of RLoopManager that expects
2326 // a file name is used and that file doesn't exist. This point is usually
2327 // not even reached in that situation, since the exception thrown by the
2328 // constructor will also stop execution of the program. But it can still
2329 // be reached at the prompt, if the user tries to print the RDataFrame
2330 // variable after an incomplete initialization.
2331 auto *lm = df->GetLoopManager();
2332 if (!lm) {
2333 throw std::runtime_error("Cannot print information about this RDataFrame, "
2334 "it was not properly created. It must be discarded.");
2335 }
2336 auto defCols = lm->GetDefaultColumnNames();
2337
2338 std::ostringstream ret;
2339 if (auto ds = df->GetDataSource()) {
2340 ret << "A data frame associated to the data source \"" << cling::printValue(ds) << "\"";
2341 } else {
2342 ret << "An empty data frame that will create " << lm->GetNEmptyEntries() << " entries\n";
2343 }
2344
2345 return ret.str();
2346}
2347} // namespace cling