RDataFrame.cxx
1// Author: Enrico Guiraud, Danilo Piparo CERN 12/2016
2
3/*************************************************************************
4 * Copyright (C) 1995-2018, Rene Brun and Fons Rademakers. *
5 * All rights reserved. *
6 * *
7 * For the licensing terms see $ROOTSYS/LICENSE. *
8 * For the list of contributors see $ROOTSYS/README/CREDITS. *
9 *************************************************************************/
10
12#include "ROOT/RDataFrame.hxx"
13#include "ROOT/RDataSource.hxx"
14#include "ROOT/RTTreeDS.hxx"
18#include "ROOT/RDF/Utils.hxx"
19#include <string_view>
20#include "TChain.h"
21#include "TDirectory.h"
22#include "RtypesCore.h" // for ULong64_t
23#include "TTree.h"
24
25#include <fstream> // std::ifstream
26#include <nlohmann/json.hpp> // nlohmann::json::parse
27#include <memory> // for make_shared, allocator, shared_ptr
28#include <ostream> // ostringstream
29#include <stdexcept>
30#include <string>
31#include <vector>
32
33// clang-format off
34/**
35* \class ROOT::RDataFrame
36* \ingroup dataframe
37* \brief ROOT's RDataFrame offers a modern, high-level interface for analysis of data stored in TTree, CSV and other data formats, in C++ or Python.
38
39In addition, multi-threading and other low-level optimisations allow users to exploit all the resources available
40on their machines completely transparently.<br>
41Skip to the [class reference](#reference) or keep reading for the user guide.
42
43In a nutshell:
44~~~{.cpp}
45ROOT::EnableImplicitMT(); // Tell ROOT you want to go parallel
46ROOT::RDataFrame d("myTree", "file_*.root"); // Interface to TTree and TChain
47auto myHisto = d.Histo1D("Branch_A"); // This books the (lazy) filling of a histogram
48myHisto->Draw(); // Event loop is run here, upon first access to a result
49~~~
50
51Calculations are expressed in terms of a type-safe *functional chain of actions and transformations*; RDataFrame takes
52care of their execution. The implementation automatically puts in place several low-level optimisations such as
53multi-thread parallelization and caching.
54
55\htmlonly
56<a href="https://doi.org/10.5281/zenodo.260230"><img src="https://zenodo.org/badge/DOI/10.5281/zenodo.260230.svg"
57alt="DOI"></a>
58\endhtmlonly
59
60## For the impatient user
61You can directly see RDataFrame in action in our [tutorials](https://root.cern/doc/master/group__tutorial__dataframe.html), in C++ or Python.
62
63## Table of Contents
64- [Cheat sheet](\ref cheatsheet)
65- [Introduction](\ref rdf_intro)
66- [Crash course](\ref crash-course)
67- [Working with collections](\ref working_with_collections)
68- [Transformations: manipulating data](\ref transformations)
69- [Actions: getting results](\ref actions)
70- [Distributed execution in Python](\ref rdf_distrdf)
71- [Performance tips and parallel execution](\ref parallel-execution)
72- [More features](\ref more-features)
73 - [Systematic variations](\ref systematics)
74 - [RDataFrame objects as function arguments and return values](\ref rnode)
75 - [Storing RDataFrame objects in collections](\ref RDFCollections)
76 - [Executing callbacks every N events](\ref callbacks)
77 - [Default column lists](\ref default-branches)
78 - [Special helper columns: `rdfentry_` and `rdfslot_`](\ref helper-cols)
79 - [Just-in-time compilation: column type inference and explicit declaration of column types](\ref jitting)
80 - [User-defined custom actions](\ref generic-actions)
81 - [Dataset joins with friend trees](\ref friends)
82 - [Reading data formats other than ROOT trees](\ref other-file-formats)
83 - [Computation graphs (storing and reusing sets of transformations)](\ref callgraphs)
84 - [Visualizing the computation graph](\ref representgraph)
85 - [Activating RDataFrame execution logs](\ref rdf-logging)
86 - [Creating an RDataFrame from a dataset specification file](\ref rdf-from-spec)
87 - [Adding a progress bar](\ref progressbar)
88 - [Working with missing values in the dataset](\ref missing-values)
89 - [Dealing with NaN or Inf values in the dataset](\ref special-values)
90 - [Translating TTree::Draw to RDataFrame](\ref rosetta-stone)
91- [Python interface](classROOT_1_1RDataFrame.html#python)
92- <a class="el" href="classROOT_1_1RDataFrame.html#reference" onclick="javascript:toggleInherit('pub_methods_classROOT_1_1RDF_1_1RInterface')">Class reference</a>
93
94\anchor cheatsheet
95## Cheat sheet
96These are the operations which can be performed with RDataFrame.
97
98### Transformations
99Transformations are a way to manipulate the data.
100
101| **Transformation** | **Description** |
102|------------------|--------------------|
103| Alias() | Introduce an alias for a particular column name. |
104| DefaultValueFor() | If the value of the input column is missing, provide a default value instead. |
105| Define() | Create a new column in the dataset. Example usages include adding a column that contains the invariant mass of a particle, or a selection of elements of an array (e.g. only the `pt`s of "good" muons). |
106| DefinePerSample() | Define a new column that is updated when the input sample changes, e.g. when switching tree being processed in a chain. |
107| DefineSlot() | Same as Define(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, in the range [0, nThreads-1], for each thread of execution. This is meant as a helper in writing thread-safe Define() transformations when using RDataFrame after ROOT::EnableImplicitMT(). DefineSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
108| DefineSlotEntry() | Same as DefineSlot(), but the entry number is passed in addition to the slot number. This is meant as a helper in case the expression depends on the entry number. For details about entry numbers in multi-threaded runs, see [here](\ref helper-cols). |
109| Filter() | Filter rows based on user-defined conditions. |
110| FilterAvailable() | Specialized Filter. If the value of the input column is available, keep the entry, otherwise discard it. |
111| FilterMissing() | Specialized Filter. If the value of the input column is missing, keep the entry, otherwise discard it. |
112| Range() | Filter rows based on entry number (single-thread only). |
113| Redefine() | Overwrite the value and/or type of an existing column. See Define() for more information. |
114| RedefineSlot() | Overwrite the value and/or type of an existing column. See DefineSlot() for more information. |
115| RedefineSlotEntry() | Overwrite the value and/or type of an existing column. See DefineSlotEntry() for more information. |
116| Vary() | Register systematic variations for an existing column. Varied results are then extracted via VariationsFor(). |
117
118
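For example, several of these transformations can be chained into a single expression (a minimal sketch; the columns `px`, `py` and the cut values are hypothetical):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto df2 = df.Alias("momentum_x", "px")           // give an existing column a second name
             .Define("pt", "sqrt(px*px + py*py)") // add a new column computed from existing ones
             .Filter("pt > 10.", "ptCut")         // keep only rows passing the (named) selection
             .Redefine("pt", "pt * 1.01");        // overwrite an existing column
~~~
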
119### Actions
120Actions aggregate data into a result. Each one is described in more detail in the reference guide.
121
122In the following, whenever we say an action "returns" something, we always mean it returns a smart pointer to it. Actions only act on events that pass all preceding filters.
123
124Lazy actions only trigger the event loop when one of the results is accessed for the first time, making it easy to
125produce many different results in one event loop. Instant actions trigger the event loop instantly.
126
127
128| **Lazy action** | **Description** |
129|------------------|-----------------|
130| Aggregate() | Execute a user-defined accumulation operation on the processed column values. |
131| Book() | Book execution of a custom action using a user-defined helper object. |
132| Cache() | Cache column values in memory. Custom columns can be cached as well; filtered entries are not cached. Users can specify which columns to save (default is all). |
133| Count() | Return the number of events processed. Useful e.g. to get a quick count of the number of events passing a Filter. |
134| Display() | Provides a printable representation of the dataset contents. The method returns a ROOT::RDF::RDisplay() instance which can print a tabular representation of the data or return it as a string. |
135| Fill() | Fill a user-defined object with the values of the specified columns, as if by calling `Obj.Fill(col1, col2, ...)`. |
136| Graph() | Fills a TGraph with the two columns provided. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
137| GraphAsymmErrors() | Fills a TGraphAsymmErrors. Should be used for any type of graph with errors, including cases with errors on one of the axes only. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
138| Histo1D(), Histo2D(), Histo3D() | Fill a one-, two-, three-dimensional histogram with the processed column values. |
139| HistoND() | Fill an N-dimensional histogram with the processed column values. |
140| HistoNSparseD() | Fill an N-dimensional sparse histogram with the processed column values. Memory is allocated only for non-empty bins. |
141| Max() | Return the maximum of processed column values. If the type of the column is inferred, the return type is `double`, the type of the column otherwise.|
142| Mean() | Return the mean of processed column values.|
143| Min() | Return the minimum of processed column values. If the type of the column is inferred, the return type is `double`, the type of the column otherwise.|
144| Profile1D(), Profile2D() | Fill a one- or two-dimensional profile with the column values that passed all filters. |
145| Reduce() | Reduce (e.g. sum, merge) entries using the function (lambda, functor...) passed as argument. The function must have signature `T(T,T)` where `T` is the type of the column. Return the final result of the reduction operation. An optional parameter allows initialization of the result object to non-default values. |
146| Report() | Obtain statistics on how many entries have been accepted and rejected by the filters. See the section on [named filters](#named-filters-and-cutflow-reports) for a more detailed explanation. The method returns a ROOT::RDF::RCutFlowReport instance which can be queried programmatically to get information about the effects of the individual cuts. |
147| Stats() | Return a TStatistic object filled with the input columns. |
148| StdDev() | Return the unbiased standard deviation of the processed column values. |
149| Sum() | Return the sum of the values in the column. If the type of the column is inferred, the return type is `double`, the type of the column otherwise. |
150| Take() | Extract a column from the dataset as a collection of values, e.g. a `std::vector<float>` for a column of type `float`. |
151
152| **Instant action** | **Description** |
153|---------------------|-----------------|
154| Foreach() | Execute a user-defined function on each entry. Users are responsible for the thread-safety of this callable when executing with implicit multi-threading enabled. |
155| ForeachSlot() | Same as Foreach(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, `0` to `nThreads - 1`, for each thread of execution. This is meant as a helper in writing thread-safe Foreach() actions when using RDataFrame after ROOT::EnableImplicitMT(). ForeachSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
156| Snapshot() | Write the processed dataset to disk, in a new TTree or RNTuple and TFile. Custom columns can be saved as well; filtered entries are not saved. Users can specify which columns to save (default is all nominal columns). Columns resulting from Vary() can be included by setting the corresponding flag in \ref RSnapshotOptions. Snapshot, by default, overwrites the output file if it already exists. Snapshot() can be made *lazy* by setting the appropriate flag in the snapshot options.|
157
158
159### Queries
160
161These operations do not modify the dataframe or book computations but simply return information on the RDataFrame object.
162
163| **Operation** | **Description** |
164|---------------------|-----------------|
165| Describe() | Get useful information describing the dataframe, e.g. columns and their types. |
166| GetColumnNames() | Get the names of all the available columns of the dataset. |
167| GetColumnType() | Return the type of a given column as a string. |
168| GetColumnTypeNamesList() | Return the list of type names of columns in the dataset. |
169| GetDefinedColumnNames() | Get the names of all the defined columns. |
170| GetFilterNames() | Return the names of all filters in the computation graph. |
171| GetNRuns() | Return the number of event loops run by this RDataFrame instance so far. |
172| GetNSlots() | Return the number of processing slots that RDataFrame will use during the event loop (i.e. the concurrency level). |
173| SaveGraph() | Store the computation graph of an RDataFrame in [DOT format (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) for easy inspection. See the [relevant section](\ref representgraph) for details. |
174
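For instance, these queries make it easy to inspect an unfamiliar dataset before booking any computation (a sketch; file and tree names are placeholders):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
for (const auto &col : df.GetColumnNames())
   std::cout << col << ": " << df.GetColumnType(col) << '\n';
df.Describe().Print(); // overall description of the dataframe
~~~
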
175\anchor rdf_intro
176## Introduction
177Users define their analysis as a sequence of operations to be performed on the dataframe object; the framework
178takes care of the management of the loop over entries as well as low-level details such as I/O and parallelization.
179RDataFrame provides methods to perform most common operations required by ROOT analyses;
180at the same time, users can just as easily specify custom code that will be executed in the event loop.
181
182RDataFrame is built with a *modular* and *flexible* workflow in mind, summarised as follows:
183
1841. Construct a dataframe object by specifying a dataset. RDataFrame supports TTree as well as TChain, [CSV files](https://root.cern/doc/master/df014__CSVDataSource_8C.html), [SQLite files](https://root.cern/doc/master/df027__SQliteDependencyOverVersion_8C.html), [RNTuples](https://root.cern/doc/master/structROOT_1_1Experimental_1_1RNTuple.html), and it can be extended to custom data formats. From Python, [NumPy arrays can be imported into RDataFrame](https://root.cern/doc/master/df032__MakeNumpyDataFrame_8py.html) as well.
185
1862. Transform the dataframe by:
187
188 - [Applying filters](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). This selects only specific rows of the dataset.
189
190 - [Creating custom columns](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). Custom columns can, for example, contain the results of a computation that must be performed for every row of the dataset.
191
1923. [Produce results](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#actions). *Actions* are used to aggregate data into results. Most actions are *lazy*, i.e. they are not executed on the spot, but registered with RDataFrame and executed only when a result is accessed for the first time.
193
194Make sure to book all transformations and actions before you access the contents of any of the results. This lets RDataFrame accumulate work and then produce all results at the same time, upon first access to any of them.
195
196The following table shows how analyses based on TTreeReader and TTree::Draw() translate to RDataFrame. Follow the
197[crash course](#crash-course) to discover more idiomatic and flexible ways to express analyses with RDataFrame.
198<table>
199<tr>
200 <td>
201 <b>TTreeReader</b>
202 </td>
203 <td>
204 <b>ROOT::RDataFrame</b>
205 </td>
206</tr>
207<tr>
208 <td>
209~~~{.cpp}
210TTreeReader reader("myTree", file);
211TTreeReaderValue<A_t> a(reader, "A");
212TTreeReaderValue<B_t> b(reader, "B");
213TTreeReaderValue<C_t> c(reader, "C");
214while(reader.Next()) {
215 if(IsGoodEvent(*a, *b, *c))
216 DoStuff(*a, *b, *c);
217}
218~~~
219 </td>
220 <td>
221~~~{.cpp}
222ROOT::RDataFrame d("myTree", file, {"A", "B", "C"});
223d.Filter(IsGoodEvent).Foreach(DoStuff);
224~~~
225 </td>
226</tr>
227<tr>
228 <td>
229 <b>TTree::Draw</b>
230 </td>
231 <td>
232 <b>ROOT::RDataFrame</b>
233 </td>
234</tr>
235<tr>
236 <td>
237~~~{.cpp}
238auto *tree = file->Get<TTree>("myTree");
239tree->Draw("x", "y > 2");
240~~~
241 </td>
242 <td>
243~~~{.cpp}
244ROOT::RDataFrame df("myTree", file);
245auto h = df.Filter("y > 2").Histo1D("x");
246h->Draw();
247~~~
248 </td>
249</tr>
250<tr>
251 <td>
252~~~{.cpp}
253tree->Draw("jet_eta", "weight*(event == 1)");
254~~~
255 </td>
256 <td>
257~~~{.cpp}
258df.Filter("event == 1").Histo1D("jet_eta", "weight");
259// or the fully compiled version:
260df.Filter([] (ULong64_t e) { return e == 1; }, {"event"}).Histo1D<RVec<float>>("jet_eta", "weight");
261~~~
262 </td>
263</tr>
264<tr>
265 <td>
266~~~{.cpp}
267// object selection: for each event, fill histogram with array of selected pts
268tree->Draw("Muon_pt", "Muon_pt > 100");
269~~~
270 </td>
271 <td>
272~~~{.cpp}
273// with RDF, arrays are read as ROOT::VecOps::RVec objects
274df.Define("good_pt", "Muon_pt[Muon_pt > 100]").Histo1D("good_pt");
275~~~
276 </td>
277</tr>
278</table>
279
280\anchor crash-course
281## Crash course
282All snippets of code presented in the crash course can be executed in the ROOT interpreter. Simply precede them with
283~~~{.cpp}
284using namespace ROOT; // RDataFrame's namespace
285~~~
286which is omitted for brevity. The terms "column" and "branch" are used interchangeably.
287
288### Creating an RDataFrame
289RDataFrame's constructor is where the user specifies the dataset and, optionally, a default set of columns that
290operations should work with. Here are the most common methods to construct an RDataFrame object:
291~~~{.cpp}
292// single file -- all constructors are equivalent
293TFile *f = TFile::Open("file.root");
294TTree *t = f->Get<TTree>("treeName");
295
296ROOT::RDataFrame d1("treeName", "file.root");
297ROOT::RDataFrame d2("treeName", f); // same as TTreeReader
298ROOT::RDataFrame d3(*t);
299
300// multiple files -- all constructors are equivalent
301TChain chain("myTree");
302chain.Add("file1.root");
303chain.Add("file2.root");
304
305ROOT::RDataFrame d4("myTree", {"file1.root", "file2.root"});
306std::vector<std::string> files = {"file1.root", "file2.root"};
307ROOT::RDataFrame d5("myTree", files);
308ROOT::RDataFrame d6("myTree", "file*.root"); // the glob is passed as-is to TChain's constructor
309ROOT::RDataFrame d7(chain);
310~~~
311Additionally, users can construct an RDataFrame with no data source by passing an integer number. This is the number of rows that
312will be generated by this RDataFrame.
313~~~{.cpp}
314ROOT::RDataFrame d(10); // an RDF with 10 entries (and no columns/branches, for now)
315d.Foreach([] { static int i = 0; std::cout << i++ << std::endl; }); // silly example usage: count to ten
316~~~
317This is useful to generate simple datasets on the fly: the contents of each event can be specified with Define() (explained below). For example, we have used this method to generate [Pythia](https://pythia.org/) events and write them to disk in parallel (with the Snapshot action).
318
319For data sources other than TTrees and TChains, RDataFrame objects are constructed using ad-hoc factory functions (see e.g. FromCSV(), FromSqlite(), FromArrow()):
320
321~~~{.cpp}
322auto df = ROOT::RDF::FromCSV("input.csv");
323// use df as usual
324~~~
325
326### Filling a histogram
327Let's now tackle a very common task, filling a histogram:
328~~~{.cpp}
329// Fill a TH1D with the "MET" branch
330ROOT::RDataFrame d("myTree", "file.root");
331auto h = d.Histo1D("MET");
332h->Draw();
333~~~
334The first line creates an RDataFrame associated to the TTree "myTree". This tree has a branch named "MET".
335
336Histo1D() is an *action*; it returns a smart pointer (a ROOT::RDF::RResultPtr, to be precise) to a TH1D histogram filled
337with the `MET` of all events. If the quantity stored in the column is a collection (e.g. a vector or an array), the
338histogram is filled with all vector elements for each event.
339
340You can use the objects returned by actions as if they were pointers to the desired results. There are many other
341possible [actions](\ref cheatsheet), and all their results are wrapped in smart pointers; we'll see why in a minute.
342
343### Applying a filter
344Let's say we want to cut over the value of branch "MET" and count how many events pass this cut. This is one way to do it:
345~~~{.cpp}
346ROOT::RDataFrame d("myTree", "file.root");
347auto c = d.Filter("MET > 4.").Count(); // computations booked, not run
348std::cout << *c << std::endl; // computations run here, upon first access to the result
349~~~
350The filter string (which must contain a valid C++ expression) is applied to the specified columns for each event;
351the names and types of the columns are inferred automatically. The string expression is required to return a `bool`
352which signals whether the event passes the filter (`true`) or not (`false`).
353
354You can think of your data as "flowing" through the chain of calls, being transformed, filtered and finally used to
355perform actions. Multiple Filter() calls can be chained one after another.
356
357Using string filters is nice for simple things, but they are limited to specifying the equivalent of a single return
358statement or the body of a lambda, so it's cumbersome to use strings with more complex filters. They also add a small
359runtime overhead, as ROOT needs to just-in-time compile the string into C++ code. When more freedom is required or
360runtime performance is very important, a C++ callable can be specified instead (a lambda in the following snippet,
361but it can be any kind of function or even a functor class), together with a list of column names.
362This snippet is analogous to the one above:
363~~~{.cpp}
364ROOT::RDataFrame d("myTree", "file.root");
365auto metCut = [](double x) { return x > 4.; }; // a C++11 lambda function checking "x > 4"
366auto c = d.Filter(metCut, {"MET"}).Count();
367std::cout << *c << std::endl;
368~~~
369
370An example of a more complex filter expressed as a string containing C++ code is shown below
371
372~~~{.cpp}
373ROOT::RDataFrame d("myTree", "file.root");
374auto df = d.Define("p", "std::array<double, 3> p{px, py, pz}; return p;")
375 .Filter("double p2 = 0.0; for (auto&& x : p) p2 += x*x; return sqrt(p2) < 10.0;");
376~~~
377
378The code snippet above defines a column `p` that is a fixed-size array using the component column names and then
379filters on its magnitude by looping over its elements. It must be noted that the usage of strings to define columns
380like the one above is currently the only possibility when using PyROOT. When writing expressions as such, only constants
381and data coming from other columns in the dataset can be involved in the code passed as a string. Local variables and
382functions cannot be used, since the interpreter will not know how to find them. When capturing local state is necessary,
383it must first be declared to the ROOT C++ interpreter.
384
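For example, a cut value computed at runtime can be made visible to string expressions by declaring it to the interpreter first (a sketch; `ComputeMetCut()` is a hypothetical helper):

~~~{.cpp}
double metCut = ComputeMetCut(); // some value only known at runtime
// declare the value to the interpreter so that string expressions can refer to it
gInterpreter->Declare(("const double metCut = " + std::to_string(metCut) + ";").c_str());

ROOT::RDataFrame d("myTree", "file.root");
auto c = d.Filter("MET > metCut").Count();
~~~
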
385More information on filters and how to use them to automatically generate cutflow reports can be found [below](#Filters).
386
387### Defining custom columns
388Let's now consider the case in which "myTree" contains two quantities "x" and "y", but our analysis relies on a derived
389quantity `z = sqrt(x*x + y*y)`. Using the Define() transformation, we can create a new column in the dataset containing
390the variable "z":
391~~~{.cpp}
392ROOT::RDataFrame d("myTree", "file.root");
393auto sqrtSum = [](double x, double y) { return sqrt(x*x + y*y); };
394auto zMean = d.Define("z", sqrtSum, {"x","y"}).Mean("z");
395std::cout << *zMean << std::endl;
396~~~
397Define() creates the variable "z" by applying `sqrtSum` to "x" and "y". Later in the chain of calls we refer to
398variables created with Define() as if they were actual tree branches/columns, but they are evaluated on demand, at most
399once per event. As with filters, Define() calls can be chained with other transformations to create multiple custom
400columns. Define() and Filter() transformations can be concatenated and intermixed at will.
401
402As with filters, it is possible to specify new columns as string expressions. This snippet is analogous to the one above:
403~~~{.cpp}
404ROOT::RDataFrame d("myTree", "file.root");
405auto zMean = d.Define("z", "sqrt(x*x + y*y)").Mean("z");
406std::cout << *zMean << std::endl;
407~~~
408
409Again the names of the columns used in the expression and their types are inferred automatically. The string must be
410valid C++ and it is just-in-time compiled. The process has a small runtime overhead and, as with filters, it is currently the only possible approach when using PyROOT.
411
412Previously, when showing the different ways an RDataFrame can be created, we showed a constructor that takes a
413number of entries as a parameter. In the following example we show how to combine such an "empty" RDataFrame with Define()
414transformations to create a dataset on the fly. We then save the generated data on disk using the Snapshot() action.
415~~~{.cpp}
416ROOT::RDataFrame d(100); // an RDF that will generate 100 entries (currently empty)
417int x = -1;
418auto d_with_columns = d.Define("x", [&x]() -> int { return ++x; }).Define("xx", [&x]() -> int { return x*x; });
419d_with_columns.Snapshot("myNewTree", "newfile.root");
420~~~
421This example is slightly more advanced than what we have seen so far. First, it makes use of lambda captures (a
422simple way to make external variables available inside the body of C++ lambdas) to act on the same variable `x` from
423both Define() transformations. Second, we have *stored* the transformed dataframe in a variable. This is always
424possible, since at each point of the transformation chain users can store the status of the dataframe for further use (more
425on this [below](#callgraphs)).
426
427You can read more about defining new columns [here](#custom-columns).
428
429\image html RDF_Graph.png "A graph composed of two branches, one starting with a filter and one with a define. The end point of a branch is always an action."
430
431
432### Running on a range of entries
433It is sometimes necessary to limit the processing of the dataset to a range of entries. For this reason, the RDataFrame
434offers the concept of ranges as a node of the RDataFrame chain of transformations; this means that filters, columns and
435actions can be concatenated to and intermixed with Range()s. If a range is specified after a filter, the range will act
436exclusively on the entries passing the filter -- it will not even count the other entries! The same goes for a Range()
437hanging from another Range(). Here are some commented examples:
438~~~{.cpp}
439ROOT::RDataFrame d("myTree", "file.root");
440// Here we store a dataframe that loops over only the first 30 entries in a variable
441auto d30 = d.Range(30);
442// This is how you pick all entries from 15 onwards
443auto d15on = d.Range(15, 0);
444// We can specify a stride too, in this case we pick an event every 3
445auto d15each3 = d.Range(0, 15, 3);
446~~~
447Note that ranges are not available when multi-threading is enabled. More information on ranges is available
448[here](#ranges).
449
450### Executing multiple actions in the same event loop
451As a final example let us apply two different cuts on branch "MET" and fill two different histograms with the "pt_v" of
452the filtered events.
453By now, you should be able to easily understand what is happening:
454~~~{.cpp}
455RDataFrame d("treeName", "file.root");
456auto h1 = d.Filter("MET > 10").Histo1D("pt_v");
457auto h2 = d.Histo1D("pt_v");
458h1->Draw(); // event loop is run once here
459h2->Draw("SAME"); // no need to run the event loop again
460~~~
461RDataFrame executes all above actions by **running the event-loop only once**. The trick is that actions are not
462executed at the moment they are called, but they are **lazy**, i.e. delayed until the moment one of their results is
463accessed through the smart pointer. At that time, the event loop is triggered and *all* results are produced
464simultaneously.
465
466### Properly exploiting RDataFrame laziness
467
468For yet another example of the difference between the correct and incorrect running of the event-loop, see the following
469two code snippets. We assume our ROOT file has branches a, b and c.
470
471The correct way - the dataset is only processed once.
472~~~{.py}
473df_correct = ROOT.RDataFrame(treename, filename)
474
475h_a = df_correct.Histo1D("a")
476h_b = df_correct.Histo1D("b")
477h_c = df_correct.Histo1D("c")
478
479h_a_val = h_a.GetValue()
480h_b_val = h_b.GetValue()
481h_c_val = h_c.GetValue()
482
483print(f"How many times was the data set processed? {df_wrong.GetNRuns()} time.") # The answer will be 1 time.
484~~~
485
486An incorrect way - the dataset is processed three times.
487~~~{.py}
488df_incorrect = ROOT.RDataFrame(treename, filename)
489
490h_a = df_incorrect.Histo1D("a")
491h_a_val = h_a.GetValue()
492
493h_b = df_incorrect.Histo1D("b")
494h_b_val = h_b.GetValue()
495
496h_c = df_incorrect.Histo1D("c")
497h_c_val = h_c.GetValue()
498
499print(f"How many times was the data set processed? {df_wrong.GetNRuns()} times.") # The answer will be 3 times.
500~~~
501
502It is therefore good practice to declare all your transformations and actions *before* accessing their results, allowing
503RDataFrame to run the loop once and produce all results in one go.
504
505### Going parallel
506Let's say we would like to run the previous examples in parallel on several cores, dividing events fairly between cores.
507The only modification required to the snippets would be the addition of this line *before* constructing the main
508dataframe object:
509~~~{.cpp}
510ROOT::EnableImplicitMT();
511~~~
512Simple as that. More details are given [below](#parallel-execution).
513
514\anchor working_with_collections
515## Working with collections and object selections
516
517RDataFrame reads collections as the special type [ROOT::RVec](https://root.cern/doc/master/classROOT_1_1VecOps_1_1RVec.html): for example, a column containing an array of floating point numbers can be read as a ROOT::RVecF. C-style arrays (with variable or static size), STL vectors and most other collection types can be read this way.
518
519RVec is a container similar to std::vector (and can be used just like a std::vector) but it also offers a rich interface to operate on the array elements in a vectorised fashion, similarly to Python's NumPy arrays.
520
521For example, to fill a histogram with the "pt" of selected particles for each event, Define() can be used to create a column that contains the desired array elements as follows:
522
523~~~{.cpp}
524// h is filled with all the elements of `good_pts`, for each event
525auto h = df.Define("good_pts", [](const ROOT::RVecF &pt) { return pt[pt > 0]; }, {"pt"})
526 .Histo1D("good_pts");
527~~~
528
529And in Python:
530
531~~~{.py}
532h = df.Define("good_pts", "pt[pt > 0]").Histo1D("good_pts")
533~~~
534
535Learn more at ROOT::VecOps::RVec.
536
537\anchor transformations
538## Transformations: manipulating data
539\anchor Filters
540### Filters
541A filter is created through a call to `Filter(f, columnList)` or `Filter(filterString)`. In the first overload, `f` can
542be a function, a lambda expression, a functor class, or any other callable object. It must return a `bool` signalling
543whether the event has passed the selection (`true`) or not (`false`). It should perform "read-only" operations on the
544columns, and should not have side-effects (e.g. modification of an external or static variable) to ensure correctness
545when implicit multi-threading is active. The second overload takes a string with a valid C++ expression in which column
546names are used as variable names (e.g. `Filter("x[0] + x[1] > 0")`). This is a convenience feature that comes with a
547certain runtime overhead: C++ code has to be generated on the fly from this expression before using it in the event
548loop. See the paragraph about "Just-in-time compilation" below for more information.
549
550RDataFrame only evaluates filters when necessary: if multiple filters are chained one after another, they are executed
551in order and the first one returning `false` causes the event to be discarded and triggers the processing of the next
552entry. If multiple actions or transformations depend on the same filter, that filter is not executed multiple times for
553each entry: after the first access it simply serves a cached result.
554
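For example, two actions can share the same Filter() node; the filter expression is evaluated at most once per entry and both actions see its cached result (assuming columns `MET`, `pt_v` and `jet_eta`):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto filtered = df.Filter("MET > 4.");  // a single filter node...
auto h1 = filtered.Histo1D("pt_v");     // ...shared by two actions: the filter
auto h2 = filtered.Histo1D("jet_eta");  // result is computed once per entry
~~~
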
555\anchor named-filters-and-cutflow-reports
556#### Named filters and cutflow reports
557An optional string parameter `name` can be passed to the Filter() method to create a **named filter**. Named filters
558work as usual, but also keep track of how many entries they accept and reject.
559
560Statistics are retrieved through a call to the Report() method:
561
562- when Report() is called on the main RDataFrame object, it returns a ROOT::RDF::RResultPtr<RCutFlowReport> relative to all
563named filters declared up to that point
564- when called on a specific node (e.g. the result of a Define() or Filter()), it returns a ROOT::RDF::RResultPtr<RCutFlowReport>
565relative to all named filters in the section of the chain between the main RDataFrame and that node (included).
566
567Stats are stored in the same order as named filters have been added to the graph, and *refer to the latest event-loop*
568that has been run using the relevant RDataFrame.
569
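A minimal sketch of a cutflow report built from named filters (column names and cut values are hypothetical):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto df2 = df.Filter("MET > 4.", "metCut")     // named filters keep track of
             .Filter("nJets >= 2", "jetCut");  // how many entries they accept and reject
auto report = df2.Report(); // lazy, like other actions
report->Print();            // prints pass/all statistics for "metCut" and "jetCut"
~~~
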
570\anchor ranges
571### Ranges
572When RDataFrame is not being used in a multi-thread environment (i.e. no call to EnableImplicitMT() was made),
573Range() transformations are available. These act very much like filters but instead of basing their decision on
574a filter expression, they rely on `begin`,`end` and `stride` parameters.
575
576- `begin`: initial entry number considered for this range.
577- `end`: final entry number (excluded) considered for this range. 0 means that the range goes until the end of the dataset.
578- `stride`: process one entry of the [begin, end) range every `stride` entries. Must be strictly greater than 0.
579
580The actual number of entries processed downstream of a Range() node will be `(end - begin)/stride` (or fewer if fewer
581entries than that are available).
582
583Note that ranges act "locally", not based on the global entry count: `Range(10,50)` means "skip the first 10 entries
584*that reach this node*, let the next 40 entries pass, then stop processing". If a range node hangs from a filter node,
585and the range has a `begin` parameter of 10, that means the range will skip the first 10 entries *that pass the
586preceding filter*.
587
588Ranges allow "early quitting": if all branches of execution of a functional graph reached their `end` value of
589processed entries, the event-loop is immediately interrupted. This is useful for debugging and quick data explorations.
590
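A sketch illustrating the "local" behaviour of ranges and the `(end - begin)/stride` rule (column names are hypothetical):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
// skip the first 10 entries that pass the MET cut, then let the next 40 through
auto h = df.Filter("MET > 4.").Range(10, 50).Histo1D("pt_v");
// process at most (100 - 0)/2 = 50 entries: every second entry among the first 100
auto c = df.Range(0, 100, 2).Count();
~~~
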
591\anchor custom-columns
592### Custom columns
593Custom columns are created by invoking `Define(name, f, columnList)`. As usual, `f` can be any callable object
594(function, lambda expression, functor class...); it takes the values of the columns listed in `columnList` (a list of
595strings) as parameters, in the same order as they are listed in `columnList`. `f` must return the value that will be
596assigned to the temporary column.
597
598A new column called `name` is created, and it is accessible from subsequent transformations/actions as if it were
599contained in the dataset.
600
601Use cases include:
602- caching the results of complex calculations for easy and efficient multiple access
603- extraction of quantities of interest from complex objects
604- branch aliasing, i.e. changing the name of a branch
605
606An exception is thrown if the `name` of the new column/branch is already in use for another branch in the TTree.
607
608It is also possible to specify the quantity to be stored in the new temporary column as a C++ expression with the method
609`Define(name, expression)`. For example this invocation
610
611~~~{.cpp}
612df.Define("pt", "sqrt(px*px + py*py)");
613~~~
614
615will create a new column called "pt", the value of which is calculated starting from the columns px and py. The system
616builds a just-in-time compiled function starting from the expression after having deduced the list of necessary branches
617from the names of the variables specified by the user.
618
619#### Custom columns as function of slot and entry number
620
621It is also possible to create custom columns as a function of the processing slot and entry number. The methods that can
622be invoked are:
623- `DefineSlot(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, T1, T2, ...)`: the
624first parameter is the slot number, which ranges from 0 to ROOT::GetThreadPoolSize() - 1.
625- `DefineSlotEntry(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, ULong64_t,
626T1, T2, ...)`: the first parameter is the slot number while the second one is the number of the entry being processed.
627
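As an example, DefineSlot() makes it easy to use one random number generator per processing slot, avoiding data races in multi-thread runs (a minimal sketch):

~~~{.cpp}
ROOT::EnableImplicitMT();
const auto nSlots = std::max(1u, ROOT::GetThreadPoolSize());
std::vector<TRandom3> rngs;
for (auto i = 0u; i < nSlots; ++i)
   rngs.emplace_back(i + 1); // one independently-seeded generator per processing slot
ROOT::RDataFrame df(1000);
auto dfSmear = df.DefineSlot("smear", [&rngs](unsigned int slot) { return rngs[slot].Gaus(0., 1.); });
auto h = dfSmear.Histo1D("smear");
~~~
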
628\anchor actions
629## Actions: getting results
630### Instant and lazy actions
631Actions can be **instant** or **lazy**. Instant actions are executed as soon as they are called, while lazy actions are
632executed whenever the object they return is accessed for the first time. As a rule of thumb, actions with a return value
633are lazy, the others are instant.
634
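For example (assuming a tree with a `MET` branch), booking a lazy action does nothing by itself, while an instant action such as Foreach() runs the event loop right away and produces all previously booked results in the same loop:

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto c = df.Count();                                        // lazy: only booked
df.Foreach([](double met) { /* process met */ }, {"MET"});  // instant: the event loop runs here
std::cout << *c << '\n';                                    // already produced in that same loop
~~~
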
635### Return type of a lazy action
636
637When a lazy action is called, it returns a \link ROOT::RDF::RResultPtr ROOT::RDF::RResultPtr<T>\endlink, where T is the
638type of the result of the action. The final result will be stored in the `RResultPtr`, and can be retrieved by
639dereferencing it or via its `GetValue` method. Retrieving the result also starts the event loop if the result
640hasn't been produced yet.
641
642The RResultPtr shares ownership of the result object. To directly access the result, use:
643~~~{.cpp}
644ROOT::RDF::RResultPtr<TH1D> histo = rdf.Histo1D(...);
645histo->Draw(); // Starts running the event loop
646~~~
647
648To return results from functions, a copy of the underlying shared_ptr can be obtained:
649~~~{.cpp}
650std::shared_ptr<TH1D> ProduceResult(const char *columnname) {
651 ROOT::RDF::RResultPtr<TH1D> histo = rdf.Histo1D(columnname);
652 return histo.GetSharedPtr(); // Runs the event loop
653}
654~~~
655If the result were returned by reference or bare pointer, it would be destroyed
656when the function exits.
657
658To share ownership but not produce the result ("keep it lazy"), copy the RResultPtr:
659~~~{.cpp}
660std::vector<ROOT::RDF::RResultPtr<TH1D>> allHistograms;
661ROOT::RDF::RResultPtr<TH1D> BookHistogram(const char *columnname) {
662 ROOT::RDF::RResultPtr<TH1D> histo = rdf.Histo1D(columnname);
663 allHistograms.push_back(histo); // Will not produce the result yet
664 return histo;
665}
666~~~
667
668
669### Actions that return collections
670
671If the type of the return value of an action is a collection, e.g. `std::vector<int>`, you can iterate its elements
672directly through the wrapping `RResultPtr`:
673
674~~~{.cpp}
675ROOT::RDataFrame df{5};
676auto df1 = df.Define("x", []{ return 42; });
677for (const auto &el: df1.Take<int>("x")){
678 std::cout << "Element: " << el << "\n";
679}
680~~~
681
682~~~{.py}
683df = ROOT.RDataFrame(5).Define("x", "42")
684for el in df.Take[int]("x"):
685 print(f"Element: {el}")
686~~~
687
688### Actions and readers
689
690An action that needs values for its computations will request them from a reader, e.g. a column created via `Define` or
691available from the input dataset. The action will request values from each column of the list of input columns (either
692inferred or specified by the user), in order. For example:
693
694~~~{.cpp}
695ROOT::RDataFrame df{1};
696auto df1 = df.Define("x", []{ return 11; });
697auto df2 = df1.Define("y", []{ return 22; });
698auto graph = df2.Graph<int, int>("x","y");
699~~~
700
701The `Graph` action is going to request first the value from column "x", then that of column "y". Specifically, the order
702of execution of the operations of nodes in this branch of the computation graph is guaranteed to be top to bottom.
703
704\anchor rdf_distrdf
705## Distributed execution
706
707RDataFrame applications can be executed in parallel through distributed computing frameworks on a set of remote machines
708thanks to the Python package `ROOT.RDF.Distributed`. This **Python-only** package allows scaling the
709optimized performance RDataFrame can achieve on a single machine to multiple nodes at the same time. It is designed so
710that different backends can be easily plugged in, currently supporting [Apache Spark](http://spark.apache.org/) and
711[Dask](https://dask.org/). Here is a minimal example usage of distributed RDataFrame:
712
713~~~{.py}
714import ROOT
715from distributed import Client
716# It still accepts the same constructor arguments as traditional RDataFrame
717# but needs a client object which allows connecting to one of the supported
718# schedulers (read more info below)
719client = Client(...)
720df = ROOT.RDataFrame("mytree", "myfile.root", executor=client)
721
722# Continue the application with the traditional RDataFrame API
723sum = df.Filter("x > 10").Sum("y")
724h = df.Histo1D(("name", "title", 10, 0, 10), "x")
725
726print(sum.GetValue())
727h.Draw()
728~~~
729
730The main goal of this package is to support running any RDataFrame application distributedly. Nonetheless, not all
731parts of the RDataFrame API currently work with this package. The subset that is currently available is:
732- Alias
733- AsNumpy
734- Count
735- DefaultValueFor
736- Define
737- DefinePerSample
738- Filter
739- FilterAvailable
740- FilterMissing
741- Graph
742- Histo[1,2,3]D
743- HistoND, HistoNSparseD
744- Max
745- Mean
746- Min
747- Profile[1,2,3]D
748- Redefine
749- Snapshot
750- Stats
751- StdDev
752- Sum
753- Systematic variations: Vary and [VariationsFor](\ref ROOT::RDF::Experimental::VariationsFor).
754- Parallel submission of distributed graphs: [RunGraphs](\ref ROOT::RDF::RunGraphs).
755- Information about the dataframe: GetColumnNames.
756
757with support for more operations coming in the future. The currently supported data sources are TTree, TChain, RNTuple and RDatasetSpec.
758
759### Connecting to a Spark cluster
760
761In order to distribute the RDataFrame workload, you can connect to a Spark cluster you have access to through the
762official [Spark API](https://spark.apache.org/docs/latest/rdd-programming-guide.html#initializing-spark), then hook the
763connection instance to the distributed `RDataFrame` object like so:
764
765~~~{.py}
766import pyspark
767import ROOT
768
769# Create a SparkContext object with the right configuration for your Spark cluster
770conf = pyspark.SparkConf().setAppName(appName).setMaster(master)
771sc = pyspark.SparkContext(conf=conf)
772
773# The Spark RDataFrame constructor accepts an optional "sparkcontext" parameter
774# and it will distribute the application to the connected cluster
775df = ROOT.RDataFrame("mytree", "myfile.root", executor = sc)
776~~~
777
778Note that with the usage above the case of `executor = None` is not supported. One
779can explicitly create a `ROOT.RDF.Distributed.Spark.RDataFrame` object
780in order to get a default instance of
781[SparkContext](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.SparkContext.html)
782in case it is not already provided as an argument.
783
784### Connecting to a Dask cluster
785
786Similarly, you can connect to a Dask cluster by creating your own connection object which internally operates with one
787of the cluster schedulers supported by Dask (more information in the
788[Dask distributed docs](http://distributed.dask.org/en/stable/)):
789
790~~~{.py}
791import ROOT
792from dask.distributed import Client
793# In a Python script the Dask client needs to be initialized in a context
794# Jupyter notebooks / Python sessions don't need this
795if __name__ == "__main__":
796 # With an already setup cluster that exposes a Dask scheduler endpoint
797 client = Client("dask_scheduler.domain.com:8786")
798
799 # The Dask RDataFrame constructor accepts the Dask Client object as an optional argument
800 df = ROOT.RDataFrame("mytree","myfile.root", executor=client)
801 # Proceed as usual
802 df.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
803~~~
804
805Note that with the usage above the case of `executor = None` is not supported. One
806can explicitly create a `ROOT.RDF.Distributed.Dask.RDataFrame` object
807in order to get a default instance of
808[distributed.Client](http://distributed.dask.org/en/stable/api.html#distributed.Client)
809in case it is not already provided as an argument. This will run multiple processes
810on the local machine using all available cores.
811
812### Choosing the number of distributed tasks
813
814A distributed RDataFrame has internal logic to define in how many chunks the input dataset will be split before sending
815tasks to the distributed backend. Each task reads and processes one of said chunks. The logic is backend-dependent, but
816generically tries to infer how many cores are available in the cluster through the connection object. The number of
817tasks will be equal to the inferred number of cores. There are cases where the connection object of the chosen backend
818doesn't have information about the actual resources of the cluster. An example of this is when using Dask to connect to
819a batch system. The client object created at the beginning of the application does not automatically know how many cores
820will be available during distributed execution, since the jobs are submitted to the batch system after the creation of
821the connection. In such cases, the default is to process the whole dataset in 2 tasks.
822
823The number of tasks submitted for distributed execution can also be set programmatically, by providing the optional
824keyword argument `npartitions` when creating the RDataFrame object. This parameter is accepted irrespective of the
825backend used:
826
827~~~{.py}
828import ROOT
829
830if __name__ == "__main__":
831 # The `npartitions` optional argument tells the RDataFrame how many tasks are desired
832 df = ROOT.RDataFrame("mytree", "myfile.root", executor=SupportedExecutor(...), npartitions=NPARTITIONS)
833 # Proceed as usual
834 df.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
835~~~
836
837Note that when processing a TTree or TChain dataset, the `npartitions` value should not exceed the number of clusters in
838the dataset. The number of clusters in a TTree can be retrieved by typing `rootls -lt myfile.root` at a command line.
839
840### Distributed FromSpec
841
842RDataFrame can also be built from a JSON sample specification file using the FromSpec function. In distributed mode, two arguments need to be provided: the path to the specification
843JSON file (same as in the local RDF case) and an additional executor argument which, in the same manner as for the RDataFrame constructors above, can either be a Spark connection or a Dask client.
844If no second argument is given, the local version of FromSpec is run. Here is an example of FromSpec usage in distributed RDF using either the Spark or the Dask backend.
845For more information on the FromSpec functionality itself please refer to the [FromSpec](\ref rdf-from-spec) documentation. Note that adding metadata and friend information is supported,
846but a global range will not be respected in the distributed execution.
847
848Using spark:
849~~~{.py}
850import pyspark
851import ROOT
852
853conf = pyspark.SparkConf().setAppName(appName).setMaster(master)
854sc = pyspark.SparkContext(conf=conf)
855
856# The FromSpec function accepts an optional "sparkcontext" parameter
857# and it will distribute the application to the connected cluster
858df_fromspec = ROOT.RDF.Experimental.FromSpec("myspec.json", executor = sc)
859# Proceed as usual
860df_fromspec.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
861~~~
862
863Using dask:
864~~~{.py}
865import ROOT
866from dask.distributed import Client
867
868if __name__ == "__main__":
869 client = Client("dask_scheduler.domain.com:8786")
870
871 # The FromSpec function accepts the Dask Client object as an optional argument
872 df_fromspec = ROOT.RDF.Experimental.FromSpec("myspec.json", executor=client)
873 # Proceed as usual
874 df_fromspec.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
875~~~
876
877### Distributed Snapshot
878
879The Snapshot operation behaves slightly differently when executed distributedly. First off, it requires the path
880supplied to the Snapshot call to be accessible from any worker of the cluster and from the client machine (in general
881it should be provided as an absolute path). Another important difference is that `n` separate files will be produced,
882where `n` is the number of dataset partitions. As with local RDataFrame, the result of a Snapshot on a distributed
883RDataFrame is another distributed RDataFrame on which we can define a new computation graph and run more distributed
884computations.
885
886### Distributed RunGraphs
887
888Submitting multiple distributed RDataFrame executions is supported through the RunGraphs function. Similarly to its
889local counterpart, the function expects an iterable of objects representing an RDataFrame action. Each action will be
890triggered concurrently to send multiple computation graphs to a distributed cluster at the same time:
891
892~~~{.py}
893import ROOT
894
895# Create 3 different dataframes and book a histogram on each one
896histoproxies = [
897 ROOT.RDataFrame(100, executor=SupportedExecutor(...))
898 .Define("x", "rdfentry_")
899 .Histo1D(("name", "title", 10, 0, 100), "x")
900 for _ in range(3)
901]
902
903# Execute the 3 computation graphs
904ROOT.RDF.RunGraphs(histoproxies)
905# Retrieve all the histograms in one go
906histos = [histoproxy.GetValue() for histoproxy in histoproxies]
907~~~
908
909Every distributed backend supports this feature and graphs belonging to different backends can still be triggered with
910a single call to RunGraphs (e.g. it is possible to send a Spark job and a Dask job at the same time).
911
912### Histogram models in distributed mode
913
914When calling a Histo*D operation in distributed mode, remember to pass to the function the model of the histogram to be
915computed, e.g. the axis range and the number of bins:
916
917~~~{.py}
918import ROOT
919
920if __name__ == "__main__":
921 df = ROOT.RDataFrame("mytree","myfile.root",executor=SupportedExecutor(...)).Define("x","someoperation")
922 # The model can be passed either as a tuple with the arguments in the correct order
923 df.Histo1D(("name", "title", 10, 0, 10), "x")
924 # Or by creating the specific struct
925 model = ROOT.RDF.TH1DModel("name", "title", 10, 0, 10)
926 df.Histo1D(model, "x")
927~~~
928
929Without this, two partial histograms resulting from two distributed tasks would have incompatible binning, thus leading
930to errors when merging them. Failing to pass a histogram model will raise an error on the client side, before starting
931the distributed execution.
932
933### Live visualization in distributed mode with dask
934
935The live visualization feature allows real-time data representation of plots generated during the execution
936of a distributed RDataFrame application.
937It enables visualizing intermediate results as they are computed across multiple nodes of a Dask cluster
938by creating a canvas and continuously updating it as partial results become available.
939
940The LiveVisualize() function can be imported from the Python package **ROOT.RDF.Distributed**:
941
942~~~{.py}
943import ROOT
944
945LiveVisualize = ROOT.RDF.Distributed.LiveVisualize
946~~~
947
948The function takes drawable objects (e.g. histograms) and optional callback functions as arguments. It accepts 4 different input formats:
949
950- Passing a list or tuple of drawables:
951You can pass a list or tuple containing the plots you want to visualize. For example:
952
953~~~{.py}
954LiveVisualize([h_gaus, h_exp, h_random])
955~~~
956
957- Passing a list or tuple of drawables with a global callback function:
958You can also include a global callback function that will be applied to all plots. For example:
959
960~~~{.py}
961def set_fill_color(hist):
962 hist.SetFillColor(ROOT.kBlue)
963
964LiveVisualize([h_gaus, h_exp, h_random], set_fill_color)
965~~~
966
967- Passing a dictionary of drawables and callback functions:
968For more control, you can create a dictionary where keys are plots and values are corresponding (optional) callback functions. For example:
969
970~~~{.py}
971plot_callback_dict = {
972 graph: set_marker,
973 h_exp: fit_exp,
974 tprofile_2d: None
975}
976
977LiveVisualize(plot_callback_dict)
978~~~
979
980- Passing a dictionary of drawables and callback functions with a global callback function:
981You can also combine a dictionary of plots and callbacks with a global callback function:
982
983~~~{.py}
984LiveVisualize(plot_callback_dict, write_to_tfile)
985~~~
986
987\note The allowed operations to pass to LiveVisualize are:
988 - Histo1D(), Histo2D(), Histo3D()
989 - Graph()
990 - Profile1D(), Profile2D()
991
992\warning The Live Visualization feature is only supported for the Dask backend.
993
994### Injecting C++ code and using external files into distributed RDF script
995
996Distributed RDF provides an interface for users who want to inject C++ code (via header files, shared libraries, or direct declarations)
997into their distributed RDF application, or whose application needs information from external files which should be distributed
998to the workers (for example, a JSON or txt file with the necessary parameters).
999
1000The examples below show the usage of these interface functions: first, how this is done in a local Python
1001RDF application and second, how it is done in a distributed one.
1002
1003#### Include and distribute header files.
1004
1005~~~{.py}
1006# Local RDataFrame script
1007ROOT.gInterpreter.Declare('#include "myheader.hxx"')
1008df.Define(...)
1009
1010# Distributed RDF script
1011ROOT.RDF.Distributed.DistributeHeaders("myheader.hxx")
1012df.Define(...)
1013~~~
1014
1015#### Load and distribute shared libraries
1016
1017~~~{.py}
1018# Local RDataFrame script
1019ROOT.gSystem.Load("my_library.so")
1020df.Define(...)
1021
1022# Distributed RDF script
1023ROOT.RDF.Distributed.DistributeSharedLibs("my_library.so")
1024df.Define(...)
1025~~~
1026
1027#### Declare and distribute the C++ code
1028
1029The declared C++ code is always available to all dataframes.
1030
1031~~~{.py}
1032# Local RDataFrame script
1033ROOT.gInterpreter.Declare("my_code")
1034df.Define(...)
1035
1036# Distributed RDF script
1037ROOT.RDF.Distributed.DistributeCppCode("my_code")
1038df.Define(...)
1039~~~
1040
1041#### Distribute additional files (other than headers or shared libraries).
1042
1043~~~{.py}
1044# A local RDataFrame script is not applicable here, as a local RDF application can simply access the external files it needs.
1045
1046# Distributed RDF script
1047ROOT.RDF.Distributed.DistributeFiles("my_file")
1048df.Define(...)
1049~~~
1050
1051\anchor parallel-execution
1052## Performance tips and parallel execution
1053As pointed out before in this document, RDataFrame can transparently perform multi-threaded event loops to speed up
1054the execution of its actions. Users have to call ROOT::EnableImplicitMT() *before* constructing the RDataFrame
1055object to indicate that it should take advantage of a pool of worker threads. **Each worker thread processes a distinct
1056subset of entries**, and their partial results are merged before returning the final values to the user.
1057
1058By default, RDataFrame will use as many threads as the hardware supports, using up **all** the resources on
1059a machine. This might be undesirable on shared computing resources such as a batch cluster. Therefore, when running on shared resources, use
1060~~~{.cpp}
1061ROOT::EnableImplicitMT(numThreads)
1062~~~
1063or export an environment variable:
1064~~~{.sh}
1065export ROOT_MAX_THREADS=numThreads
1066root.exe rdfAnalysis.cxx
1067# or
1068ROOT_MAX_THREADS=4 python rdfAnalysis.py
1069~~~
1070replacing `numThreads` with the number of CPUs/slots that were allocated for this job.
1071
1072\warning There are no guarantees on the order in which threads will process the batches of
1073entries. In particular, note that this means that, for multi-thread event loops, there is no
1074guarantee on the order in which Snapshot() will _write_ entries: they could be scrambled with respect to the input
1075dataset. The values of the special `rdfentry_` column will also not correspond to the entry numbers in the input dataset (e.g. TChain) in multi-threaded
1076runs. Likewise, Take(), AsNumpy(), ... do not preserve the original ordering.
1077
1078### Thread-safety of user-defined expressions
1079RDataFrame operations such as Histo1D() or Snapshot() are guaranteed to work correctly in multi-thread event loops.
1080User-defined expressions, such as strings or lambdas passed to Filter(), Define(), Foreach(), Reduce() or Aggregate(),
1081must be thread-safe, i.e. it should be possible to call them concurrently from different threads.
1082
1083Note that simple Filter() and Define() transformations will inherently satisfy this requirement: Filter() / Define()
1084expressions will often be *pure* in the functional programming sense (no side-effects, no dependency on external state),
1085which eliminates all risks of race conditions.
1086
1087In order to facilitate writing of thread-safe operations, some RDataFrame features such as Foreach(), Define() or \link ROOT::RDF::RResultPtr::OnPartialResult OnPartialResult()\endlink
1088offer thread-aware counterparts (ForeachSlot(), DefineSlot(), \link ROOT::RDF::RResultPtr::OnPartialResultSlot OnPartialResultSlot()\endlink): their only difference is that they
1089will pass an extra `slot` argument (an unsigned integer) to the user-defined expression. When calling user-defined code
1090concurrently, RDataFrame guarantees that different threads will see different values of the `slot` parameter,
1091where `slot` will be a number between 0 and `GetNSlots() - 1`. Note that not all slot numbers may be reached, or some slots may be reached more often than others, depending on how computation tasks are scheduled.
1092Within a given slot, computations run sequentially and events are processed sequentially.
1093Note that the same slot might be associated to different threads over the course of a single event loop, but two threads
1094will never receive the same slot at the same time.
1095This extra parameter might facilitate writing safe parallel code by having each thread write/modify a different
1096processing slot, e.g. a different element of a list. See [here](#generic-actions) for an example usage of ForeachSlot().
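
As an illustration, here is a minimal sketch of DefineSlot() (assuming an existing node `df` with a `double` column "x" and an event loop run after ROOT::EnableImplicitMT()): each processing slot uses its own TRandom3 instance, so the user code needs no locking:

~~~{.cpp}
// One random number generator per processing slot: two threads never use the
// same slot (and thus the same generator) at the same time.
// In real code, give each generator a distinct seed.
std::vector<TRandom3> rngs(df.GetNSlots());
auto dfSmeared = df.DefineSlot(
   "x_smeared",
   [&rngs](unsigned int slot, double x) { return x + rngs[slot].Gaus(0., 0.1); },
   {"x"});
~~~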
1097
1098### Parallel execution of multiple RDataFrame event loops
1099A complex analysis may require multiple separate RDataFrame computation graphs to produce all desired results. This poses the challenge that each
1100event loop can be parallelized internally, but the different loops run sequentially, one after the other.
1101On many-core architectures it might be desirable to run different event loops concurrently to improve resource usage.
1102ROOT::RDF::RunGraphs() allows running multiple RDataFrame event loops concurrently:
1103~~~{.cpp}
1104ROOT::EnableImplicitMT();
1105ROOT::RDataFrame df1("tree1", "f1.root");
1106ROOT::RDataFrame df2("tree2", "f2.root");
1107auto histo1 = df1.Histo1D("x");
1108auto histo2 = df2.Histo1D("y");
1109
1110// just accessing result pointers, the event loops of separate RDataFrames run one after the other
1111histo1->Draw(); // runs first multi-thread event loop
1112histo2->Draw(); // runs second multi-thread event loop
1113
1114// alternatively, with ROOT::RDF::RunGraphs, event loops for separate computation graphs can run concurrently
1115ROOT::RDF::RunGraphs({histo1, histo2});
1116histo1->Draw(); // results can then be used as usual
1117~~~
1118
1119### Performance considerations
1120
1121To obtain the maximum performance out of RDataFrame, make sure to avoid just-in-time compiled versions of transformations and actions if at all possible.
1122For instance, `Filter("x > 0")` requires just-in-time compilation of the corresponding C++ logic, while the equivalent `Filter([](float x) { return x > 0.; }, {"x"})` does not.
1123Similarly, `Histo1D("x")` requires just-in-time compilation after the type of `x` is retrieved from the dataset, while `Histo1D<float>("x")` does not; the latter spelling
1124should be preferred for performance-critical applications.
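
As a minimal sketch (assuming a `float` column "x"), the two spellings of the same computation look as follows; both produce the same histogram, but only the first requires just-in-time compilation:

~~~{.cpp}
// Jitted: expression and column types are figured out at runtime.
auto hJitted = df.Filter("x > 0").Histo1D("x");

// Fully compiled: the callable and the template parameter make all types known at compile time.
auto hCompiled = df.Filter([](float x) { return x > 0.f; }, {"x"}).Histo1D<float>("x");
~~~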
1125
1126Python applications cannot easily specify template parameters or pass C++ callables to RDataFrame.
1127See [Python interface](classROOT_1_1RDataFrame.html#python) for possible ways to speed up hot paths in this case.
1128
1129Just-in-time compilation happens once, right before starting an event loop. To reduce the runtime cost of this step, make sure to book all operations *for all RDataFrame computation graphs*
1130before the first event loop is triggered: just-in-time compilation will happen once for all code required to be generated up to that point, also across different computation graphs.
1131
1132Also make sure not to count the just-in-time compilation time (which happens once before the event loop and does not depend on the size of the dataset) as part of the event loop runtime (which scales with the size of the dataset). RDataFrame has an experimental logging feature that simplifies measuring the time spent in just-in-time compilation and in the event loop (as well as providing some more interesting information). See [Activating RDataFrame execution logs](\ref rdf-logging).
1133
1134### Memory usage
1135
1136There are two reasons why RDataFrame may consume more memory than expected.
1137
1138#### 1. Histograms in multi-threaded mode
1139In multithreaded runs, each worker thread will create a local copy of histograms, which e.g. in case of many (possibly multi-dimensional) histograms with fine binning can result in significant memory consumption during the event loop.
1140The thread-local copies of the results are destroyed when the final result is produced. Reducing the number of threads or using coarser binning will reduce the memory usage.
1141For three-dimensional histograms, the number of clones can be reduced using ROOT::RDF::Experimental::ThreadsPerTH3().
1142~~~{.cpp}
1143#include "ROOT/RDFHelpers.hxx"
1144
1145// Make four threads share a TH3 instance:
1146ROOT::RDF::Experimental::ThreadsPerTH3(4);
1147ROOT::RDataFrame rdf(...);
1148~~~
1149
1150When TH3s are shared among threads, the histograms will either be filled under a lock (slowing down the execution) or using atomics if C++20 is available. The latter is significantly faster.
1151The best value for `ThreadsPerTH3` depends on the computation graph that runs. Use lower numbers such as 4 for speed and higher memory consumption, and higher numbers such as 16 for
1152slower execution and memory savings.
1153
1154#### 2. Just-in-time compilation
1155Just-in-time compilation of string expressions or non-templated actions (see the previous paragraph) causes Cling, ROOT's C++ interpreter, to allocate some memory for the generated code that is only released at the end of the application. This commonly results in memory usage creep in long-running applications that create many RDataFrames one after the other. Possible mitigations include creating and running each RDataFrame event loop in a sub-process, or booking all operations for all different RDataFrame computation graphs before the first event loop is triggered, so that the interpreter is invoked only once for all computation graphs:
1156
1157~~~{.cpp}
1158// assuming df1 and df2 are separate computation graphs, do:
1159auto h1 = df1.Histo1D("x");
1160auto h2 = df2.Histo1D("y");
1161h1->Draw(); // we just-in-time compile everything needed by df1 and df2 here
1162h2->Draw("SAME");
1163
1164// do not:
1165auto h1 = df1.Histo1D("x");
1166h1->Draw(); // we just-in-time compile here
1167auto h2 = df2.Histo1D("y");
1168h2->Draw("SAME"); // we just-in-time compile again here, as the second Histo1D call is new
1169~~~
1170
1171\anchor more-features
1172## More features
1173Here is a list of the most important features that have been omitted in the "Crash course" for brevity.
1174You don't need to read all these to start using RDataFrame, but they are useful to save typing time and runtime.
1175
1176\anchor systematics
1177### Systematic variations
1178
1179Starting from ROOT v6.26, RDataFrame provides a flexible syntax to define systematic variations.
1180This is done in two steps: a) register variations for one or more existing columns using Vary() and b) extract variations
1181of normal RDataFrame results using \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()". In between these steps, no other change
1182to the analysis code is required: the presence of systematic variations for certain columns is automatically propagated
1183through filters, defines and actions, and RDataFrame will take these dependencies into account when producing varied
1184results. \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" is included in the header `ROOT/RDFHelpers.hxx`. Compiled C++ programs must include this header
1185explicitly; this is not required for ROOT macros.
1186
1187An example usage of Vary() and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in C++:
1188
1189~~~{.cpp}
1190auto nominal_hx =
1191 df.Vary("pt", "ROOT::RVecD{pt*0.9f, pt*1.1f}", {"down", "up"})
1192 .Filter("pt > pt_cut")
1193 .Define("x", someFunc, {"pt"})
1194 .Histo1D<float>("x");
1195
1196// request the generation of varied results from the nominal_hx
1197ROOT::RDF::Experimental::RResultMap<TH1D> hx = ROOT::RDF::Experimental::VariationsFor(nominal_hx);
1198
1199// the event loop runs here, upon first access to any of the results or varied results:
1200hx["nominal"].Draw(); // same effect as nominal_hx->Draw()
1201hx["pt:down"].Draw("SAME");
1202hx["pt:up"].Draw("SAME");
1203~~~
1204
1205A list of variation "tags" is passed as the last argument to Vary(). The tags give names to the varied values that are returned
1206as elements of an RVec of the appropriate C++ type. The number of variation tags must correspond to the number of elements of
1207this RVec (2 in the example above: the first element will correspond to the tag "down", the second
1208to the tag "up"). The _full_ variation name will be composed of the varied column name and the variation tags (e.g.
1209"pt:down", "pt:up" in this example). Python usage looks similar.
1210
1211Note how we use the "pt" column as usual in the Filter() and Define() calls and we simply use "x" as the value to fill
1212the resulting histogram. To produce the varied results, RDataFrame will automatically execute the Filter and Define
1213calls for each variation and fill the histogram with values and cuts that depend on the variation.
1214
1215There is no limitation to the complexity of a Vary() expression. Just like for the Define() and Filter() calls, users are
1216not limited to string expressions but they can also pass any valid C++ callable, including lambda functions and
1217complex functors. The callable can be applied to zero or more existing columns and it will always receive their
1218_nominal_ values as input.
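
For instance, the string expression used in the snippet above could equivalently be passed as a lambda; a minimal sketch (assuming a `float` column "pt"):

~~~{.cpp}
// Same 10% down/up variation of "pt", expressed as a C++ callable:
auto df_varied = df.Vary(
   "pt",
   [](float pt) { return ROOT::RVecF{pt * 0.9f, pt * 1.1f}; },
   {"pt"},           // input columns passed to the callable
   {"down", "up"});  // variation tags
~~~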
1219
1220\note - Currently, VariationsFor() and RResultMap are in the `ROOT::RDF::Experimental` namespace, to indicate that these
1221 interfaces can still evolve and improve based on user feedback. Please send feedback on https://github.com/root-project/root.
1222
1223\note - The results of Display() cannot be varied (i.e. it is not possible to call \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()").
1224
1225See the Vary() method for more information and [this tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8C.html)
1226for an example usage of Vary and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in the analysis.
1227
1228
1229\anchor snapshot-with-variations
1230#### Snapshot with Variations
1231To combine Vary() and Snapshot(), the \ref ROOT::RDF::RSnapshotOptions::fIncludeVariations "fIncludeVariations" flag in \ref ROOT::RDF::RSnapshotOptions
1232"RSnapshotOptions" has to be set, as demonstrated in the following example:
1233
1234~~~{.cpp}
1235 ROOT::RDF::RSnapshotOptions options;
1236 options.fIncludeVariations = true;
1237
1238 auto s = ROOT::RDataFrame(N)
1239 .Define("x", [](ULong64_t e) -> float { return 10.f * e; }, {"rdfentry_"})
1240 .Vary(
1241 "x", [](float x) { return ROOT::RVecF{x - 0.5f, x + 0.5f}; }, {"x"}, 2, "xVar")
1242 .Define("y", [](float x) -> double { return -1. * x; }, {"x"})
1243 .Filter(cuts, {"x", "y"})
1244 .Snapshot("t", filename, {"x", "y"}, options);
1245~~~
1246
1247This snapshot action will create an output file with a dataset that includes the columns requested in the Snapshot() call
1248and their corresponding variations:
1249
1250| Row | `x` | `y` | `x__xVar_0` | `y__xVar_0` | `x__xVar_1` | `y__xVar_1` | `R_rdf_mask_t_0`|
1251|------|----------------:|----------------:|----------------:|----------------:|----------------:|----------------:|----------------:|
1252| 0 | 0 | -0 | -0.5 | 0.5 | 0.5 | -0.5 | 111 |
1253| 1 | 10 | -10 | 9.5 | -9.5 | 10.5 | -10.5 | 111 |
1254| 2 | 20 | -20 | 19.5 | -19.5 | 20.5 | -20.5 | 111 |
1255| 3 | 30 | -30 | 29.5 | -29.5 | 30.5 | -30.5 | 111 |
1256| 4 | 40 | -40 | 39.5 | -39.5 | 40.5 | -40.5 | 111 |
1257| 5 | 0 | 0 | 49.5 | -49.5 | 0 | 0 | 010 |
1258| 6 |`* Not written *`| | | | | | |
1259| 7 | 0 | 0 | 0 | 0 | 70.5 | -70.5 | 100 |
1260
1261"x" and "y" are the nominal values, and their variations are snapshot as `<ColumnName>__<VariationName>_<VariationTag>`.
1262
1263When Filters are employed, some variations might not pass the selection cuts (like rows 5 and 7 in the table above).
1264In that case, RDataFrame will snapshot the filtered columns in a memory-efficient way by writing a zero into the memory of fundamental types, or writing a
1265default-constructed object in the case of classes. If neither the nominal values nor any variation passes the filters, as in row 6, the entire event is omitted from the snapshot.
1266
1267To tell apart a genuine `0` (like `x` in row 0) from a case where nominal or variation didn't pass a selection,
1268RDataFrame writes a bitmask for each event, see the last column of the table above. Every bit indicates whether its
1269associated column is valid. The bitmask is implemented as a 64-bit `std::bitset` in memory, written to the output
1270dataset as a `std::uint64_t`. For every 64 columns, a new bitmask column is added to the output dataset.
1271
1272For each column that gets varied, the nominal and all variation columns are each assigned a bit to denote whether their
1273entries are valid. A mapping of column names to the corresponding bitmask is placed in the same file as the output
1274dataset, with a name that follows the pattern `"R_rdf_column_to_bitmask_mapping_<NAME_OF_THE_DATASET>"`. It is of type
1275`std::unordered_map<std::string, std::pair<std::string, unsigned int>>`, and maps a column name to the name of the
1276bitmask column and the index of the relevant bit. For example, in the same file as the dataset "Events" there would be
1277an object named `R_rdf_column_to_bitmask_mapping_Events`. This object would, for example, describe a connection such as:
1278
1279~~~
1280muon_pt --> (R_rdf_mask_Events_0, 42)
1281~~~
1282
1283which means that the validity of the entries in `muon_pt` is established by the bit `42` in the bitmask found in the
1284column `R_rdf_mask_Events_0`.
1285
1286When RDataFrame opens a file, it checks for the existence of this mapping between columns and bitmasks, and loads it automatically if found. As such,
1287RDataFrame makes the treatment of the bitmasks completely transparent to the user.
1288
1289In case certain values are labeled invalid by the corresponding bit, this will result in reading a missing value. The semantics of such a scenario follow the
1290rules described in the \ref missing-values "section on dealing with missing values" and can be dealt with accordingly.
1291
1292\note Snapshot with variations is currently restricted to single-threaded TTree snapshots.
1293
1294
1295#### Varying multiple columns in lockstep
1296
1297In the following Python snippet we use the Vary() signature that allows varying multiple columns simultaneously or
1298"in lockstep":
1299
1300~~~{.python}
1301df.Vary(["pt", "eta"],
1302 "RVec<RVecF>{{pt*0.9, pt*1.1}, {eta*0.9, eta*1.1}}",
1303 variationTags=["down", "up"],
1304 variationName="ptAndEta")
1305~~~
1306
1307The expression returns an RVec of two RVecs: each inner vector contains the varied values for one column. The
1308inner vectors follow the same ordering as the column names that are passed as the first argument. Besides the variation tags, in
1309this case we also have to explicitly pass the variation name (here: "ptAndEta"), as there is no single varied column from which to derive a default name.
1310
1311The above call will produce variations "ptAndEta:down" and "ptAndEta:up".
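
A minimal sketch of the corresponding call in C++ (assuming `float` columns "pt" and "eta"; as above, the expression must return an RVec of RVecs):

~~~{.cpp}
df.Vary({"pt", "eta"},
        "ROOT::RVec<ROOT::RVecF>{{pt*0.9f, pt*1.1f}, {eta*0.9f, eta*1.1f}}",
        {"down", "up"},
        "ptAndEta");
~~~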
1312
1313#### Combining multiple variations
1314
1315Even if a result depends on multiple variations, only one variation is applied at a time, i.e. there will be no result produced
1316by applying multiple systematic variations at the same time.
1317For example, in the following snippet, the RResultMap instance `all_hs` will contain keys "nominal", "pt:down",
1318"pt:up", "eta:0", "eta:1", but no "pt:up&&eta:0" or similar:
1319
1320~~~{.cpp}
1321auto df = _df.Vary("pt",
1322 "ROOT::RVecD{pt*0.9, pt*1.1}",
1323 {"down", "up"})
1324 .Vary("eta",
1325              [](float eta) { return ROOT::RVecF{eta*0.9f, eta*1.1f}; },
1326 {"eta"},
1327 2);
1328
1329auto nom_h = df.Histo2D(histoModel, "pt", "eta");
1330auto all_hs = VariationsFor(nom_h);
1331all_hs.GetKeys(); // returns {"nominal", "pt:down", "pt:up", "eta:0", "eta:1"}
1332~~~
1333
1334Note how we passed the integer `2` instead of a list of variation tags to the second Vary() invocation: this is a
1335shorthand that automatically generates tags 0 to N-1 (in this case 0 and 1).
1336
1337
1338\anchor rnode
1339### RDataFrame objects as function arguments and return values
1340RDataFrame variables/nodes are relatively cheap to copy and it's possible to both pass them to (or move them into)
1341functions and to return them from functions. However, in general each dataframe node will have a different C++ type,
1342which includes all available compile-time information about what that node does. One way to cope with this complication
1343is to use template functions and/or C++14 auto return types:
1344~~~{.cpp}
1345template <typename RDF>
1346auto ApplySomeFilters(RDF df)
1347{
1348 return df.Filter("x > 0").Filter([](int y) { return y < 0; }, {"y"});
1349}
1350~~~
1351
1352A possibly simpler, C++11-compatible alternative is to take advantage of the fact that any dataframe node can be
1353converted (implicitly or via an explicit cast) to the common type ROOT::RDF::RNode:
1354~~~{.cpp}
1355// a function that conditionally adds a Range to an RDataFrame node.
1356RNode MaybeAddRange(RNode df, bool mustAddRange)
1357{
1358 return mustAddRange ? df.Range(1) : df;
1359}
1360// use as :
1361ROOT::RDataFrame df(10);
1362auto maybeRangedDF = MaybeAddRange(df, true);
1363~~~
1364
1365The conversion to ROOT::RDF::RNode is cheap, but it will introduce an extra virtual call during the RDataFrame event
1366loop (in most cases, the resulting performance impact should be negligible). Python users can perform the conversion with the helper function `ROOT.RDF.AsRNode`.
1367
1368\anchor RDFCollections
1369### Storing RDataFrame objects in collections
1370
1371ROOT::RDF::RNode also makes it simple to store RDataFrame nodes in collections, e.g. a `std::vector<RNode>` or a `std::map<std::string, RNode>`:
1372
1373~~~{.cpp}
1374std::vector<ROOT::RDF::RNode> dfs;
1375dfs.emplace_back(ROOT::RDataFrame(10));
1376dfs.emplace_back(dfs[0].Define("x", "42.f"));
1377~~~
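
A map works the same way; a minimal sketch (with hypothetical tree and file names):

~~~{.cpp}
// std::map::operator[] requires default-constructible values, so use insert (or emplace):
std::map<std::string, ROOT::RDF::RNode> dfsByName;
dfsByName.insert({"data", ROOT::RDF::RNode(ROOT::RDataFrame("tree", "data.root"))});
dfsByName.insert({"mc", ROOT::RDF::RNode(ROOT::RDataFrame("tree", "mc.root"))});
~~~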
1378
1379\anchor callbacks
1380### Executing callbacks every N events
1381It's possible to schedule execution of arbitrary functions (callbacks) during the event loop.
1382Callbacks can be used e.g. to inspect partial results of the analysis while the event loop is running,
1383to draw a partially-filled histogram every time a certain number of new entries is processed, or
1384to display a progress bar while the event loop runs.
1385
1386For example one can draw an up-to-date version of a result histogram every 100 entries like this:
1387~~~{.cpp}
1388auto h = df.Histo1D("x");
1389TCanvas c("c","x hist");
1390h.OnPartialResult(100, [&c](TH1D &h_) { c.cd(); h_.Draw(); c.Update(); });
1391// event loop runs here, this final `Draw` is executed after the event loop is finished
1392h->Draw();
1393~~~
1394
1395Callbacks are registered to a ROOT::RDF::RResultPtr and must be callables that take a reference to the result type as argument
1396and return nothing. RDataFrame will invoke registered callbacks passing partial action results as arguments to them
1397(e.g. a histogram filled with a part of the selected events).
1398
1399Read more on ROOT::RDF::RResultPtr::OnPartialResult() and ROOT::RDF::RResultPtr::OnPartialResultSlot().
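
In multi-thread event loops the slot-aware variant can be used instead; a minimal sketch (assuming a column "x" and that ROOT::EnableImplicitMT() was called) in which each worker reports the size of its partial histogram every 1000 entries it processes:

~~~{.cpp}
std::mutex printMutex;
auto h = df.Histo1D("x");
h.OnPartialResultSlot(1000, [&printMutex](unsigned int slot, TH1D &partialHist) {
   std::lock_guard<std::mutex> lock(printMutex); // avoid interleaved printouts
   std::cout << "slot " << slot << ": " << partialHist.GetEntries() << " entries so far\n";
});
~~~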
1400
1401\anchor default-branches
1402### Default column lists
1403When constructing an RDataFrame object, it is possible to specify a **default column list** for your analysis, in the
1404usual form of a list of strings representing branch/column names. The default column list will be used as a fallback
1405whenever a list specific to the transformation/action is not present. RDataFrame will take as many of these columns as
1406needed, ignoring trailing extra names if present.
1407~~~{.cpp}
1408// use "b1" and "b2" as default columns
1409ROOT::RDataFrame d1("myTree", "file.root", {"b1","b2"});
1410auto h = d1.Filter([](int b1, int b2) { return b1 > b2; }) // will act on "b1" and "b2"
1411 .Histo1D(); // will act on "b1"
1412
1413// just one default column this time
1414ROOT::RDataFrame d2("myTree", "file.root", {"b1"});
1415auto d2f = d2.Filter([](double b2) { return b2 > 0; }, {"b2"}); // we can still specify non-default column lists
1416auto min = d2f.Min(); // returns the minimum value of "b1" for the filtered entries
1417auto vals = d2f.Take<double>(); // return the values for all entries passing the selection as a vector
1418~~~
1419
1420\anchor helper-cols
1421### Special helper columns: rdfentry_ and rdfslot_
1422Every instance of RDataFrame is created with two special columns called `rdfentry_` and `rdfslot_`. The `rdfentry_`
1423column is of type `ULong64_t` and it holds the current entry number while `rdfslot_` is an `unsigned int`
1424holding the index of the current data processing slot.
1425For backwards compatibility reasons, the names `tdfentry_` and `tdfslot_` are also accepted.
1426These columns are ignored by operations such as [Cache](classROOT_1_1RDF_1_1RInterface.html#aaaa0a7bb8eb21315d8daa08c3e25f6c9)
1427or [Snapshot](classROOT_1_1RDF_1_1RInterface.html#a233b7723e498967f4340705d2c4db7f8).
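
As a small illustration, the entry numbers seen by the event loop can be materialized like any other column; a minimal sketch:

~~~{.cpp}
// In single-thread runs these values match the dataset entry numbers.
auto entryNumbers = df.Take<ULong64_t>("rdfentry_");
~~~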
1428
1429\warning Note that in multi-thread event loops the values of `rdfentry_` _do not_ correspond to what would be the entry numbers
1430of a TChain constructed over the same set of ROOT files, as the entries are processed in an unspecified order.
1431
1432\anchor jitting
1433### Just-in-time compilation: column type inference and explicit declaration of column types
1434C++ is a statically typed language: all types must be known at compile-time. This includes the types of the TTree
1435branches we want to work on. For filters, defined columns and some of the actions, **column types are deduced from the
1436signature** of the relevant filter function/temporary column expression/action function:
1437~~~{.cpp}
1438// here b1 is deduced to be `int` and b2 to be `double`
1439df.Filter([](int x, double y) { return x > 0 && y < 0.; }, {"b1", "b2"});
1440~~~
1441If we specify an incorrect type for one of the columns, an exception with an informative message will be thrown at
1442runtime, when the column value is actually read from the dataset: RDataFrame detects type mismatches. The same would
1443happen if we swapped the order of "b1" and "b2" in the column list passed to Filter().
1444
1445Certain actions, on the other hand, do not take a function as argument (e.g. Histo1D()), so we cannot deduce the type of
1446the column at compile-time. In this case **RDataFrame infers the type of the column** from the TTree itself. This
1447is why we never needed to specify the column types for all actions in the above snippets.
1448
1449When the column type is not a common one such as `int`, `double`, `char` or `float`, it is nonetheless good practice to
1450specify it as a template parameter to the action itself, like this:
1451~~~{.cpp}
1452df.Histo1D("b1"); // OK, the type of "b1" is deduced at runtime
1453df.Min<MyNumber_t>("myObject"); // OK, "myObject" is deduced to be of type `MyNumber_t`
1454~~~
1455
1456Deducing types at runtime requires the just-in-time compilation of the relevant actions, which has a small runtime
1457overhead, so specifying the type of the columns as template parameters to the action is good practice when performance is a goal.
1458
1459When strings are passed as expressions to Filter() or Define(), fundamental types are passed as constants. This avoids certain common mistakes such as typing `x = 0` rather than `x == 0`:
1460
1461~~~{.cpp}
1462// this throws an error (note the typo)
1463df.Define("x", "0").Filter("x = 0");
1464~~~
1465
1466\anchor generic-actions
1467### User-defined custom actions
1468RDataFrame strives to offer a comprehensive set of standard actions that can be performed on each event. At the same
1469time, it allows users to inject their own action code to perform arbitrarily complex data reductions.
1470
1471#### Implementing custom actions with Book()
1472
1473Through the Book() method, users can implement a custom action and have access to the same features
1474that built-in RDataFrame actions have, e.g. hooks to events related to the start, end and execution of the
1475event loop, or the possibility to return a lazy RResultPtr to an arbitrary type of result:
1476
1477~~~{.cpp}
1478#include <ROOT/RDataFrame.hxx>
1479#include <memory>
#include <numeric>  // for std::accumulate, used in Finalize()
#include <iostream> // for std::cout, used in main()
1480
1481class MyCounter : public ROOT::Detail::RDF::RActionImpl<MyCounter> {
1482 std::shared_ptr<int> fFinalResult = std::make_shared<int>(0);
1483 std::vector<int> fPerThreadResults;
1484
1485public:
1486 // We use a public type alias to advertise the type of the result of this action
1487 using Result_t = int;
1488
1489 MyCounter(unsigned int nSlots) : fPerThreadResults(nSlots) {}
1490
1491 // Called before the event loop to retrieve the address of the result that will be filled/generated.
1492 std::shared_ptr<int> GetResultPtr() const { return fFinalResult; }
1493
1494 // Called at the beginning of the event loop.
1495 void Initialize() {}
1496
1497 // Called at the beginning of each processing task.
1498 void InitTask(TTreeReader *, int) {}
1499
1500 /// Called at every entry.
1501 void Exec(unsigned int slot)
1502 {
1503 fPerThreadResults[slot]++;
1504 }
1505
1506 // Called at the end of the event loop.
1507 void Finalize()
1508 {
1509 *fFinalResult = std::accumulate(fPerThreadResults.begin(), fPerThreadResults.end(), 0);
1510 }
1511
1512 // Called by RDataFrame to retrieve the name of this action.
1513 std::string GetActionName() const { return "MyCounter"; }
1514};
1515
1516int main() {
1517 ROOT::RDataFrame df(10);
1518 ROOT::RDF::RResultPtr<int> resultPtr = df.Book<>(MyCounter{df.GetNSlots()}, {});
1519 // The GetValue call triggers the event loop
1520 std::cout << "Number of processed entries: " << resultPtr.GetValue() << std::endl;
1521}
1522~~~
1523
1524See the Book() method for more information and [this tutorial](https://root.cern/doc/master/df018__customActions_8C.html)
1525for a more complete example.
1526
1527#### Injecting arbitrary code in the event loop with Foreach() and ForeachSlot()
1528
1529Foreach() takes a callable (lambda expression, free function, functor...) and a list of columns and
1530executes the callable on the values of those columns for each event that passes all upstream selections.
1531It can be used to perform actions that are not already available in the interface. For example, the following snippet
1532evaluates the root mean square of column "x":
1533~~~{.cpp}
1534// Single-thread evaluation of RMS of column "x" using Foreach
1535double sumSq = 0.;
1536unsigned int n = 0;
1537df.Foreach([&sumSq, &n](double x) { ++n; sumSq += x*x; }, {"x"});
1538std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1539~~~
1540In multi-thread runs, users are responsible for the thread-safety of the expression passed to Foreach():
1541each thread will execute the expression concurrently.
1542The code above would need to employ some resource protection mechanism to ensure non-concurrent writing of `sumSq` and `n`; but
1543this is probably too much head-scratching for such a simple operation.
1544
1545ForeachSlot() can help in this situation. It is an alternative version of Foreach() for which the function takes an
1546additional "processing slot" parameter besides the columns it should be applied to. RDataFrame
1547guarantees that ForeachSlot() will invoke the user expression with different `slot` parameters for different concurrent
1548executions (see [Special helper columns: rdfentry_ and rdfslot_](\ref helper-cols) for more information on the slot parameter).
1549We can take advantage of ForeachSlot() to evaluate a thread-safe root mean square of column "x":
1550~~~{.cpp}
1551// Thread-safe evaluation of RMS of column "x" using ForeachSlot
1552ROOT::EnableImplicitMT();
1553const unsigned int nSlots = df.GetNSlots();
1554std::vector<double> sumSqs(nSlots, 0.);
1555std::vector<unsigned int> ns(nSlots, 0);
1556
1557df.ForeachSlot([&sumSqs, &ns](unsigned int slot, double x) { sumSqs[slot] += x*x; ns[slot] += 1; }, {"x"});
1558double sumSq = std::accumulate(sumSqs.begin(), sumSqs.end(), 0.); // sum all squares
1559unsigned int n = std::accumulate(ns.begin(), ns.end(), 0); // sum all counts
1560std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1561~~~
1562Notice how we created one accumulator (`sumSqs[slot]`, `ns[slot]`) for each processing slot and later merged their results via `std::accumulate`.
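
For such simple reductions, the same result can also be obtained with built-in (and already thread-safe) actions; a minimal sketch, assuming a `double` column "x":

~~~{.cpp}
// Mean of x^2, evaluated with built-in actions instead of Foreach/ForeachSlot.
auto meanOfSquares = df.Define("x2", [](double x) { return x * x; }, {"x"}).Mean<double>("x2");
std::cout << "rms of x: " << std::sqrt(*meanOfSquares) << std::endl;
~~~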
1563
1564
1565\anchor friends
1566### Dataset joins with friend trees
1567
1568Vertically concatenating multiple trees that have the same columns (creating a logical dataset with the same columns and
1569more rows) is trivial in RDataFrame: just pass the tree name and a list of file names to RDataFrame's constructor, or create a TChain
1570out of the desired trees and pass that to RDataFrame.
1571
1572Horizontal concatenations of trees or chains (creating a logical dataset with the same number of rows and the union of the
1573columns of multiple trees) leverage TTree's "friend" mechanism.
1574
1575Simple joins of trees that do not have the same number of rows are also possible with indexed friend trees (see below).
1576
1577To use friend trees in RDataFrame, set up trees with the appropriate relationships and then instantiate an RDataFrame
1578with the main tree:
1579
1580~~~{.cpp}
1581TTree mainTree([...]);
1582TTree friendTree([...]);
1583mainTree.AddFriend(&friendTree, "myFriend");
1584
1585ROOT::RDataFrame df(mainTree);
1586auto df2 = df.Filter("myFriend.MyCol == 42");
1587~~~
1588
1589The same applies for TChains. Columns coming from the friend trees can be referred to by their full name, like in the example above,
1590or the friend tree name can be omitted in case the column name is not ambiguous (e.g. "MyCol" could be used instead of
1591"myFriend.MyCol" in the example above if there is no column "MyCol" in the main tree).
1592
1593\note A common source of confusion is that trees that are written out from a multi-thread Snapshot() call will have their
1594 entries (block-wise) shuffled with respect to the original tree. Such trees cannot be used as friends of the original
1595 one: rows will be mismatched.
1596
1597Indexed friend trees provide a way to perform simple joins of multiple trees over a common column.
1598When a certain entry in the main tree (or chain) is loaded, the friend trees (or chains) will then load an entry where the
1599"index" columns have a value identical to the one in the main one. For example, in Python:
1600
1601~~~{.py}
1602main_tree = ...
1603aux_tree = ...
1604
1605# If a friend tree has an index on `commonColumn`, when the main tree loads
1606# a given row, it also loads the row of the friend tree that has the same
1607# value of `commonColumn`
1608aux_tree.BuildIndex("commonColumn")
1609
1610main_tree.AddFriend(aux_tree)
1611
1612df = ROOT.RDataFrame(main_tree)
1613~~~
1614
1615RDataFrame supports indexed friend TTrees from ROOT v6.24 in single-thread mode and from v6.28/02 in multi-thread mode.
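
A minimal C++ sketch of the same indexed-friend setup (with hypothetical trees `mainTree` and `auxTree`):

~~~{.cpp}
// The friend is indexed on "commonColumn": whenever the main tree loads an entry,
// the friend entry with the matching value of "commonColumn" is loaded as well.
auxTree.BuildIndex("commonColumn");
mainTree.AddFriend(&auxTree);

ROOT::RDataFrame df(mainTree);
~~~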
1616
1617\anchor other-file-formats
1618### Reading data formats other than ROOT trees
1619RDataFrame can be interfaced with RDataSources. The ROOT::RDF::RDataSource interface defines an API that RDataFrame can use to read arbitrary columnar data formats.
1620
1621RDataFrame calls into concrete RDataSource implementations to retrieve information about the data, retrieve (thread-local) readers or "cursors" for selected columns
1622and to advance the readers to the desired data entry.
1623Some predefined RDataSources are natively provided by ROOT, such as ROOT::RDF::RCsvDS, which allows reading comma-separated values (CSV) files:
1624~~~{.cpp}
1625auto tdf = ROOT::RDF::FromCSV("MuRun2010B.csv");
1626auto filteredEvents =
1627 tdf.Filter("Q1 * Q2 == -1")
1628 .Define("m", "sqrt(pow(E1 + E2, 2) - (pow(px1 + px2, 2) + pow(py1 + py2, 2) + pow(pz1 + pz2, 2)))");
1629auto h = filteredEvents.Histo1D("m");
1630h->Draw();
1631~~~
1632
1633See also FromNumpy (Python-only), FromRNTuple(), FromArrow(), FromSqlite().
1634
1635\anchor callgraphs
1636### Computation graphs (storing and reusing sets of transformations)
1637
1638As we saw, transformed dataframes can be stored as variables and reused multiple times to create modified versions of the dataset. This implicitly defines a **computation graph** in which
1639several paths of filtering/creation of columns are executed simultaneously, and finally aggregated results are produced.
1640
1641RDataFrame detects when several actions use the same filter or the same defined column, and **only evaluates each
1642filter or defined column once per event**, regardless of how many times that result is used down the computation graph.
1643Objects read from each column are **built once and never copied**, for maximum efficiency.
1644When "upstream" filters are not passed, subsequent filters, temporary column expressions and actions are not evaluated,
1645so it might be advisable to put the strictest filters first in the graph.
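
As a minimal sketch (assuming columns "x" and "y"), booking two actions on the same filtered node means the filter expression runs only once per event:

~~~{.cpp}
auto filtered = df.Filter("x > 0"); // shared node
auto hx = filtered.Histo1D("x");    // first action downstream of the filter
auto hy = filtered.Histo1D("y");    // second action: the filter result is reused, not recomputed
~~~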
1646
1647\anchor representgraph
1648### Visualizing the computation graph
1649It is possible to print the computation graph from any node to obtain a [DOT (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) representation either on the standard output
1650or in a file.
1651
1652Invoking the function ROOT::RDF::SaveGraph() on any node that is not the head node prints the computation graph of the branch
1653that node belongs to. Invoking it on the head node prints the entire computation graph.
1654
1655Here is an example of usage:
1656~~~{.cpp}
1657// First, a sample computational graph is built
1658ROOT::RDataFrame df("tree", "f.root");
1659
1660auto df2 = df.Define("x", []() { return 1; })
1661 .Filter("col0 % 1 == col0")
1662             .Filter([](int b1) { return b1 < 2; }, {"cut1"})
1663 .Define("y", []() { return 1; });
1664
1665auto count = df2.Count();
1666
1667// Prints the graph to the mydot.dot file in the current directory
1668ROOT::RDF::SaveGraph(df, "./mydot.dot");
1669// Prints the graph to standard output
1670ROOT::RDF::SaveGraph(df);
1671~~~
1672
1673The generated graph can be rendered using one of the graphviz filters, e.g. `dot`. For instance, the image below can be generated with the following command:
1674~~~{.sh}
1675$ dot -Tpng mydot.dot -o mydot.png
1676~~~
1677
1678\image html RDF_Graph2.png
1679
1680\anchor rdf-logging
1681### Activating RDataFrame execution logs
1682
1683RDataFrame has experimental support for verbose logging of the event loop runtimes and other interesting related information. It is activated as follows:
1684~~~{.cpp}
1685#include <ROOT/RLogger.hxx>
1686
1687// this increases RDF's verbosity level as long as the `verbosity` variable is in scope
1688auto verbosity = ROOT::RLogScopedVerbosity(ROOT::Detail::RDF::RDFLogChannel(), ROOT::ELogLevel::kInfo);
1689~~~
1690
1691or in Python:
1692~~~{.python}
1693import ROOT
1694
1695verbosity = ROOT.RLogScopedVerbosity(ROOT.Detail.RDF.RDFLogChannel(), ROOT.ELogLevel.kInfo)
1696~~~
1697
1698More information (e.g. start and end of each multi-thread task) is printed using `ELogLevel.kDebug` and even more
1699(e.g. a full dump of the generated code that RDataFrame just-in-time-compiles) using `ELogLevel.kDebug+10`.
1700
1701\anchor rdf-from-spec
1702### Creating an RDataFrame from a dataset specification file
1703
1704RDataFrame can be created using a dataset specification JSON file:
1705
1706~~~{.python}
1707import ROOT
1708
1709df = ROOT.RDF.Experimental.FromSpec("spec.json")
1710~~~
1711
1712The input dataset specification JSON file needs to be provided by the user and it describes all necessary samples and
1713their associated metadata information. The main required key is "samples" (at least one sample is needed) and the
1714required sub-keys for each sample are "trees" and "files". Additionally, one can specify a metadata dictionary for each
1715sample in the "metadata" key.
1716
1717A simple example for the formatting of the specification in the JSON file is the following:
1718
1719~~~{.cpp}
1720{
1721 "samples": {
1722 "sampleA": {
1723 "trees": ["tree1", "tree2"],
1724 "files": ["file1.root", "file2.root"],
1725 "metadata": {
1726 "lumi": 10000.0,
1727 "xsec": 1.0,
1728         "sample_category": "data"
1729 }
1730 },
1731 "sampleB": {
1732 "trees": ["tree3", "tree4"],
1733 "files": ["file3.root", "file4.root"],
1734 "metadata": {
1735 "lumi": 0.5,
1736 "xsec": 1.5,
1737         "sample_category": "MC_background"
1738 }
1739 }
1740 }
1741}
1742~~~
1743
1744The metadata information from the specification file can then be accessed using the DefinePerSample() function.
1745For example, to access luminosity information (stored as a double):
1746
1747~~~{.python}
1748df.DefinePerSample("lumi", 'rdfsampleinfo_.GetD("lumi")')
1749~~~
1750
1751or sample_category information (stored as a string):
1752
1753~~~{.python}
1754df.DefinePerSample("sample_category", 'rdfsampleinfo_.GetS("sample_category")')
1755~~~
1756
1757or directly the filename:
1758
1759~~~{.python}
1760df.DefinePerSample("name", "rdfsampleinfo_.GetSampleName()")
1761~~~
1762
1763An example usage of the FromSpec() function is available in the tutorial df106_HiggstoFourLeptons.py, which also
1764provides a corresponding example JSON file for the dataset specification.
1765
1766\anchor progressbar
1767### Adding a progress bar
1768
1769A progress bar showing the processed event statistics can be added to any RDataFrame program.
1770The event statistics include the elapsed time, the currently processed file, the number of processed events, the rate of event processing
1771and an estimated remaining time (per file being processed). The information is recorded and printed in the terminal every m events and every
1772n seconds (by default m = 1000 and n = 1). The ProgressBar can also be added when multi-threading (MT) is enabled.
1773
1774ProgressBar is added after creating the dataframe object (df):
1775~~~{.cpp}
1776ROOT::RDataFrame df("tree", "file.root");
1777ROOT::RDF::Experimental::AddProgressBar(df);
1778~~~
1779
1780Alternatively, RDataFrame can be cast to an RNode first, giving the user more flexibility.
1781For example, the progress bar can be added at any computational node, such as a Filter or a Define, not only at the head node,
1782with no change to the AddProgressBar function itself (please see the [Python interface](classROOT_1_1RDataFrame.html#python)
1783section for appropriate usage in Python):
1784~~~{.cpp}
1785ROOT::RDataFrame df("tree", "file.root");
1786auto df_1 = ROOT::RDF::RNode(df.Filter("x>1"));
1787ROOT::RDF::Experimental::AddProgressBar(df_1);
1788~~~
1789Examples of implemented progress bars can be seen by running [Higgs to Four Lepton tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8py_source.html) and [Dimuon tutorial](https://root.cern/doc/master/df102__NanoAODDimuonAnalysis_8C.html).
1790
1791\anchor missing-values
1792### Working with missing values in the dataset
1793
1794In certain situations a dataset might be missing one or more values at one or
1795more of its entries. For example:
1796
1797- If the dataset is composed of multiple files and one or more files is
1798 missing one or more columns required by the analysis.
1799- When joining different datasets horizontally according to some index value
1800 (e.g. the event number), if the index does not find a match in one or more
1801 other datasets for a certain entry.
1802- If, for a certain event, a column is invalid because it results from a Snapshot
1803 with systematic variations, and that variation didn't pass its filters. For
1804 more details, see \ref snapshot-with-variations.
1805
1806For example, suppose that column "y" does not have a value for entry 42:
1807
1808\code{.unparsed}
1809+-------+---+---+
1810| Entry | x | y |
1811+-------+---+---+
1812| 42 | 1 | |
1813+-------+---+---+
1814\endcode
1815
1816If the RDataFrame application reads that column, for example if a Take() action
1817was requested, the default behaviour is to throw an exception indicating
1818that the column has a missing value for that entry.
1819
1820The following paragraphs discuss the functionalities provided by RDataFrame to
1821work with missing values in the dataset.
1822
1823#### FilterAvailable and FilterMissing
1824
1825FilterAvailable and FilterMissing are specialized RDataFrame Filter operations.
1826They take as input argument the name of a column of the dataset to watch for
1827missing values. Like Filter, they will either keep or discard an entire entry
1828based on whether a condition returns true or false. Specifically:
1829
1830- FilterAvailable: the condition is whether the value of the column is present.
1831 If so, the entry is kept. Otherwise if the value is missing the entry is
1832 discarded.
1833- FilterMissing: the condition is whether the value of the column is missing. If
1834 so, the entry is kept. Otherwise if the value is present the entry is
1835 discarded.
1836
1837\code{.py}
1838df = ROOT.RDataFrame(dataset)
1839
1840# Anytime an entry from "col" is missing, the entire entry will be filtered out
1841df_available = df.FilterAvailable("col")
1842df_available = df_available.Define("twice", "col * 2")
1843
1844# Conversely, if we want to select the entries for which the column has missing
1845# values, we do the following
1846df_missingcol = df.FilterMissing("col")
1847# Following operations in the same branch of the computation graph clearly
1848# cannot access that same column, since there would be no value to read
1849df_missingcol = df_missingcol.Define("observable", "othercolumn * 2")
1850\endcode
1851
1852\code{.cpp}
1853ROOT::RDataFrame df{dataset};
1854
1855// Anytime an entry from "col" is missing, the entire entry will be filtered out
1856auto df_available = df.FilterAvailable("col");
1857auto df_twicecol = df_available.Define("twice", "col * 2");
1858
1859// Conversely, if we want to select the entries for which the column has missing
1860// values, we do the following
1861auto df_missingcol = df.FilterMissing("col");
1862// Following operations in the same branch of the computation graph clearly
1863// cannot access that same column, since there would be no value to read
1864auto df_observable = df_missingcol.Define("observable", "othercolumn * 2");
1865\endcode
1866
1867#### DefaultValueFor
1868
1869DefaultValueFor creates a node of the computation graph which just forwards the
1870values of the columns necessary for other downstream nodes, when they are
1871available. In case a value of the input column passed to this function is not
1872available, the node will provide the default value passed to this function call
1873instead. Example:
1874
1875\code{.py}
1876df = ROOT.RDataFrame(dataset)
1877# Anytime an entry from "col" is missing, the value will be the default one
1878default_value = ... # Some sensible default value here
1879df = df.DefaultValueFor("col", default_value)
1880df = df.Define("twice", "col * 2")
1881\endcode
1882
1883\code{.cpp}
1884ROOT::RDataFrame df{dataset};
1885// Anytime an entry from "col" is missing, the value will be the default one
1886constexpr auto default_value = ... // Some sensible default value here
1887auto df_default = df.DefaultValueFor("col", default_value);
1888auto df_col = df_default.Define("twice", "col * 2");
1889\endcode
1890
1891#### Mixing different strategies to work with missing values in the same RDataFrame
1892
1893All the operations presented above only act on the particular branch of the
1894computation graph where they are called, so that different results can be
1895obtained by mixing and matching the filtering or providing a default value
1896strategies:
1897
1898\code{.py}
1899df = ROOT.RDataFrame(dataset)
1900# Anytime an entry from "col" is missing, the value will be the default one
1901default_value = ... # Some sensible default value here
1902df_default = df.DefaultValueFor("col", default_value).Define("twice", "col * 2")
1903df_filtered = df.FilterAvailable("col").Define("twice", "col * 2")
1904
1905# Same number of total entries as the input dataset, with defaulted values
1906df_default.Display(["twice"]).Print()
1907# Only keep the entries where "col" has values
1908df_filtered.Display(["twice"]).Print()
1909\endcode
1910
1911\code{.cpp}
1912ROOT::RDataFrame df{dataset};
1913
1914// Anytime an entry from "col" is missing, the value will be the default one
1915constexpr auto default_value = ... // Some sensible default value here
1916auto df_default = df.DefaultValueFor("col", default_value).Define("twice", "col * 2");
1917auto df_filtered = df.FilterAvailable("col").Define("twice", "col * 2");
1918
1919// Same number of total entries as the input dataset, with defaulted values
1920df_default.Display({"twice"})->Print();
1921// Only keep the entries where "col" has values
1922df_filtered.Display({"twice"})->Print();
1923\endcode
1924
1925#### Further considerations
1926
1927Note that working with missing values is currently only supported with TTree-based
1928data sources. Support of this functionality for other data sources may come in
1929the future.
1930
1931\anchor special-values
1932### Dealing with NaN or Inf values in the dataset
1933
1934RDataFrame applies no special treatment to NaNs or infinities beyond what the floating-point standard prescribes, i.e. they will
1935propagate to the final result.
1936Non-finite numbers can be suppressed using Filter(), e.g.:
1937
1938\code{.py}
1939df.Filter("std::isfinite(x)").Mean("x")
1940\endcode
1941
1942\anchor rosetta-stone
1943### Translating TTree::Draw to RDataFrame
1944
1945<table>
1946<tr>
1947 <td>
1948 <b>TTree::Draw</b>
1949 </td>
1950 <td>
1951 <b>ROOT::RDataFrame</b>
1952 </td>
1953</tr>
1954<tr>
1955 <td>
1956~~~{.cpp}
1957// Get the tree and Draw a histogram of x for selected y values
1958auto *tree = file->Get<TTree>("myTree");
1959tree->Draw("x", "y > 2");
1960~~~
1961 </td>
1962 <td>
1963~~~{.cpp}
1964ROOT::RDataFrame df("myTree", file);
1965df.Filter("y > 2").Histo1D("x")->Draw();
1966~~~
1967 </td>
1968</tr>
1969<tr>
1970 <td>
1971~~~{.cpp}
1972// Draw a histogram of "jet_eta" with the desired weight
1973tree->Draw("jet_eta", "weight*(event == 1)");
1974~~~
1975 </td>
1976 <td>
1977~~~{.cpp}
1978df.Filter("event == 1").Histo1D("jet_eta", "weight")->Draw();
1979~~~
1980 </td>
1981</tr>
1982<tr>
1983 <td>
1984~~~{cpp}
1985// Draw a histogram filled with values resulting from calling a method of the class of the `event` branch in the TTree.
1986tree->Draw("event.GetNtrack()");
1987
1988~~~
1989 </td>
1990 <td>
1991~~~{cpp}
1992df.Define("NTrack","event.GetNtrack()").Histo1D("NTrack")->Draw();
1993~~~
1994 </td>
1995</tr>
1996<tr>
1997 <td>
1998~~~{cpp}
1999// Draw only every 10th event
2000tree->Draw("fNtrack","fEvtHdr.fEvtNum%10 == 0");
2001~~~
2002 </td>
2003 <td>
2004~~~{cpp}
2005// Use the Filter operation together with the special RDF column: `rdfentry_`
2006df.Filter("rdfentry_ % 10 == 0").Histo1D("fNtrack")->Draw();
2007~~~
2008 </td>
2009</tr>
2010<tr>
2011 <td>
2012~~~{cpp}
2013// object selection: for each event, fill histogram with array of selected pts
2014tree->Draw("Muon_pt", "Muon_pt > 100");
2015~~~
2016 </td>
2017 <td>
2018~~~{cpp}
2019// with RDF, arrays are read as ROOT::VecOps::RVec objects
2020df.Define("good_pt", "Muon_pt[Muon_pt > 100]").Histo1D("good_pt")->Draw();
2021~~~
2022 </td>
2023</tr>
2025<tr>
2026 <td>
2027~~~{cpp}
2028// Draw the histogram and fill hnew with it
2029tree->Draw("sqrt(x)>>hnew","y>0");
2030
2031// Retrieve hnew from the current directory
2032auto hnew = gDirectory->Get<TH1F>("hnew");
2033~~~
2034 </td>
2035 <td>
2036~~~{cpp}
2037// We pass histogram constructor arguments to the Histo1D operation, to easily give the histogram a name
2038auto hist = df.Define("sqrt_x", "sqrt(x)").Filter("y>0").Histo1D({"hnew","hnew", 10, 0, 10}, "sqrt_x");
2039~~~
2040 </td>
2041</tr>
2042<tr>
2043 <td>
2044~~~{cpp}
2045// Draw a 1D Profile histogram instead of TH2F
2046tree->Draw("y:x","","prof");
2047
2048// Draw a 2D Profile histogram instead of TH3F
2049tree->Draw("z:y:x","","prof");
2050~~~
2051 </td>
2052 <td>
2053~~~{cpp}
2054
2055// Draw a 1D Profile histogram
2056df.Profile1D("x", "y")->Draw();
2057
2058// Draw a 2D Profile histogram
2059df.Profile2D("x", "y", "z")->Draw();
2060~~~
2061 </td>
2062</tr>
2063<tr>
2064 <td>
2065~~~{cpp}
2066// This command draws 2 entries starting with entry 5
2067tree->Draw("x", "","", 2, 5);
2068~~~
2069 </td>
2070 <td>
2071~~~{cpp}
2072// Range function with arguments begin, end
2073df.Range(5,7).Histo1D("x")->Draw();
2074~~~
2075 </td>
2076</tr>
2077<tr>
2078 <td>
2079~~~{cpp}
2080// Draw the X() component of the
2081// ROOT::Math::DisplacementVector3D in vec_list
2082tree->Draw("vec_list.X()");
2083~~~
2084 </td>
2085 <td>
2086~~~{cpp}
2087df.Define("x", "ROOT::RVecD out; for(const auto &el: vec_list) out.push_back(el.X()); return out;").Histo1D("x")->Draw();
2088~~~
2089 </td>
2090</tr>
2091<tr>
2092 <td>
2093~~~{cpp}
2094// Gather all values from a branch holding a collection per event, `pt`,
2095// and fill a histogram so that we can count the total number of values across all events
2096tree->Draw("pt>>histo");
2097auto histo = gDirectory->Get<TH1F>("histo");
2098histo->GetEntries();
2099~~~
2100 </td>
2101 <td>
2102~~~{cpp}
2103df.Histo1D("pt")->GetEntries();
2104~~~
2105 </td>
2106</tr>
2107</table>
2108
2109*/
2110// clang-format on
2111
2112namespace ROOT {
2113
2115using ColumnNamesPtr_t = std::shared_ptr<const ColumnNames_t>;
2116
2117////////////////////////////////////////////////////////////////////////////
2118/// \brief Build the dataframe.
2119/// \param[in] treeName Name of the tree contained in the directory
2120/// \param[in] dirPtr TDirectory where the tree is stored, e.g. a TFile.
2121/// \param[in] defaultColumns Collection of default columns.
2122///
2123/// The default columns are looked at in case no column is specified in the
2124/// booking of actions or transformations.
2125/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2126RDataFrame::RDataFrame(std::string_view treeName, TDirectory *dirPtr, const ColumnNames_t &defaultColumns)
2127 : RInterface(std::make_shared<RDFDetail::RLoopManager>(
2128 std::make_unique<ROOT::Internal::RDF::RTTreeDS>(treeName, dirPtr), defaultColumns))
2129{
2130}
2131
2132////////////////////////////////////////////////////////////////////////////
2133/// \brief Build the dataframe.
2134/// \param[in] treeName Name of the tree contained in the file(s)
2135/// \param[in] fileNameGlob Path or glob expression of the file(s) where the tree is stored.
2136/// \param[in] defaultColumns Collection of default columns.
2137///
2138/// The filename glob supports the same type of expressions as TChain::Add(), and it is passed as-is to TChain's
2139/// constructor.
2140///
2141/// The default columns are looked at in case no column is specified in the
2142/// booking of actions or transformations.
2143/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2144RDataFrame::RDataFrame(std::string_view treeName, std::string_view fileNameGlob, const ColumnNames_t &defaultColumns)
2145 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(treeName, fileNameGlob, defaultColumns))
2146{
2147}
2148
2149////////////////////////////////////////////////////////////////////////////
2150/// \brief Build the dataframe.
2151/// \param[in] datasetName Name of the dataset contained in the file(s)
2152/// \param[in] fileNameGlobs Collection of file names or filename globs
2153/// \param[in] defaultColumns Collection of default columns.
2154///
2155/// The filename globs support the same type of expressions as TChain::Add(), and each glob is passed as-is
2156/// to TChain's constructor.
2157///
2158/// The default columns are looked at in case no column is specified in the booking of actions or transformations.
2159/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2160RDataFrame::RDataFrame(std::string_view datasetName, const std::vector<std::string> &fileNameGlobs,
2161                       const ColumnNames_t &defaultColumns)
2162   : RInterface(ROOT::Detail::RDF::CreateLMFromFile(datasetName, fileNameGlobs, defaultColumns))
2163{
2164}
2165
2166////////////////////////////////////////////////////////////////////////////
2167/// \brief Build the dataframe.
2168/// \param[in] tree The tree or chain to be studied.
2169/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
2170///
2171/// The default columns are looked at in case no column is specified in the
2172/// booking of actions or transformations.
2173/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2178
2179//////////////////////////////////////////////////////////////////////////
2180/// \brief Build a dataframe that generates numEntries entries.
2181/// \param[in] numEntries The number of entries to generate.
2182///
2183/// An empty-source dataframe constructed with a number of entries will
2184/// generate those entries on the fly when some action is triggered,
2185/// and it will do so for all the previously-defined columns.
2186/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2187RDataFrame::RDataFrame(ULong64_t numEntries)
2188   : RInterface(std::make_shared<RDFDetail::RLoopManager>(numEntries))
2189
2190{
2191}
2192
2193//////////////////////////////////////////////////////////////////////////
2194/// \brief Build dataframe associated to data source.
2195/// \param[in] ds The data source object.
2196/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
2197///
2198/// A dataframe associated to a data source will query it to access column values.
2199/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
2200RDataFrame::RDataFrame(std::unique_ptr<ROOT::RDF::RDataSource> ds, const ColumnNames_t &defaultColumns)
2201 : RInterface(std::make_shared<RDFDetail::RLoopManager>(std::move(ds), defaultColumns))
2202{
2203}
2204
2205//////////////////////////////////////////////////////////////////////////
2206/// \brief Build dataframe from an RDatasetSpec object.
2207/// \param[in] spec The dataset specification object.
2208///
2209/// A dataset specification includes trees and file names,
2210/// as well as an optional friend list and/or entry range.
2211///
2212/// ### Example usage from Python:
2213/// ~~~{.py}
2214/// spec = (
2215/// ROOT.RDF.Experimental.RDatasetSpec()
2216/// .AddSample(("data", "tree", "file.root"))
2217/// .WithGlobalFriends("friendTree", "friend.root", "alias")
2218/// .WithGlobalRange((100, 200))
2219/// )
2220/// df = ROOT.RDataFrame(spec)
2221/// ~~~
2222///
2223/// See also ROOT::RDataFrame::FromSpec().
2228
2229RDataFrame::~RDataFrame()
2230{
2231 // If any node of the computation graph associated with this RDataFrame
2232 // declared code to jit, we need to make sure the compilation actually
2233 // happens. For example, a jitted Define could have been booked but
2234 // if the computation graph is not actually run then the code of the
2235 // Define node is not jitted. This in turn would cause memory leaks.
2236 // See https://github.com/root-project/root/issues/15399
2237 fLoopManager->Jit();
2238}
2239
2240namespace RDF {
2241namespace Experimental {
2242
2243////////////////////////////////////////////////////////////////////////////
2244/// \brief Create the RDataFrame from the dataset specification file.
2245/// \param[in] jsonFile Path to the dataset specification JSON file.
2246///
2247/// The input dataset specification JSON file must describe all the samples to be
2248/// processed, together with their associated metadata. The top-level key "samples"
2249/// is required and must contain at least one sample. Each sample must in turn
2250/// provide the keys "trees" and "files", which specify where the data is read
2251/// from. Optionally, per-sample metadata can be added, as well as the friend
2252/// list information.
2253///
2254/// ### Example dataset specification JSON file:
2255/// The following is an example of the expected format:
2256///~~~{.cpp}
2257/// {
2258///    "samples": {
2259///       "sampleA": {
2260///          "trees": ["tree1", "tree2"],
2261///          "files": ["file1.root", "file2.root"],
2262///          "metadata": {"lumi": 1.0}
2263///       },
2264///       "sampleB": {
2265///          "trees": ["tree3", "tree4"],
2266///          "files": ["file3.root", "file4.root"],
2267///          "metadata": {"lumi": 0.5}
2268///       },
2269///       ...
2270///    }
2271/// }
2272///~~~
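///
/// Given such a file, the dataframe can then be constructed as follows (the file name and
/// column name are illustrative):
/// ~~~{.cpp}
/// auto df = ROOT::RDF::Experimental::FromSpec("spec.json");
/// auto h = df.Histo1D("x"); // "x" must be a column of the trees listed in the specification
/// ~~~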
2273ROOT::RDataFrame FromSpec(const std::string &jsonFile)
2274{
2275   return ROOT::RDataFrame(ROOT::Internal::RDF::RetrieveSpecFromJson(jsonFile));
2276}
2277
2278} // namespace Experimental
2279} // namespace RDF
2280
2281} // namespace ROOT
2282
2283namespace cling {
2284//////////////////////////////////////////////////////////////////////////
2285/// Print an RDataFrame at the prompt
2286std::string printValue(ROOT::RDataFrame *df)
2287{
2288 // The loop manager is never null, except when its construction failed.
2289 // This can happen e.g. if the constructor of RLoopManager that expects
2290 // a file name is used and that file doesn't exist. This point is usually
2291 // not even reached in that situation, since the exception thrown by the
2292 // constructor will also stop execution of the program. But it can still
2293 // be reached at the prompt, if the user tries to print the RDataFrame
2294 // variable after an incomplete initialization.
2295 auto *lm = df->GetLoopManager();
2296 if (!lm) {
2297 throw std::runtime_error("Cannot print information about this RDataFrame, "
2298 "it was not properly created. It must be discarded.");
2299 }
2300 auto defCols = lm->GetDefaultColumnNames();
2301
2302 std::ostringstream ret;
2303 if (auto ds = df->GetDataSource()) {
2304 ret << "A data frame associated to the data source \"" << cling::printValue(ds) << "\"";
2305 } else {
2306 ret << "An empty data frame that will create " << lm->GetNEmptyEntries() << " entries\n";
2307 }
2308
2309 return ret.str();
2310}
2311} // namespace cling