RDataFrame.cxx
1// Author: Enrico Guiraud, Danilo Piparo CERN 12/2016
2
3/*************************************************************************
4 * Copyright (C) 1995-2018, Rene Brun and Fons Rademakers. *
5 * All rights reserved. *
6 * *
7 * For the licensing terms see $ROOTSYS/LICENSE. *
8 * For the list of contributors see $ROOTSYS/README/CREDITS. *
9 *************************************************************************/
10
12#include "ROOT/RDataFrame.hxx"
13#include "ROOT/RDataSource.hxx"
17#include "ROOT/RDF/Utils.hxx"
18#include <string_view>
19#include "TChain.h"
20#include "TDirectory.h"
21#include "RtypesCore.h" // for ULong64_t
22#include "TTree.h"
23
24#include <fstream> // std::ifstream
25#include <nlohmann/json.hpp> // nlohmann::json::parse
26#include <memory> // for make_shared, allocator, shared_ptr
27#include <ostream> // ostringstream
28#include <stdexcept>
29#include <string>
30#include <vector>
31
32// clang-format off
33/**
34* \class ROOT::RDataFrame
35* \ingroup dataframe
36* \brief ROOT's RDataFrame offers a modern, high-level interface for analysis of data stored in TTree, CSV and other data formats, in C++ or Python.
37
38In addition, multi-threading and other low-level optimisations allow users to exploit all the resources available
39on their machines completely transparently.<br>
40Skip to the [class reference](#reference) or keep reading for the user guide.
41
42In a nutshell:
43~~~{.cpp}
44ROOT::EnableImplicitMT(); // Tell ROOT you want to go parallel
45ROOT::RDataFrame d("myTree", "file_*.root"); // Interface to TTree and TChain
46auto myHisto = d.Histo1D("Branch_A"); // This books the (lazy) filling of a histogram
47myHisto->Draw(); // Event loop is run here, upon first access to a result
48~~~
49
50Calculations are expressed in terms of a type-safe *functional chain of actions and transformations*; RDataFrame takes
51care of their execution. The implementation automatically puts in place several low-level optimisations such as
52multi-thread parallelization and caching.
53
54\htmlonly
55<a href="https://doi.org/10.5281/zenodo.260230"><img src="https://zenodo.org/badge/DOI/10.5281/zenodo.260230.svg"
56alt="DOI"></a>
57\endhtmlonly
58
59## For the impatient user
60You can directly see RDataFrame in action in our [tutorials](https://root.cern/doc/master/group__tutorial__dataframe.html), in C++ or Python.
61
62## Table of Contents
63- [Cheat sheet](\ref cheatsheet)
64- [Introduction](\ref rdf_intro)
65- [Crash course](\ref crash-course)
66- [Working with collections](\ref working_with_collections)
67- [Transformations: manipulating data](\ref transformations)
68- [Actions: getting results](\ref actions)
69- [Distributed execution in Python](\ref rdf_distrdf)
70- [Performance tips and parallel execution](\ref parallel-execution)
71- [More features](\ref more-features)
72 - [Systematic variations](\ref systematics)
73 - [RDataFrame objects as function arguments and return values](\ref rnode)
74 - [Storing RDataFrame objects in collections](\ref RDFCollections)
75 - [Executing callbacks every N events](\ref callbacks)
76 - [Default column lists](\ref default-branches)
77 - [Special helper columns: `rdfentry_` and `rdfslot_`](\ref helper-cols)
78 - [Just-in-time compilation: column type inference and explicit declaration of column types](\ref jitting)
79 - [User-defined custom actions](\ref generic-actions)
80 - [Dataset joins with friend trees](\ref friends)
81 - [Reading data formats other than ROOT trees](\ref other-file-formats)
82 - [Computation graphs (storing and reusing sets of transformations)](\ref callgraphs)
83 - [Visualizing the computation graph](\ref representgraph)
84 - [Activating RDataFrame execution logs](\ref rdf-logging)
85 - [Creating an RDataFrame from a dataset specification file](\ref rdf-from-spec)
86 - [Adding a progress bar](\ref progressbar)
87 - [Working with missing values in the dataset](\ref missing-values)
88- [Python interface](classROOT_1_1RDataFrame.html#python)
89- <a class="el" href="classROOT_1_1RDataFrame.html#reference" onclick="javascript:toggleInherit('pub_methods_classROOT_1_1RDF_1_1RInterface')">Class reference</a>
90
91\anchor cheatsheet
92## Cheat sheet
93These are the operations which can be performed with RDataFrame.
94
95### Transformations
96Transformations are a way to manipulate the data.
97
98| **Transformation** | **Description** |
99|------------------|--------------------|
100| Alias() | Introduce an alias for a particular column name. |
101| DefaultValueFor() | If the value of the input column is missing, provide a default value instead. |
102| Define() | Create a new column in the dataset. Example usages include adding a column that contains the invariant mass of a particle, or a selection of elements of an array (e.g. only the `pt`s of "good" muons). |
103| DefinePerSample() | Define a new column that is updated when the input sample changes, e.g. when switching tree being processed in a chain. |
104| DefineSlot() | Same as Define(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, `0` to `nThreads - 1`, for each thread of execution. This is meant as a helper in writing thread-safe Define() transformations when using RDataFrame after ROOT::EnableImplicitMT(). DefineSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
105| DefineSlotEntry() | Same as DefineSlot(), but the entry number is passed in addition to the slot number. This is meant as a helper in case the expression depends on the entry number. For details about entry numbers in multi-threaded runs, see [here](\ref helper-cols). |
106| Filter() | Filter rows based on user-defined conditions. |
107| FilterAvailable() | Specialized Filter. If the value of the input column is available, keep the entry, otherwise discard it. |
108| FilterMissing() | Specialized Filter. If the value of the input column is missing, keep the entry, otherwise discard it. |
109| Range() | Filter rows based on entry number (single-thread only). |
110| Redefine() | Overwrite the value and/or type of an existing column. See Define() for more information. |
111| RedefineSlot() | Overwrite the value and/or type of an existing column. See DefineSlot() for more information. |
112| RedefineSlotEntry() | Overwrite the value and/or type of an existing column. See DefineSlotEntry() for more information. |
113| Vary() | Register systematic variations for an existing column. Varied results are then extracted via VariationsFor(). |
114
115
116### Actions
117Actions aggregate data into a result. Each one is described in more detail in the reference guide.
118
119In the following, whenever we say an action "returns" something, we always mean it returns a smart pointer to it. Actions only act on events that pass all preceding filters.
120
121Lazy actions only trigger the event loop when one of the results is accessed for the first time, making it easy to
122produce many different results in one event loop. Instant actions trigger the event loop instantly.
123
124
125| **Lazy action** | **Description** |
126|------------------|-----------------|
127| Aggregate() | Execute a user-defined accumulation operation on the processed column values. |
128| Book() | Book execution of a custom action using a user-defined helper object. |
129| Cache() | Cache column values in memory. Custom columns can be cached as well; filtered entries are not cached. Users can specify which columns to save (default is all). |
130| Count() | Return the number of events processed. Useful e.g. to get a quick count of the number of events passing a Filter. |
131| Display() | Provides a printable representation of the dataset contents. The method returns a ROOT::RDF::RDisplay() instance which can print a tabular representation of the data or return it as a string. |
132| Fill() | Fill a user-defined object with the values of the specified columns, as if by calling `Obj.Fill(col1, col2, ...)`. |
133| Graph() | Fills a TGraph with the two columns provided. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
134| GraphAsymmErrors() | Fills a TGraphAsymmErrors. Should be used for any type of graph with errors, including cases with errors on one of the axes only. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
135| Histo1D(), Histo2D(), Histo3D() | Fill a one-, two-, three-dimensional histogram with the processed column values. |
136| HistoND() | Fill an N-dimensional histogram with the processed column values. |
137| Max() | Return the maximum of processed column values. If the type of the column is inferred, the return type is `double`; otherwise it is the type of the column. |
138| Mean() | Return the mean of processed column values.|
139| Min() | Return the minimum of processed column values. If the type of the column is inferred, the return type is `double`; otherwise it is the type of the column. |
140| Profile1D(), Profile2D() | Fill a one- or two-dimensional profile with the column values that passed all filters. |
141| Reduce() | Reduce (e.g. sum, merge) entries using the function (lambda, functor...) passed as argument. The function must have signature `T(T,T)` where `T` is the type of the column. Return the final result of the reduction operation. An optional parameter allows initialization of the result object to non-default values. |
142| Report() | Obtain statistics on how many entries have been accepted and rejected by the filters. See the section on [named filters](#named-filters-and-cutflow-reports) for a more detailed explanation. The method returns a ROOT::RDF::RCutFlowReport instance which can be queried programmatically to get information about the effects of the individual cuts. |
143| Stats() | Return a TStatistic object filled with the input columns. |
144| StdDev() | Return the unbiased standard deviation of the processed column values. |
145| Sum() | Return the sum of the values in the column. If the type of the column is inferred, the return type is `double`; otherwise it is the type of the column. |
146| Take() | Extract a column from the dataset as a collection of values, e.g. a `std::vector<float>` for a column of type `float`. |
147
148| **Instant action** | **Description** |
149|---------------------|-----------------|
150| Foreach() | Execute a user-defined function on each entry. Users are responsible for the thread-safety of this callable when executing with implicit multi-threading enabled. |
151| ForeachSlot() | Same as Foreach(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, `0` to `nThreads - 1`, for each thread of execution. This is meant as a helper in writing thread-safe Foreach() actions when using RDataFrame after ROOT::EnableImplicitMT(). ForeachSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
152| Snapshot() | Write the processed dataset to disk, in a new TTree and TFile. Custom columns can be saved as well; filtered entries are not saved. Users can specify which columns to save (default is all). Snapshot(), by default, overwrites the output file if it already exists. Snapshot() can be made *lazy* by setting the appropriate flag in the snapshot options. |
153
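As a minimal sketch of the lazy Snapshot mentioned above, using ROOT::RDF::RSnapshotOptions (here `df` is assumed to be an existing RDataFrame and "x" one of its columns):

~~~{.cpp}
ROOT::RDF::RSnapshotOptions opts;
opts.fLazy = true; // book the Snapshot without triggering the event loop
auto snap = df.Snapshot("outTree", "out.root", {"x"}, opts);
auto h = df.Histo1D("x");
h->Draw(); // a single event loop produces both the histogram and the output file
~~~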
154
155### Queries
156
157These operations do not modify the dataframe or book computations but simply return information on the RDataFrame object.
158
159| **Operation** | **Description** |
160|---------------------|-----------------|
161| Describe() | Get useful information describing the dataframe, e.g. columns and their types. |
162| GetColumnNames() | Get the names of all the available columns of the dataset. |
163| GetColumnType() | Return the type of a given column as a string. |
164| GetColumnTypeNamesList() | Return the list of type names of columns in the dataset. |
165| GetDefinedColumnNames() | Get the names of all the defined columns. |
166| GetFilterNames() | Return the names of all filters in the computation graph. |
167| GetNRuns() | Return the number of event loops run by this RDataFrame instance so far. |
168| GetNSlots() | Return the number of processing slots that RDataFrame will use during the event loop (i.e. the concurrency level). |
169| SaveGraph() | Store the computation graph of an RDataFrame in [DOT format (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) for easy inspection. See the [relevant section](\ref representgraph) for details. |
170
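A short illustration of some of these queries (the tree and file names are illustrative):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
df.Describe().Print(); // summary of the dataset and its columns
for (const auto &name : df.GetColumnNames())
   std::cout << name << ": " << df.GetColumnType(name) << '\n';
~~~
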
171\anchor rdf_intro
172## Introduction
173Users define their analysis as a sequence of operations to be performed on the dataframe object; the framework
174takes care of the management of the loop over entries as well as low-level details such as I/O and parallelization.
175RDataFrame provides methods to perform most common operations required by ROOT analyses;
176at the same time, users can just as easily specify custom code that will be executed in the event loop.
177
178RDataFrame is built with a *modular* and *flexible* workflow in mind, summarised as follows:
179
1801. Construct a dataframe object by specifying a dataset. RDataFrame supports TTree as well as TChain, [CSV files](https://root.cern/doc/master/df014__CSVDataSource_8C.html), [SQLite files](https://root.cern/doc/master/df027__SQliteDependencyOverVersion_8C.html), [RNTuples](https://root.cern/doc/master/structROOT_1_1Experimental_1_1RNTuple.html), and it can be extended to custom data formats. From Python, [NumPy arrays can be imported into RDataFrame](https://root.cern/doc/master/df032__MakeNumpyDataFrame_8py.html) as well.
181
1822. Transform the dataframe by:
183
184 - [Applying filters](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). This selects only specific rows of the dataset.
185
186 - [Creating custom columns](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). Custom columns can, for example, contain the results of a computation that must be performed for every row of the dataset.
187
1883. [Produce results](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#actions). *Actions* are used to aggregate data into results. Most actions are *lazy*, i.e. they are not executed on the spot, but registered with RDataFrame and executed only when a result is accessed for the first time.
189
190Make sure to book all transformations and actions before you access the contents of any of the results. This lets RDataFrame accumulate work and then produce all results at the same time, upon first access to any of them.
191
192The following table shows how analyses based on TTreeReader and TTree::Draw() translate to RDataFrame. Follow the
193[crash course](#crash-course) to discover more idiomatic and flexible ways to express analyses with RDataFrame.
194<table>
195<tr>
196 <td>
197 <b>TTreeReader</b>
198 </td>
199 <td>
200 <b>ROOT::RDataFrame</b>
201 </td>
202</tr>
203<tr>
204 <td>
205~~~{.cpp}
206TTreeReader reader("myTree", file);
207TTreeReaderValue<A_t> a(reader, "A");
208TTreeReaderValue<B_t> b(reader, "B");
209TTreeReaderValue<C_t> c(reader, "C");
210while(reader.Next()) {
211 if(IsGoodEvent(*a, *b, *c))
212 DoStuff(*a, *b, *c);
213}
214~~~
215 </td>
216 <td>
217~~~{.cpp}
218ROOT::RDataFrame d("myTree", file, {"A", "B", "C"});
219d.Filter(IsGoodEvent).Foreach(DoStuff);
220~~~
221 </td>
222</tr>
223<tr>
224 <td>
225 <b>TTree::Draw</b>
226 </td>
227 <td>
228 <b>ROOT::RDataFrame</b>
229 </td>
230</tr>
231<tr>
232 <td>
233~~~{.cpp}
234auto *tree = file->Get<TTree>("myTree");
235tree->Draw("x", "y > 2");
236~~~
237 </td>
238 <td>
239~~~{.cpp}
240ROOT::RDataFrame df("myTree", file);
241auto h = df.Filter("y > 2").Histo1D("x");
242h->Draw();
243~~~
244 </td>
245</tr>
246<tr>
247 <td>
248~~~{.cpp}
249tree->Draw("jet_eta", "weight*(event == 1)");
250~~~
251 </td>
252 <td>
253~~~{.cpp}
254df.Filter("event == 1").Histo1D("jet_eta", "weight");
255// or the fully compiled version:
256df.Filter([] (ULong64_t e) { return e == 1; }, {"event"}).Histo1D<RVec<float>>("jet_eta", "weight");
257~~~
258 </td>
259</tr>
260<tr>
261 <td>
262~~~{.cpp}
263// object selection: for each event, fill histogram with array of selected pts
264tree->Draw("Muon_pt", "Muon_pt > 100");
265~~~
266 </td>
267 <td>
268~~~{.cpp}
269// with RDF, arrays are read as ROOT::VecOps::RVec objects
270df.Define("good_pt", "Muon_pt[Muon_pt > 100]").Histo1D("good_pt");
271~~~
272 </td>
273</tr>
274</table>
275
276\anchor crash-course
277## Crash course
278All snippets of code presented in the crash course can be executed in the ROOT interpreter. Simply precede them with
279~~~{.cpp}
280using namespace ROOT; // RDataFrame's namespace
281~~~
282which is omitted for brevity. The terms "column" and "branch" are used interchangeably.
283
284### Creating an RDataFrame
285RDataFrame's constructor is where the user specifies the dataset and, optionally, a default set of columns that
286operations should work with. Here are the most common methods to construct an RDataFrame object:
287~~~{.cpp}
288// single file -- all constructors are equivalent
289TFile *f = TFile::Open("file.root");
290TTree *t = f->Get<TTree>("treeName");
291
292RDataFrame d1("treeName", "file.root");
293RDataFrame d2("treeName", f); // same as TTreeReader
294RDataFrame d3(*t);
295
296// multiple files -- all constructors are equivalent
297TChain chain("myTree");
298chain.Add("file1.root");
299chain.Add("file2.root");
300
301RDataFrame d4("myTree", {"file1.root", "file2.root"});
302std::vector<std::string> files = {"file1.root", "file2.root"};
303RDataFrame d5("myTree", files);
304RDataFrame d6("myTree", "file*.root"); // the glob is passed as-is to TChain's constructor
305RDataFrame d7(chain);
306~~~
307Additionally, users can construct an RDataFrame with no data source by passing an integer number. This is the number of rows that
308will be generated by this RDataFrame.
309~~~{.cpp}
310RDataFrame d(10); // an RDF with 10 entries (and no columns/branches, for now)
311d.Foreach([] { static int i = 0; std::cout << i++ << std::endl; }); // silly example usage: count to ten
312~~~
313This is useful to generate simple datasets on the fly: the contents of each event can be specified with Define() (explained below). For example, we have used this method to generate [Pythia](https://pythia.org/) events and write them to disk in parallel (with the Snapshot action).
314
315For data sources other than TTrees and TChains, RDataFrame objects are constructed using ad-hoc factory functions (see e.g. FromCSV(), FromSqlite(), FromArrow()):
316
317~~~{.cpp}
318auto df = ROOT::RDF::FromCSV("input.csv");
319// use df as usual
320~~~
321
322### Filling a histogram
323Let's now tackle a very common task, filling a histogram:
324~~~{.cpp}
325// Fill a TH1D with the "MET" branch
326RDataFrame d("myTree", "file.root");
327auto h = d.Histo1D("MET");
328h->Draw();
329~~~
330The first line creates an RDataFrame associated to the TTree "myTree". This tree has a branch named "MET".
331
332Histo1D() is an *action*; it returns a smart pointer (a ROOT::RDF::RResultPtr, to be precise) to a TH1D histogram filled
333with the `MET` of all events. If the quantity stored in the column is a collection (e.g. a vector or an array), the
334histogram is filled with all vector elements for each event.
335
336You can use the objects returned by actions as if they were pointers to the desired results. There are many other
337possible [actions](\ref cheatsheet), and all their results are wrapped in smart pointers; we'll see why in a minute.
338
339### Applying a filter
340Let's say we want to cut over the value of branch "MET" and count how many events pass this cut. This is one way to do it:
341~~~{.cpp}
342RDataFrame d("myTree", "file.root");
343auto c = d.Filter("MET > 4.").Count(); // computations booked, not run
344std::cout << *c << std::endl; // computations run here, upon first access to the result
345~~~
346The filter string (which must contain a valid C++ expression) is applied to the specified columns for each event;
347the names and types of the columns are inferred automatically. The string expression is required to return a `bool`
348which signals whether the event passes the filter (`true`) or not (`false`).
349
350You can think of your data as "flowing" through the chain of calls, being transformed, filtered and finally used to
351perform actions. Multiple Filter() calls can be chained one after another.
352
353Using string filters is nice for simple things, but they are limited to specifying the equivalent of a single return
354statement or the body of a lambda, so it's cumbersome to use strings with more complex filters. They also add a small
355runtime overhead, as ROOT needs to just-in-time compile the string into C++ code. When more freedom is required or
356runtime performance is very important, a C++ callable can be specified instead (a lambda in the following snippet,
357but it can be any kind of function or even a functor class), together with a list of column names.
358This snippet is analogous to the one above:
359~~~{.cpp}
360RDataFrame d("myTree", "file.root");
361auto metCut = [](double x) { return x > 4.; }; // a C++11 lambda function checking "x > 4"
362auto c = d.Filter(metCut, {"MET"}).Count();
363std::cout << *c << std::endl;
364~~~
365
366An example of a more complex filter expressed as a string containing C++ code is shown below
367
368~~~{.cpp}
369RDataFrame d("myTree", "file.root");
370auto df = d.Define("p", "std::array<double, 3> p{px, py, pz}; return p;")
371 .Filter("double p2 = 0.0; for (auto&& x : p) p2 += x*x; return sqrt(p2) < 10.0;");
372~~~
373
374The code snippet above defines a column `p` that is a fixed-size array using the component column names and then
375filters on its magnitude by looping over its elements. Note that defining columns through strings, as above, is
376currently the only possibility when using PyROOT. When writing such expressions, only constants
377and data coming from other columns in the dataset can be involved in the code passed as a string. Local variables and
378functions cannot be used, since the interpreter will not know how to find them. When capturing local state is necessary,
379it must first be declared to the ROOT C++ interpreter.
380
381More information on filters and how to use them to automatically generate cutflow reports can be found [below](#Filters).
382
383### Defining custom columns
384Let's now consider the case in which "myTree" contains two quantities "x" and "y", but our analysis relies on a derived
385quantity `z = sqrt(x*x + y*y)`. Using the Define() transformation, we can create a new column in the dataset containing
386the variable "z":
387~~~{.cpp}
388RDataFrame d("myTree", "file.root");
389auto sqrtSum = [](double x, double y) { return sqrt(x*x + y*y); };
390auto zMean = d.Define("z", sqrtSum, {"x","y"}).Mean("z");
391std::cout << *zMean << std::endl;
392~~~
393Define() creates the variable "z" by applying `sqrtSum` to "x" and "y". Later in the chain of calls we refer to
394variables created with Define() as if they were actual tree branches/columns, but they are evaluated on demand, at most
395once per event. As with filters, Define() calls can be chained with other transformations to create multiple custom
396columns. Define() and Filter() transformations can be concatenated and intermixed at will.
397
398As with filters, it is possible to specify new columns as string expressions. This snippet is analogous to the one above:
399~~~{.cpp}
400RDataFrame d("myTree", "file.root");
401auto zMean = d.Define("z", "sqrt(x*x + y*y)").Mean("z");
402std::cout << *zMean << std::endl;
403~~~
404
405Again the names of the columns used in the expression and their types are inferred automatically. The string must be
406valid C++ and it is just-in-time compiled. This has a small runtime overhead and, as with filters, it is currently the only possible approach when using PyROOT.
407
408Previously, when showing the different ways an RDataFrame can be created, we showed a constructor that takes a
409number of entries as a parameter. In the following example we show how to combine such an "empty" RDataFrame with Define()
410transformations to create a dataset on the fly. We then save the generated data on disk using the Snapshot() action.
411~~~{.cpp}
412RDataFrame d(100); // an RDF that will generate 100 entries (currently empty)
413int x = -1;
414auto d_with_columns = d.Define("x", [&x] { return ++x; })
415 .Define("xx", [&x] { return x*x; });
416d_with_columns.Snapshot("myNewTree", "newfile.root");
417~~~
418This example is slightly more advanced than what we have seen so far. First, it makes use of lambda captures (a
419simple way to make external variables available inside the body of C++ lambdas) to act on the same variable `x` from
420both Define() transformations. Second, we have *stored* the transformed dataframe in a variable. This is always
421possible, since at each point of the transformation chain users can store the status of the dataframe for further use (more
422on this [below](#callgraphs)).
423
424You can read more about defining new columns [here](#custom-columns).
425
426\image html RDF_Graph.png "A graph composed of two branches, one starting with a filter and one with a define. The end point of a branch is always an action."
427
428
429### Running on a range of entries
430It is sometimes necessary to limit the processing of the dataset to a range of entries. For this reason, the RDataFrame
431offers the concept of ranges as a node of the RDataFrame chain of transformations; this means that filters, columns and
432actions can be concatenated to and intermixed with Range()s. If a range is specified after a filter, the range will act
433exclusively on the entries passing the filter -- it will not even count the other entries! The same goes for a Range()
434hanging from another Range(). Here are some commented examples:
435~~~{.cpp}
436RDataFrame d("myTree", "file.root");
437// Here we store a dataframe that loops over only the first 30 entries in a variable
438auto d30 = d.Range(30);
439// This is how you pick all entries from 15 onwards
440auto d15on = d.Range(15, 0);
441// We can specify a stride too, in this case we pick an event every 3
442auto d15each3 = d.Range(0, 15, 3);
443~~~
444Note that ranges are not available when multi-threading is enabled. More information on ranges is available
445[here](#ranges).
446
447### Executing multiple actions in the same event loop
448As a final example let us apply two different cuts on branch "MET" and fill two different histograms with the "pt_v" of
449the filtered events.
450By now, you should be able to easily understand what is happening:
451~~~{.cpp}
452RDataFrame d("treeName", "file.root");
453auto h1 = d.Filter("MET > 10").Histo1D("pt_v");
454auto h2 = d.Histo1D("pt_v");
455h1->Draw(); // event loop is run once here
456h2->Draw("SAME"); // no need to run the event loop again
457~~~
458RDataFrame executes all the above actions by **running the event-loop only once**. The trick is that actions are not
459executed at the moment they are called, but they are **lazy**, i.e. delayed until the moment one of their results is
460accessed through the smart pointer. At that time, the event loop is triggered and *all* results are produced
461simultaneously.
462
463### Properly exploiting RDataFrame laziness
464
465For yet another example of the difference between the correct and incorrect running of the event-loop, see the following
466two code snippets. We assume our ROOT file has branches a, b and c.
467
468The correct way - the dataset is only processed once.
469~~~{.py}
470df_correct = ROOT.RDataFrame(treename, filename)
471
472h_a = df_correct.Histo1D("a")
473h_b = df_correct.Histo1D("b")
474h_c = df_correct.Histo1D("c")
475
476h_a_val = h_a.GetValue()
477h_b_val = h_b.GetValue()
478h_c_val = h_c.GetValue()
479
480print(f"How many times was the data set processed? {df_wrong.GetNRuns()} time.") # The answer will be 1 time.
481~~~
482
483An incorrect way - the dataset is processed three times.
484~~~{.py}
485df_incorrect = ROOT.RDataFrame(treename, filename)
486
487h_a = df_incorrect.Histo1D("a")
488h_a_val = h_a.GetValue()
489
490h_b = df_incorrect.Histo1D("b")
491h_b_val = h_b.GetValue()
492
493h_c = df_incorrect.Histo1D("c")
494h_c_val = h_c.GetValue()
495
496print(f"How many times was the data set processed? {df_wrong.GetNRuns()} times.") # The answer will be 3 times.
497~~~
498
499It is therefore good practice to declare all your transformations and actions *before* accessing their results, allowing
500RDataFrame to run the loop once and produce all results in one go.
501
502### Going parallel
503Let's say we would like to run the previous examples in parallel on several cores, dividing events fairly between cores.
504The only modification required to the snippets would be the addition of this line *before* constructing the main
505dataframe object:
506~~~{.cpp}
507ROOT::EnableImplicitMT();
508~~~
509Simple as that. More details are given [below](#parallel-execution).
510
511\anchor working_with_collections
512## Working with collections and object selections
513
514RDataFrame reads collections as the special type [ROOT::RVec](https://root.cern/doc/master/classROOT_1_1VecOps_1_1RVec.html): for example, a column containing an array of floating point numbers can be read as a ROOT::RVecF. C-style arrays (with variable or static size), STL vectors and most other collection types can be read this way.
515
516RVec is a container similar to std::vector (and can be used just like a std::vector) but it also offers a rich interface to operate on the array elements in a vectorised fashion, similarly to Python's NumPy arrays.
517
518For example, to fill a histogram with the "pt" of selected particles for each event, Define() can be used to create a column that contains the desired array elements as follows:
519
520~~~{.cpp}
521// h is filled with all the elements of `good_pts`, for each event
522auto h = df.Define("good_pts", [](const ROOT::RVecF &pt) { return pt[pt > 0]; })
523 .Histo1D("good_pts");
524~~~
525
526And in Python:
527
528~~~{.py}
529h = df.Define("good_pts", "pt[pt > 0]").Histo1D("good_pts")
530~~~
531
532Learn more at ROOT::VecOps::RVec.
533
534\anchor transformations
535## Transformations: manipulating data
536\anchor Filters
537### Filters
538A filter is created through a call to `Filter(f, columnList)` or `Filter(filterString)`. In the first overload, `f` can
539be a function, a lambda expression, a functor class, or any other callable object. It must return a `bool` signalling
540whether the event has passed the selection (`true`) or not (`false`). It should perform "read-only" operations on the
541columns, and should not have side-effects (e.g. modification of an external or static variable) to ensure correctness
542when implicit multi-threading is active. The second overload takes a string with a valid C++ expression in which column
543names are used as variable names (e.g. `Filter("x[0] + x[1] > 0")`). This is a convenience feature that comes with a
544certain runtime overhead: C++ code has to be generated on the fly from this expression before using it in the event
545loop. See the paragraph about "Just-in-time compilation" below for more information.
546
547RDataFrame only evaluates filters when necessary: if multiple filters are chained one after another, they are executed
548in order and the first one returning `false` causes the event to be discarded and triggers the processing of the next
549entry. If multiple actions or transformations depend on the same filter, that filter is not executed multiple times for
550each entry: after the first access it simply serves a cached result.
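
A minimal sketch of this caching behaviour (the column names are illustrative):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto filtered = df.Filter("MET > 4."); // the filter is evaluated at most once per entry
auto h = filtered.Histo1D("pt_v");     // both results depend on the same filter node,
auto c = filtered.Count();             // so its decision is computed only once and then cached
~~~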
551
552\anchor named-filters-and-cutflow-reports
553#### Named filters and cutflow reports
554An optional string parameter `name` can be passed to the Filter() method to create a **named filter**. Named filters
555work as usual, but also keep track of how many entries they accept and reject.
556
557Statistics are retrieved through a call to the Report() method:
558
559- when Report() is called on the main RDataFrame object, it returns a ROOT::RDF::RResultPtr<RCutFlowReport> relative to all
560named filters declared up to that point
561- when called on a specific node (e.g. the result of a Define() or Filter()), it returns a ROOT::RDF::RResultPtr<RCutFlowReport>
562relative to all named filters in the section of the chain between the main RDataFrame and that node (included).
563
564Stats are stored in the same order as named filters have been added to the graph, and *refer to the latest event-loop*
565that has been run using the relevant RDataFrame.
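
For example, a minimal sketch (the filter expressions and column names are illustrative):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto filtered = df.Filter("MET > 4.", "MET cut").Filter("nJets >= 2", "jet multiplicity");
auto report = filtered.Report(); // lazy, like other actions
report->Print();                 // prints pass/all statistics for each named filter
~~~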
566
567\anchor ranges
568### Ranges
569When RDataFrame is not being used in a multi-thread environment (i.e. no call to EnableImplicitMT() was made),
570Range() transformations are available. These act very much like filters but instead of basing their decision on
571a filter expression, they rely on `begin`,`end` and `stride` parameters.
572
573- `begin`: initial entry number considered for this range.
574- `end`: final entry number (excluded) considered for this range. 0 means that the range goes until the end of the dataset.
575- `stride`: process one entry of the [begin, end) range every `stride` entries. Must be strictly greater than 0.
576
577The actual number of entries processed downstream of a Range() node will be `(end - begin)/stride` (or fewer if fewer
578entries than that are available).
579
580Note that ranges act "locally", not based on the global entry count: `Range(10,50)` means "skip the first 10 entries
581*that reach this node*, let the next 40 entries pass, then stop processing". If a range node hangs from a filter node,
582and the range has a `begin` parameter of 10, that means the range will skip the first 10 entries *that pass the
583preceding filter*.
584
585Ranges allow "early quitting": if all branches of execution of a functional graph reached their `end` value of
586processed entries, the event-loop is immediately interrupted. This is useful for debugging and quick data explorations.
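
A minimal sketch of the "local" behaviour of ranges described above (the column name is illustrative):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
// skip the first 10 entries that pass the filter, then let the next 40 through
auto c = df.Filter("MET > 4.").Range(10, 50).Count();
~~~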
587
588\anchor custom-columns
589### Custom columns
590Custom columns are created by invoking `Define(name, f, columnList)`. As usual, `f` can be any callable object
591(function, lambda expression, functor class...); it takes the values of the columns listed in `columnList` (a list of
592strings) as parameters, in the same order as they are listed in `columnList`. `f` must return the value that will be
593assigned to the temporary column.
594
595A new column called `name` is created, and it is accessible from subsequent transformations/actions as if it were
596contained in the dataset.
597
598Use cases include:
599- caching the results of complex calculations for easy and efficient multiple access
600- extraction of quantities of interest from complex objects
601- branch aliasing, i.e. changing the name of a branch
602
603An exception is thrown if the `name` of the new column/branch is already in use for another branch in the TTree.
604
605It is also possible to specify the quantity to be stored in the new temporary column as a C++ expression with the method
606`Define(name, expression)`. For example this invocation
607
608~~~{.cpp}
609df.Define("pt", "sqrt(px*px + py*py)");
610~~~
611
612will create a new column called "pt", the value of which is calculated starting from the columns px and py. The system
613builds a just-in-time compiled function starting from the expression after having deduced the list of necessary branches
614from the names of the variables specified by the user.
615
616#### Custom columns as function of slot and entry number
617
618It is also possible to create custom columns as a function of the processing slot and entry numbers. The methods that can
619be invoked are:
620- `DefineSlot(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, T1, T2, ...)`: the
621first parameter is the slot number which ranges from 0 to ROOT::GetThreadPoolSize() - 1.
622- `DefineSlotEntry(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, ULong64_t,
623T1, T2, ...)`: the first parameter is the slot number while the second one is the number of the entry being processed.
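
As an illustration, a minimal sketch of both methods (the column names and the use of one random generator per slot are just an example):

~~~{.cpp}
ROOT::EnableImplicitMT();
ROOT::RDataFrame df(100);
std::vector<TRandom3> rngs(df.GetNSlots()); // one generator per processing slot
auto df2 = df.DefineSlot("rnd", [&rngs](unsigned int slot) { return rngs[slot].Gaus(); })
             .DefineSlotEntry("entry_id", [](unsigned int slot, ULong64_t entry) { return entry; });
~~~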
624
625\anchor actions
626## Actions: getting results
627### Instant and lazy actions
628Actions can be **instant** or **lazy**. Instant actions are executed as soon as they are called, while lazy actions are
629executed whenever the object they return is accessed for the first time. As a rule of thumb, actions with a return value
630are lazy, the others are instant.
631
632### Return type of a lazy action
633
634When a lazy action is called, it returns a \link ROOT::RDF::RResultPtr ROOT::RDF::RResultPtr<T>\endlink, where T is the
635type of the result of the action. The final result will be stored in the `RResultPtr` and can be retrieved by
636dereferencing it or via its `GetValue` method.
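
For instance, a minimal sketch (the column name is illustrative):

~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
ROOT::RDF::RResultPtr<double> m = df.Mean("MET"); // lazy: nothing has run yet
double v1 = m.GetValue(); // the event loop runs here, upon first access
double v2 = *m;           // equivalent: dereference the RResultPtr
~~~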
637
638### Actions that return collections
639
640If the type of the return value of an action is a collection, e.g. `std::vector<int>`, you can iterate its elements
641directly through the wrapping `RResultPtr`:
642
643~~~{.cpp}
644ROOT::RDataFrame df{5};
645auto df1 = df.Define("x", []{ return 42; });
646for (const auto &el: df1.Take<int>("x")){
647 std::cout << "Element: " << el << "\n";
648}
649~~~
650
651~~~{.py}
652df = ROOT.RDataFrame(5).Define("x", "42")
653for el in df.Take[int]("x"):
654 print(f"Element: {el}")
655~~~
656
657### Actions and readers
658
659An action that needs values for its computations will request it from a reader, e.g. a column created via `Define` or
660available from the input dataset. The action will request values from each column of the list of input columns (either
661inferred or specified by the user), in order. For example:
662
663~~~{.cpp}
664ROOT::RDataFrame df{1};
665auto df1 = df.Define("x", []{ return 11; });
666auto df2 = df1.Define("y", []{ return 22; });
667auto graph = df2.Graph<int, int>("x","y");
668~~~
669
670The `Graph` action is going to request first the value from column "x", then that of column "y". Specifically, the order
671of execution of the operations of nodes in this branch of the computation graph is guaranteed to be top to bottom.
672
673\anchor rdf_distrdf
674## Distributed execution
675
676RDataFrame applications can be executed in parallel through distributed computing frameworks on a set of remote machines
677thanks to the Python package `ROOT.RDF.Experimental.Distributed`. This experimental, **Python-only** package makes it possible to scale the
678optimized performance RDataFrame can achieve on a single machine to multiple nodes at the same time. It is designed so
679that different backends can be easily plugged in, currently supporting [Apache Spark](http://spark.apache.org/) and
680[Dask](https://dask.org/). To make use of distributed RDataFrame, you only need to switch `ROOT.RDataFrame` with
681the backend-specific `RDataFrame` of your choice, for example:
682
683~~~{.py}
684import ROOT
685from distributed import Client
686# It still accepts the same constructor arguments as traditional RDataFrame
687# but needs a client object which allows connecting to one of the supported
688# schedulers (read more info below)
689client = Client(...)
690df = ROOT.RDataFrame("mytree", "myfile.root", executor=client)
691
692# Continue the application with the traditional RDataFrame API
693sum = df.Filter("x > 10").Sum("y")
694h = df.Histo1D(("name", "title", 10, 0, 10), "x")
695
696print(sum.GetValue())
697h.Draw()
698~~~
699
700The main goal of this package is to support running any RDataFrame application in a distributed fashion. Nonetheless, not all
701parts of the RDataFrame API currently work with this package. The subset that is currently available is:
702- Alias
703- AsNumpy
704- Count
705- DefaultValueFor
706- Define
707- DefinePerSample
708- Filter
709- FilterAvailable
710- FilterMissing
711- Graph
712- Histo[1,2,3]D
713- HistoND
714- Max
715- Mean
716- Min
717- Profile[1,2,3]D
718- Redefine
719- Snapshot
720- Stats
721- StdDev
722- Sum
723- Systematic variations: Vary and [VariationsFor](\ref ROOT::RDF::Experimental::VariationsFor).
724- Parallel submission of distributed graphs: [RunGraphs](\ref ROOT::RDF::RunGraphs).
725- Information about the dataframe: GetColumnNames.
726
727with support for more operations coming in the future. The currently supported data sources are TTree, TChain, RNTuple and RDatasetSpec.
728
729\note The distributed RDataFrame module requires at least Python version 3.8.
730
731### Connecting to a Spark cluster
732
733In order to distribute the RDataFrame workload, you can connect to a Spark cluster you have access to through the
734official [Spark API](https://spark.apache.org/docs/latest/rdd-programming-guide.html#initializing-spark), then hook the
735connection instance to the distributed `RDataFrame` object like so:
736
737~~~{.py}
738from pyspark import SparkConf, SparkContext
739import ROOT
740
741# Create a SparkContext object with the right configuration for your Spark cluster
742conf = SparkConf().setAppName(appName).setMaster(master)
743sc = SparkContext(conf=conf)
744
745# The RDataFrame constructor accepts the SparkContext object through the optional
746# "executor" keyword argument and will distribute the application to the connected cluster
747df = ROOT.RDataFrame("mytree", "myfile.root", executor=sc)
748~~~
749
750Note that with the usage above the case of `executor = None` is not supported. One
751can explicitly create a `ROOT.RDF.Experimental.Distributed.Spark.RDataFrame` object
752in order to get a default instance of
753[SparkContext](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.SparkContext.html)
754in case it is not already provided as argument.
755
756### Connecting to a Dask cluster
757
758Similarly, you can connect to a Dask cluster by creating your own connection object which internally operates with one
759of the cluster schedulers supported by Dask (more information in the
760[Dask distributed docs](http://distributed.dask.org/en/stable/)):
761
762~~~{.py}
763import ROOT
764from dask.distributed import Client
765
766# In a Python script the Dask client needs to be initialized in a context
767# Jupyter notebooks / Python session don't need this
768if __name__ == "__main__":
769 # With an already setup cluster that exposes a Dask scheduler endpoint
770 client = Client("dask_scheduler.domain.com:8786")
771
772 # The Dask RDataFrame constructor accepts the Dask Client object as an optional argument
773 df = ROOT.RDataFrame("mytree", "myfile.root", executor=client)
774 # Proceed as usual
775 df.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
776~~~
777
778Note that with the usage above the case of `executor = None` is not supported. One
779can explicitly create a `ROOT.RDF.Experimental.Distributed.Dask.RDataFrame` object
780in order to get a default instance of
781[distributed.Client](http://distributed.dask.org/en/stable/api.html#distributed.Client)
782in case it is not already provided as argument. This will run multiple processes
783on the local machine using all available cores.
784
785### Choosing the number of distributed tasks
786
787A distributed RDataFrame has internal logic to define in how many chunks the input dataset will be split before sending
788tasks to the distributed backend. Each task reads and processes one of said chunks. The logic is backend-dependent, but
789generically tries to infer how many cores are available in the cluster through the connection object. The number of
790tasks will be equal to the inferred number of cores. There are cases where the connection object of the chosen backend
791doesn't have information about the actual resources of the cluster. An example of this is when using Dask to connect to
792a batch system. The client object created at the beginning of the application does not automatically know how many cores
793will be available during distributed execution, since the jobs are submitted to the batch system after the creation of
794the connection. In such cases, the logic defaults to processing the whole dataset in 2 tasks.
795
796The number of tasks submitted for distributed execution can also be set programmatically, by providing the optional
797keyword argument `npartitions` when creating the RDataFrame object. This parameter is accepted irrespective of the
798backend used:
799
800~~~{.py}
801import ROOT
802
803if __name__ == "__main__":
804 # The `npartitions` optional argument tells the RDataFrame how many tasks are desired
805 df = ROOT.RDataFrame("mytree", "myfile.root", executor=SupportedExecutor(...), npartitions=NPARTITIONS)
806 # Proceed as usual
807 df.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
808~~~
809
810Note that when processing a TTree or TChain dataset, the `npartitions` value should not exceed the number of clusters in
811the dataset. The number of clusters in a TTree can be retrieved by typing `rootls -lt myfile.root` at a command line.
812
813### Distributed Snapshot
814
815The Snapshot operation behaves slightly differently when executed in distributed mode. First off, it requires the path
816supplied to the Snapshot call to be accessible from any worker of the cluster and from the client machine (in general
817it should be provided as an absolute path). Another important difference is that `n` separate files will be produced,
818where `n` is the number of dataset partitions. As with local RDataFrame, the result of a Snapshot on a distributed
819RDataFrame is another distributed RDataFrame on which we can define a new computation graph and run more distributed
820computations.
821
822### Distributed RunGraphs
823
824Submitting multiple distributed RDataFrame executions is supported through the RunGraphs function. Similarly to its
825local counterpart, the function expects an iterable of objects representing an RDataFrame action. Each action will be
826triggered concurrently to send multiple computation graphs to a distributed cluster at the same time:
827
828~~~{.py}
829import ROOT
830
831# Create 3 different dataframes and book a histogram on each one
832histoproxies = [
833 ROOT.RDataFrame(100, executor=SupportedExecutor(...))
834 .Define("x", "rdfentry_")
835 .Histo1D(("name", "title", 10, 0, 100), "x")
836 for _ in range(3)
837]
838
839# Execute the 3 computation graphs
840ROOT.RDF.RunGraphs(histoproxies)
841# Retrieve all the histograms in one go
842histos = [histoproxy.GetValue() for histoproxy in histoproxies]
843~~~
844
845Every distributed backend supports this feature and graphs belonging to different backends can still be triggered with
846a single call to RunGraphs (e.g. it is possible to send a Spark job and a Dask job at the same time).
847
848### Histogram models in distributed mode
849
850When calling a Histo*D operation in distributed mode, remember to pass to the function the model of the histogram to be
851computed, e.g. the axis range and the number of bins:
852
853~~~{.py}
854import ROOT
855
856if __name__ == "__main__":
857 df = ROOT.RDataFrame("mytree","myfile.root",executor=SupportedExecutor(...)).Define("x","someoperation")
858 # The model can be passed either as a tuple with the arguments in the correct order
859 df.Histo1D(("name", "title", 10, 0, 10), "x")
860 # Or by creating the specific struct
861 model = ROOT.RDF.TH1DModel("name", "title", 10, 0, 10)
862 df.Histo1D(model, "x")
863~~~
864
865Without this, two partial histograms resulting from two distributed tasks would have incompatible binning, thus leading
866to errors when merging them. Failing to pass a histogram model will raise an error on the client side, before starting
867the distributed execution.
868
869### Live visualization in distributed mode with Dask
870
871The live visualization feature allows real-time data representation of plots generated during the execution
872of a distributed RDataFrame application.
873It enables visualizing intermediate results as they are computed across multiple nodes of a Dask cluster
874by creating a canvas and continuously updating it as partial results become available.
875
876The LiveVisualize() function can be imported from the Python package **ROOT.RDF.Experimental.Distributed**:
877
878~~~{.py}
879import ROOT
880
881LiveVisualize = ROOT.RDF.Experimental.Distributed.LiveVisualize
882~~~
883
884The function takes drawable objects (e.g. histograms) and optional callback functions as arguments; it accepts 4 different input formats:
885
886- Passing a list or tuple of drawables:
887You can pass a list or tuple containing the plots you want to visualize. For example:
888
889~~~{.py}
890LiveVisualize([h_gaus, h_exp, h_random])
891~~~
892
893- Passing a list or tuple of drawables with a global callback function:
894You can also include a global callback function that will be applied to all plots. For example:
895
896~~~{.py}
897def set_fill_color(hist):
898 hist.SetFillColor(ROOT.kBlue)
899
900LiveVisualize([h_gaus, h_exp, h_random], set_fill_color)
901~~~
902
903- Passing a Dictionary of drawables and callback functions:
904For more control, you can create a dictionary where keys are plots and values are corresponding (optional) callback functions. For example:
905
906~~~{.py}
907plot_callback_dict = {
908 graph: set_marker,
909 h_exp: fit_exp,
910 tprofile_2d: None
911}
912
913LiveVisualize(plot_callback_dict)
914~~~
915
916- Passing a Dictionary of drawables and callback functions with a global callback function:
917You can also combine a dictionary of plots and callbacks with a global callback function:
918
919~~~{.py}
920LiveVisualize(plot_callback_dict, write_to_tfile)
921~~~
922
923\note The allowed operations to pass to LiveVisualize are:
924 - Histo1D(), Histo2D(), Histo3D()
925 - Graph()
926 - Profile1D(), Profile2D()
927
928\warning The Live Visualization feature is only supported for the Dask backend.
929
930### Injecting C++ code and using external files into distributed RDF script
931
932Distributed RDF provides an interface for users who want to inject C++ code (via header files, shared libraries or direct declarations)
933into their distributed RDF application, or whose application needs information from external files that should be distributed
934to the workers (for example, a JSON or txt file with the necessary parameters).
935
936The examples below show the usage of these interface functions: first, how this is done in a local Python
937RDF application and second, how it is done in a distributed one.
938
939#### Include and distribute header files.
940
941~~~{.py}
942# Local RDataFrame script
943ROOT.gInterpreter.AddIncludePath("myheader.hxx")
944df.Define(...)
945
946# Distributed RDF script
947ROOT.RDF.Experimental.Distributed.DistributeHeaders("myheader.hxx")
948df.Define(...)
949~~~
950
951#### Load and distribute shared libraries
952
953~~~{.py}
954# Local RDataFrame script
955ROOT.gSystem.Load("my_library.so")
956df.Define(...)
957
958# Distributed RDF script
959ROOT.RDF.Experimental.Distributed.DistributeSharedLibs("my_library.so")
960df.Define(...)
961~~~
962
963#### Declare and distribute the cpp code
964
965The C++ code declared this way is made available to all dataframes.
966
967~~~{.py}
968# Local RDataFrame script
969ROOT.gInterpreter.Declare("my_code")
970df.Define(...)
971
972# Distributed RDF script
973ROOT.RDF.Experimental.Distributed.DistributeCppCode("my_code")
974df.Define(...)
975~~~
976
977#### Distribute additional files (other than headers or shared libraries).
978
979~~~{.py}
980# Local RDataFrame script is not applicable here as local RDF application can simply access the external files it needs.
981
982# Distributed RDF script
983ROOT.RDF.Experimental.Distributed.DistributeFiles("my_file")
984df.Define(...)
985~~~
986
987\anchor parallel-execution
988## Performance tips and parallel execution
989As pointed out before in this document, RDataFrame can transparently perform multi-threaded event loops to speed up
990the execution of its actions. Users have to call ROOT::EnableImplicitMT() *before* constructing the RDataFrame
991object to indicate that it should take advantage of a pool of worker threads. **Each worker thread processes a distinct
992subset of entries**, and their partial results are merged before returning the final values to the user.
993There are no guarantees on the order in which threads will process the batches of entries.
994In particular, note that this means that, for multi-thread event loops, there is no
995guarantee on the order in which Snapshot() will _write_ entries: they could be scrambled with respect to the input dataset. The values of the special `rdfentry_` column will also not correspond to the entry numbers in the input dataset (e.g. TChain) in multi-thread runs.
996
997\warning By default, RDataFrame will use as many threads as the hardware supports, using up **all** the resources on
998a machine. This might be undesirable on shared computing resources such as a batch cluster. Therefore, in such cases, use
999~~~{.cpp}
1000ROOT::EnableImplicitMT(i);
1001~~~
1002replacing `i` with the number of CPUs/slots that were allocated for this job.
1003
1004### Thread-safety of user-defined expressions
1005RDataFrame operations such as Histo1D() or Snapshot() are guaranteed to work correctly in multi-thread event loops.
1006User-defined expressions, such as strings or lambdas passed to Filter(), Define(), Foreach(), Reduce() or Aggregate()
1007must be thread-safe, i.e. it must be possible to call them concurrently from different threads.
1008
1009Note that simple Filter() and Define() transformations will inherently satisfy this requirement: Filter() / Define()
1010expressions will often be *pure* in the functional programming sense (no side-effects, no dependency on external state),
1011which eliminates all risks of race conditions.
1012
1013In order to facilitate writing of thread-safe operations, some RDataFrame features such as Foreach(), Define() or \link ROOT::RDF::RResultPtr::OnPartialResult OnPartialResult()\endlink
1014offer thread-aware counterparts (ForeachSlot(), DefineSlot(), \link ROOT::RDF::RResultPtr::OnPartialResultSlot OnPartialResultSlot()\endlink): their only difference is that they
1015will pass an extra `slot` argument (an unsigned integer) to the user-defined expression. When calling user-defined code
1016concurrently, RDataFrame guarantees that different threads will employ different values of the `slot` parameter,
1017where `slot` will be a number between 0 and `GetNSlots() - 1`.
1018In other words, within a slot, computation runs sequentially and events are processed sequentially.
1019Note that the same slot might be associated to different threads over the course of a single event loop, but two threads
1020will never receive the same slot at the same time.
1021This extra parameter might facilitate writing safe parallel code by having each thread write/modify a different
1022processing slot, e.g. a different element of a list. See [here](#generic-actions) for an example usage of ForeachSlot().
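
As a quick sketch, a thread-safe ForeachSlot() where each slot accumulates into its own partial sum (the column name "x" is just an example):

~~~{.cpp}
ROOT::EnableImplicitMT();
ROOT::RDataFrame df("myTree", "file.root");
std::vector<double> sums(df.GetNSlots(), 0.);
// concurrent invocations with different `slot` values write to different elements of `sums`
df.ForeachSlot([&sums](unsigned int slot, double x) { sums[slot] += x; }, {"x"});
double total = 0.;
for (double s : sums) total += s;
~~~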
1023
1024### Parallel execution of multiple RDataFrame event loops
1025A complex analysis may require multiple separate RDataFrame computation graphs to produce all desired results. This poses the challenge that the
1026event loops of each computation graph can be parallelized, but the different loops run sequentially, one after the other.
1027On many-core architectures it might be desirable to run different event loops concurrently to improve resource usage.
1028ROOT::RDF::RunGraphs() allows running multiple RDataFrame event loops concurrently:
1029~~~{.cpp}
1030ROOT::EnableImplicitMT();
1031ROOT::RDataFrame df1("tree1", "f1.root");
1032ROOT::RDataFrame df2("tree2", "f2.root");
1033auto histo1 = df1.Histo1D("x");
1034auto histo2 = df2.Histo1D("y");
1035
1036// just accessing result pointers, the event loops of separate RDataFrames run one after the other
1037histo1->Draw(); // runs first multi-thread event loop
1038histo2->Draw(); // runs second multi-thread event loop
1039
1040// alternatively, with ROOT::RDF::RunGraphs, event loops for separate computation graphs can run concurrently
1041ROOT::RDF::RunGraphs({histo1, histo2});
1042histo1->Draw(); // results can then be used as usual
1043~~~
1044
1045### Performance considerations
1046
1047To obtain the maximum performance out of RDataFrame, make sure to avoid just-in-time compiled versions of transformations and actions if at all possible.
1048For instance, `Filter("x > 0")` requires just-in-time compilation of the corresponding C++ logic, while the equivalent `Filter([](float x) { return x > 0.; }, {"x"})` does not.
1049Similarly, `Histo1D("x")` requires just-in-time compilation after the type of `x` is retrieved from the dataset, while `Histo1D<float>("x")` does not; the latter spelling
1050should be preferred for performance-critical applications.
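
As a minimal illustration (not part of the original guide; it assumes a dataset with a `float` column "x"), the two bookings below are equivalent, but only the first one triggers just-in-time compilation:
~~~{.cpp}
auto h_jitted = df.Filter("x > 0").Histo1D("x");                   // filter and action are just-in-time compiled
auto h_typed  = df.Filter([](float x) { return x > 0.f; }, {"x"})  // fully typed: no just-in-time compilation
                  .Histo1D<float>("x");
~~~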
1051
1052Python applications cannot easily specify template parameters or pass C++ callables to RDataFrame.
1053See [Python interface](classROOT_1_1RDataFrame.html#python) for possible ways to speed up hot paths in this case.
1054
1055Just-in-time compilation happens once, right before starting an event loop. To reduce the runtime cost of this step, make sure to book all operations *for all RDataFrame computation graphs*
1056before the first event loop is triggered: just-in-time compilation will happen once for all code required to be generated up to that point, also across different computation graphs.
1057
1058Also make sure not to count the just-in-time compilation time (which happens once before the event loop and does not depend on the size of the dataset) as part of the event loop runtime (which scales with the size of the dataset). RDataFrame has an experimental logging feature that simplifies measuring the time spent in just-in-time compilation and in the event loop (as well as providing some more interesting information). See [Activating RDataFrame execution logs](\ref rdf-logging).
1059
1060### Memory usage
1061
1062There are two reasons why RDataFrame may consume more memory than expected. Firstly, each result is duplicated for each worker thread; in the case of many (possibly multi-dimensional) histograms with fine binning, this can result in visible memory consumption during the event loop. The thread-local copies of the results are destroyed when the final result is produced. Reducing the number of threads or using coarser binning will reduce the memory usage.
1063
1064Secondly, just-in-time compilation of string expressions or non-templated actions (see the previous paragraph) causes Cling, ROOT's C++ interpreter, to allocate some memory for the generated code that is only released at the end of the application. This commonly results in memory usage creep in long-running applications that create many RDataFrames one after the other. Possible mitigations include creating and running each RDataFrame event loop in a sub-process, or booking all operations for all different RDataFrame computation graphs before the first event loop is triggered, so that the interpreter is invoked only once for all computation graphs:
1065
1066~~~{.cpp}
1067// assuming df1 and df2 are separate computation graphs, do:
1068auto h1 = df1.Histo1D("x");
1069auto h2 = df2.Histo1D("y");
1070h1->Draw(); // we just-in-time compile everything needed by df1 and df2 here
1071h2->Draw("SAME");
1072
1073// do not:
1074auto h1 = df1.Histo1D("x");
1075h1->Draw(); // we just-in-time compile here
1076auto h2 = df2.Histo1D("y");
1077h2->Draw("SAME"); // we just-in-time compile again here, as the second Histo1D call is new
1078~~~
1079
1080\anchor more-features
1081## More features
1082Here is a list of the most important features that have been omitted in the "Crash course" for brevity.
1083You don't need to read all these to start using RDataFrame, but they are useful to save typing time and runtime.
1084
1085\anchor systematics
1086### Systematic variations
1087
1088Starting from ROOT v6.26, RDataFrame provides a flexible syntax to define systematic variations.
1089This is done in two steps: a) register variations for one or more existing columns using Vary() and b) extract variations
1090of normal RDataFrame results using \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()". In between these steps, no other change
1091to the analysis code is required: the presence of systematic variations for certain columns is automatically propagated
1092through filters, defines and actions, and RDataFrame will take these dependencies into account when producing varied
1093results. \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" is included in the header `ROOT/RDFHelpers.hxx`. Compiled C++ programs must include this header
1094explicitly; this is not required for ROOT macros.
1095
1096An example usage of Vary() and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in C++:
1097
1098~~~{.cpp}
1099auto nominal_hx =
1100 df.Vary("pt", "ROOT::RVecD{pt*0.9f, pt*1.1f}", {"down", "up"})
1101 .Filter("pt > pt_cut")
1102 .Define("x", someFunc, {"pt"})
1103 .Histo1D<float>("x");
1104
1105// request the generation of varied results from the nominal_hx
1106ROOT::RDF::Experimental::RResultMap<TH1D> hx = ROOT::RDF::Experimental::VariationsFor(nominal_hx);
1107
1108// the event loop runs here, upon first access to any of the results or varied results:
1109hx["nominal"].Draw(); // same effect as nominal_hx->Draw()
1110hx["pt:down"].Draw("SAME");
1111hx["pt:up"].Draw("SAME");
1112~~~
1113
1114A list of variation "tags" is passed as the last argument to Vary(). The tags give names to the varied values that are returned
1115as elements of an RVec of the appropriate C++ type. The number of variation tags must correspond to the number of elements of
1116this RVec (2 in the example above: the first element will correspond to the tag "down", the second
1117to the tag "up"). The _full_ variation name will be composed of the varied column name and the variation tags (e.g.
1118"pt:down", "pt:up" in this example). Python usage looks similar.
1119
1120Note how we use the "pt" column as usual in the Filter() and Define() calls and we simply use "x" as the value to fill
1121the resulting histogram. To produce the varied results, RDataFrame will automatically execute the Filter and Define
1122calls for each variation and fill the histogram with values and cuts that depend on the variation.
1123
1124There is no limitation to the complexity of a Vary() expression. Just like for the Define() and Filter() calls, users are
1125not limited to string expressions but they can also pass any valid C++ callable, including lambda functions and
1126complex functors. The callable can be applied to zero or more existing columns and it will always receive their
1127_nominal_ values as input.
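
For example, the string expression used above could equally be written as a lambda (a sketch, assuming here that "pt" is of type `double`); the callable receives the nominal value and returns one varied value per tag:
~~~{.cpp}
df.Vary("pt",
        [](double pt) { return ROOT::RVecD{pt * 0.9, pt * 1.1}; },
        /*inputColumns=*/{"pt"}, /*variationTags=*/{"down", "up"});
~~~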
1128
1129#### Varying multiple columns in lockstep
1130
1131In the following Python snippet we use the Vary() signature that allows varying multiple columns simultaneously or
1132"in lockstep":
1133
1134~~~{.python}
1135df.Vary(["pt", "eta"],
1136 "RVec<RVecF>{{pt*0.9, pt*1.1}, {eta*0.9, eta*1.1}}",
1137 variationTags=["down", "up"],
1138 variationName="ptAndEta")
1139~~~
1140
1141The expression returns an RVec of two RVecs: each inner vector contains the varied values for one column. The
1142inner vectors follow the same ordering as the column names that are passed as the first argument. Besides the variation tags, in
1143this case we also have to explicitly pass the variation name (here: "ptAndEta") as the default column name does not exist.
1144
1145The above call will produce variations "ptAndEta:down" and "ptAndEta:up".
1146
1147#### Combining multiple variations
1148
1149Even if a result depends on multiple variations, only one variation is applied at a time, i.e. there will be no result produced
1150by applying multiple systematic variations at the same time.
1151For example, in the following snippet, the RResultMap instance `all_hs` will contain keys "nominal", "pt:down",
1152"pt:up", "eta:0", "eta:1", but no "pt:up&&eta:0" or similar:
1153
1154~~~{.cpp}
1155auto df = _df.Vary("pt",
1156 "ROOT::RVecD{pt*0.9, pt*1.1}",
1157 {"down", "up"})
1158 .Vary("eta",
1159 [](float eta) { return RVecF{eta*0.9f, eta*1.1f}; },
1160 {"eta"},
1161 2);
1162
1163auto nom_h = df.Histo2D(histoModel, "pt", "eta");
1164auto all_hs = VariationsFor(nom_h);
1165all_hs.GetKeys(); // returns {"nominal", "pt:down", "pt:up", "eta:0", "eta:1"}
1166~~~
1167
1168Note how we passed the integer `2` instead of a list of variation tags to the second Vary() invocation: this is a
1169shorthand that automatically generates tags 0 to N-1 (in this case 0 and 1).
1170
1171\note Currently, VariationsFor() and RResultMap are in the `ROOT::RDF::Experimental` namespace, to indicate that these
1172 interfaces might still evolve and improve based on user feedback. We expect that some aspects of the related
1173 programming model will be streamlined in future versions.
1174
1175\note Currently, the results of a Snapshot() or Display() call cannot be varied (i.e. it is not possible to
1176 call \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" on them). These limitations will be lifted in future releases.
1177
1178See the Vary() method for more information and [this tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8C.html)
1179for an example usage of Vary and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in the analysis.
1180
1181\anchor rnode
1182### RDataFrame objects as function arguments and return values
1183RDataFrame variables/nodes are relatively cheap to copy and it's possible to both pass them to (or move them into)
1184functions and to return them from functions. However, in general each dataframe node will have a different C++ type,
1185which includes all available compile-time information about what that node does. One way to cope with this complication
1186is to use template functions and/or C++14 auto return types:
1187~~~{.cpp}
1188template <typename RDF>
1189auto ApplySomeFilters(RDF df)
1190{
1191 return df.Filter("x > 0").Filter([](int y) { return y < 0; }, {"y"});
1192}
1193~~~
1194
1195A possibly simpler, C++11-compatible alternative is to take advantage of the fact that any dataframe node can be
1196converted (implicitly or via an explicit cast) to the common type ROOT::RDF::RNode:
1197~~~{.cpp}
1198// a function that conditionally adds a Range to an RDataFrame node.
1199RNode MaybeAddRange(RNode df, bool mustAddRange)
1200{
1201 return mustAddRange ? df.Range(1) : df;
1202}
1203// use as :
1204ROOT::RDataFrame df(10);
1205auto maybeRangedDF = MaybeAddRange(df, true);
1206~~~
1207
1208The conversion to ROOT::RDF::RNode is cheap, but it will introduce an extra virtual call during the RDataFrame event
1209loop (in most cases, the resulting performance impact should be negligible). Python users can perform the conversion with the helper function `ROOT.RDF.AsRNode`.
1210
1211\anchor RDFCollections
1212### Storing RDataFrame objects in collections
1213
1214ROOT::RDF::RNode also makes it simple to store RDataFrame nodes in collections, e.g. a `std::vector<RNode>` or a `std::map<std::string, RNode>`:
1215
1216~~~{.cpp}
1217std::vector<ROOT::RDF::RNode> dfs;
1218dfs.emplace_back(ROOT::RDataFrame(10));
1219dfs.emplace_back(dfs[0].Define("x", "42.f"));
1220~~~
1221
1222\anchor callbacks
1223### Executing callbacks every N events
1224It's possible to schedule execution of arbitrary functions (callbacks) during the event loop.
1225Callbacks can be used e.g. to inspect partial results of the analysis while the event loop is running,
1226drawing a partially-filled histogram every time a certain number of new entries is processed, or
1227displaying a progress bar while the event loop runs.
1228
1229For example one can draw an up-to-date version of a result histogram every 100 entries like this:
1230~~~{.cpp}
1231auto h = df.Histo1D("x");
1232TCanvas c("c","x hist");
1233h.OnPartialResult(100, [&c](TH1D &h_) { c.cd(); h_.Draw(); c.Update(); });
1234// event loop runs here, this final `Draw` is executed after the event loop is finished
1235h->Draw();
1236~~~
1237
1238Callbacks are registered to a ROOT::RDF::RResultPtr and must be callables that take a reference to the result type as argument
1239and return nothing. RDataFrame will invoke registered callbacks passing partial action results as arguments to them
1240(e.g. a histogram filled with a part of the selected events).
1241
1242Read more on ROOT::RDF::RResultPtr::OnPartialResult() and ROOT::RDF::RResultPtr::OnPartialResultSlot().
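
For instance, in multi-thread event loops a callback registered via OnPartialResultSlot() also receives the slot number, so its side effects can be protected explicitly (a sketch, not part of the original guide; the printout and the choice of a mutex are illustrative):
~~~{.cpp}
std::mutex printMutex;
auto c = df.Count();
c.OnPartialResultSlot(100, [&printMutex](unsigned int slot, ULong64_t &count) {
   std::lock_guard<std::mutex> lock(printMutex);
   std::cout << "slot " << slot << ": " << count << " entries processed so far\n";
});
std::cout << "total entries: " << *c << std::endl; // the event loop runs here, upon first access to the result
~~~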
1243
1244\anchor default-branches
1245### Default column lists
1246When constructing an RDataFrame object, it is possible to specify a **default column list** for your analysis, in the
1247usual form of a list of strings representing branch/column names. The default column list will be used as a fallback
1248whenever a list specific to the transformation/action is not present. RDataFrame will take as many of these columns as
1249needed, ignoring trailing extra names if present.
1250~~~{.cpp}
1251// use "b1" and "b2" as default columns
1252RDataFrame d1("myTree", "file.root", {"b1","b2"});
1253auto h = d1.Filter([](int b1, int b2) { return b1 > b2; }) // will act on "b1" and "b2"
1254 .Histo1D(); // will act on "b1"
1255
1256// just one default column this time
1257RDataFrame d2("myTree", "file.root", {"b1"});
1258auto d2f = d2.Filter([](double b2) { return b2 > 0; }, {"b2"}); // we can still specify non-default column lists
1259auto min = d2f.Min(); // returns the minimum value of "b1" for the filtered entries
1260auto vals = d2f.Take<double>(); // returns the values of "b1" for all entries passing the selection, as a vector
1261~~~
1262
1263\anchor helper-cols
1264### Special helper columns: rdfentry_ and rdfslot_
1265Every instance of RDataFrame is created with two special columns called `rdfentry_` and `rdfslot_`. The `rdfentry_`
1266column is of type `ULong64_t` and it holds the current entry number while `rdfslot_` is an `unsigned int`
1267holding the index of the current data processing slot.
1268For backwards compatibility reasons, the names `tdfentry_` and `tdfslot_` are also accepted.
1269These columns are ignored by operations such as [Cache](classROOT_1_1RDF_1_1RInterface.html#aaaa0a7bb8eb21315d8daa08c3e25f6c9)
1270or [Snapshot](classROOT_1_1RDF_1_1RInterface.html#a233b7723e498967f4340705d2c4db7f8).
1271
1272\warning Note that in multi-thread event loops the values of `rdfentry_` _do not_ correspond to what would be the entry numbers
1273of a TChain constructed over the same set of ROOT files, as the entries are processed in an unspecified order.
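
A minimal usage sketch (not part of the original text; the defined column name is arbitrary):
~~~{.cpp}
ROOT::RDataFrame df(5);
auto d = df.Define("entry_is_odd", "rdfentry_ % 2 == 1");
d.Display({"rdfentry_", "rdfslot_", "entry_is_odd"})->Print();
~~~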
1274
1275\anchor jitting
1276### Just-in-time compilation: column type inference and explicit declaration of column types
1277C++ is a statically typed language: all types must be known at compile-time. This includes the types of the TTree
1278branches we want to work on. For filters, defined columns and some of the actions, **column types are deduced from the
1279signature** of the relevant filter function/temporary column expression/action function:
1280~~~{.cpp}
1281// here b1 is deduced to be `int` and b2 to be `double`
1282df.Filter([](int x, double y) { return x > 0 && y < 0.; }, {"b1", "b2"});
1283~~~
1284If we specify an incorrect type for one of the columns, an exception with an informative message will be thrown at
1285runtime, when the column value is actually read from the dataset: RDataFrame detects type mismatches. The same would
1286happen if we swapped the order of "b1" and "b2" in the column list passed to Filter().
1287
1288Certain actions, on the other hand, do not take a function as argument (e.g. Histo1D()), so we cannot deduce the type of
1289the column at compile-time. In this case **RDataFrame infers the type of the column** from the TTree itself. This
1290is why we did not need to specify the column types for any of the actions in the above snippets.
1291
1292When the column type is not a common one such as `int`, `double`, `char` or `float`, it is nonetheless good practice to
1293specify it as a template parameter to the action itself, like this:
1294~~~{.cpp}
1295df.Histo1D("b1"); // OK, the type of "b1" is deduced at runtime
1296df.Min<MyNumber_t>("myObject"); // OK, "myObject" is deduced to be of type `MyNumber_t`
1297~~~
1298
1299Deducing types at runtime requires the just-in-time compilation of the relevant actions, which has a small runtime
1300overhead, so specifying the type of the columns as template parameters to the action is good practice when performance is a goal.
1301
1302When strings are passed as expressions to Filter() or Define(), fundamental types are passed as constants. This avoids certain common mistakes such as typing `x = 0` rather than `x == 0`:
1303
1304~~~{.cpp}
1305// this throws an error (note the typo)
1306df.Define("x", "0").Filter("x = 0");
1307~~~
1308
1309\anchor generic-actions
1310### User-defined custom actions
1311RDataFrame strives to offer a comprehensive set of standard actions that can be performed on each event. At the same
1312time, it allows users to inject their own action code to perform arbitrarily complex data reductions.
1313
1314#### Implementing custom actions with Book()
1315
1316Through the Book() method, users can implement a custom action and have access to the same features
1317that built-in RDataFrame actions have, e.g. hooks to events related to the start, end and execution of the
1318event loop, or the possibility to return a lazy RResultPtr to an arbitrary type of result:
1319
1320~~~{.cpp}
1321#include <ROOT/RDataFrame.hxx>
1322#include <memory>
#include <iostream> // for std::cout in main()
#include <numeric>  // for std::accumulate in Finalize()
1323
1324class MyCounter : public ROOT::Detail::RDF::RActionImpl<MyCounter> {
1325 std::shared_ptr<int> fFinalResult = std::make_shared<int>(0);
1326 std::vector<int> fPerThreadResults;
1327
1328public:
1329 // We use a public type alias to advertise the type of the result of this action
1330 using Result_t = int;
1331
1332 MyCounter(unsigned int nSlots) : fPerThreadResults(nSlots) {}
1333
1334 // Called before the event loop to retrieve the address of the result that will be filled/generated.
1335 std::shared_ptr<int> GetResultPtr() const { return fFinalResult; }
1336
1337 // Called at the beginning of the event loop.
1338 void Initialize() {}
1339
1340 // Called at the beginning of each processing task.
1341 void InitTask(TTreeReader *, unsigned int) {}
1342
1343 /// Called at every entry.
1344 void Exec(unsigned int slot)
1345 {
1346 fPerThreadResults[slot]++;
1347 }
1348
1349 // Called at the end of the event loop.
1350 void Finalize()
1351 {
1352 *fFinalResult = std::accumulate(fPerThreadResults.begin(), fPerThreadResults.end(), 0);
1353 }
1354
1355 // Called by RDataFrame to retrieve the name of this action.
1356 std::string GetActionName() const { return "MyCounter"; }
1357};
1358
1359int main() {
1360 ROOT::RDataFrame df(10);
1361 ROOT::RDF::RResultPtr<int> resultPtr = df.Book<>(MyCounter{df.GetNSlots()}, {});
1362 // The GetValue call triggers the event loop
1363 std::cout << "Number of processed entries: " << resultPtr.GetValue() << std::endl;
1364}
1365~~~
1366
1367See the Book() method for more information and [this tutorial](https://root.cern/doc/master/df018__customActions_8C.html)
1368for a more complete example.
1369
1370#### Injecting arbitrary code in the event loop with Foreach() and ForeachSlot()
1371
1372Foreach() takes a callable (lambda expression, free function, functor...) and a list of columns and
1373executes the callable on the values of those columns for each event that passes all upstream selections.
1374It can be used to perform actions that are not already available in the interface. For example, the following snippet
1375evaluates the root mean square of column "x":
1376~~~{.cpp}
1377// Single-thread evaluation of RMS of column "x" using Foreach
1378double sumSq = 0.;
1379unsigned int n = 0;
1380df.Foreach([&sumSq, &n](double x) { ++n; sumSq += x*x; }, {"x"});
1381std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1382~~~
1383In multi-thread runs, users are responsible for the thread-safety of the expression passed to Foreach():
1384each thread will execute the expression concurrently.
1385The code above would need to employ some resource protection mechanism to ensure non-concurrent writing of `sumSq` and `n`; but
1386this is probably too much head-scratching for such a simple operation.
1387
1388ForeachSlot() can help in this situation. It is an alternative version of Foreach() for which the function takes an
1389additional "processing slot" parameter besides the columns it should be applied to. RDataFrame
1390guarantees that ForeachSlot() will invoke the user expression with different `slot` parameters for different concurrent
1391executions (see [Special helper columns: rdfentry_ and rdfslot_](\ref helper-cols) for more information on the slot parameter).
1392We can take advantage of ForeachSlot() to evaluate a thread-safe root mean square of column "x":
1393~~~{.cpp}
1394// Thread-safe evaluation of RMS of column "x" using ForeachSlot
1395ROOT::EnableImplicitMT();
1396const unsigned int nSlots = df.GetNSlots();
1397std::vector<double> sumSqs(nSlots, 0.);
1398std::vector<unsigned int> ns(nSlots, 0);
1399
1400df.ForeachSlot([&sumSqs, &ns](unsigned int slot, double x) { sumSqs[slot] += x*x; ns[slot] += 1; }, {"x"});
1401double sumSq = std::accumulate(sumSqs.begin(), sumSqs.end(), 0.); // sum all squares
1402unsigned int n = std::accumulate(ns.begin(), ns.end(), 0); // sum all counts
1403std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1404~~~
1405Notice how we created one accumulator per processing slot and later merged the per-slot results via `std::accumulate`.
1406
1407
1408\anchor friends
1409### Dataset joins with friend trees
1410
1411Vertically concatenating multiple trees that have the same columns (creating a logical dataset with the same columns and
1412more rows) is trivial in RDataFrame: just pass the tree name and a list of file names to RDataFrame's constructor, or create a TChain
1413out of the desired trees and pass that to RDataFrame.
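
For example (a sketch; "events", "run1.root" and "run2.root" are placeholder names), the following two dataframes read the same vertically-concatenated dataset:
~~~{.cpp}
// via the constructor taking a list of files or file globs
ROOT::RDataFrame df1("events", {"run1.root", "run2.root"});

// via an explicitly constructed TChain
TChain chain("events");
chain.Add("run1.root");
chain.Add("run2.root");
ROOT::RDataFrame df2(chain);
~~~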
1414
1415Horizontal concatenations of trees or chains (creating a logical dataset with the same number of rows and the union of the
1416columns of multiple trees) leverage TTree's "friend" mechanism.
1417
1418Simple joins of trees that do not have the same number of rows are also possible with indexed friend trees (see below).
1419
1420To use friend trees in RDataFrame, set up trees with the appropriate relationships and then instantiate an RDataFrame
1421with the main tree:
1422
1423~~~{.cpp}
1424TTree mainTree([...]);
1425TTree friendTree([...]);
1426mainTree.AddFriend(&friendTree, "myFriend");
1427
1428RDataFrame df(mainTree);
1429auto df2 = df.Filter("myFriend.MyCol == 42");
1430~~~
1431
1432The same applies for TChains. Columns coming from the friend trees can be referred to by their full name, like in the example above,
1433or the friend tree name can be omitted in case the column name is not ambiguous (e.g. "MyCol" could be used instead of
1434"myFriend.MyCol" in the example above if there is no column "MyCol" in the main tree).
1435
1436\note A common source of confusion is that trees that are written out from a multi-thread Snapshot() call will have their
1437 entries (block-wise) shuffled with respect to the original tree. Such trees cannot be used as friends of the original
1438 one: rows will be mismatched.
1439
1440Indexed friend trees provide a way to perform simple joins of multiple trees over a common column.
1441When a certain entry in the main tree (or chain) is loaded, the friend trees (or chains) will then load an entry where the
1442"index" columns have a value identical to the one in the main one. For example, in Python:
1443
1444~~~{.py}
1445main_tree = ...
1446aux_tree = ...
1447
1448# If a friend tree has an index on `commonColumn`, when the main tree loads
1449# a given row, it also loads the row of the friend tree that has the same
1450# value of `commonColumn`
1451aux_tree.BuildIndex("commonColumn")
1452
1453main_tree.AddFriend(aux_tree)
1454
1455df = ROOT.RDataFrame(main_tree)
1456~~~
1457
1458RDataFrame supports indexed friend TTrees from ROOT v6.24 in single-thread mode and from v6.28/02 in multi-thread mode.
1459
1460\anchor other-file-formats
1461### Reading data formats other than ROOT trees
1462RDataFrame can be interfaced with RDataSources. The ROOT::RDF::RDataSource interface defines an API that RDataFrame can use to read arbitrary columnar data formats.
1463
1464RDataFrame calls into concrete RDataSource implementations to retrieve information about the data, retrieve (thread-local) readers or "cursors" for selected columns
1465and to advance the readers to the desired data entry.
1466Some predefined RDataSources are natively provided by ROOT, such as ROOT::RDF::RCsvDS, which allows reading comma-separated files:
1467~~~{.cpp}
1468auto tdf = ROOT::RDF::FromCSV("MuRun2010B.csv");
1469auto filteredEvents =
1470 tdf.Filter("Q1 * Q2 == -1")
1471 .Define("m", "sqrt(pow(E1 + E2, 2) - (pow(px1 + px2, 2) + pow(py1 + py2, 2) + pow(pz1 + pz2, 2)))");
1472auto h = filteredEvents.Histo1D("m");
1473h->Draw();
1474~~~
1475
1476See also FromNumpy (Python-only), FromRNTuple(), FromArrow(), FromSqlite().
1477
1478\anchor callgraphs
1479### Computation graphs (storing and reusing sets of transformations)
1480
1481As we saw, transformed dataframes can be stored as variables and reused multiple times to create modified versions of the dataset. This implicitly defines a **computation graph** in which
1482several paths of filtering/creation of columns are executed simultaneously, and finally aggregated results are produced.
1483
1484RDataFrame detects when several actions use the same filter or the same defined column, and **only evaluates each
1485filter or defined column once per event**, regardless of how many times that result is used down the computation graph.
1486Objects read from each column are **built once and never copied**, for maximum efficiency.
1487When "upstream" filters are not passed, subsequent filters, temporary column expressions and actions are not evaluated,
1488so it might be advisable to put the strictest filters first in the graph.
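
For example (a sketch; the column names are placeholders), the filter below is evaluated only once per event even though two histograms depend on it:
~~~{.cpp}
auto selected = df.Filter("pt > 30");
auto h_pt  = selected.Histo1D("pt");   // both actions share the same upstream filter,
auto h_eta = selected.Histo1D("eta");  // which is evaluated only once per event
~~~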
1489
1490\anchor representgraph
1491### Visualizing the computation graph
1492It is possible to print the computation graph from any node to obtain a [DOT (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) representation either on the standard output
1493or in a file.
1494
1495Invoking ROOT::RDF::SaveGraph() on any node that is not the head node prints the computation graph of the branch
1496that node belongs to. Invoking it on the head node prints the entire computation graph.
1497
1498An example of usage follows:
1499~~~{.cpp}
1500// First, a sample computational graph is built
1501ROOT::RDataFrame df("tree", "f.root");
1502
1503auto df2 = df.Define("x", []() { return 1; })
1504 .Filter("col0 % 1 == col0")
1505 .Filter([](int b1) { return b1 < 2; }, {"cut1"})
1506 .Define("y", []() { return 1; });
1507
1508auto count = df2.Count();
1509
1510// Prints the graph to the mydot.dot file in the current directory
1511ROOT::RDF::SaveGraph(df, "./mydot.dot");
1512// Prints the graph to standard output
1513ROOT::RDF::SaveGraph(df);
1514~~~
1515
1516The generated graph can be rendered using one of the graphviz filters, e.g. `dot`. For instance, the image below can be generated with the following command:
1517~~~{.sh}
1518$ dot -Tpng computation_graph.dot -ocomputation_graph.png
1519~~~
1520
1521\image html RDF_Graph2.png
1522
1523\anchor rdf-logging
1524### Activating RDataFrame execution logs
1525
1526RDataFrame has experimental support for verbose logging of the event loop runtimes and other interesting related information. It is activated as follows:
1527~~~{.cpp}
1528#include <ROOT/RLogger.hxx>
1529
1530// this increases RDF's verbosity level as long as the `verbosity` variable is in scope
1531auto verbosity = ROOT::Experimental::RLogScopedVerbosity(ROOT::Detail::RDF::RDFLogChannel(), ROOT::Experimental::ELogLevel::kInfo);
1532~~~
1533
1534or in Python:
1535~~~{.python}
1536import ROOT
1537
1538verbosity = ROOT.Experimental.RLogScopedVerbosity(ROOT.Detail.RDF.RDFLogChannel(), ROOT.Experimental.ELogLevel.kInfo)
1539~~~
1540
1541More information (e.g. start and end of each multi-thread task) is printed using `ELogLevel.kDebug` and even more
1542(e.g. a full dump of the generated code that RDataFrame just-in-time-compiles) using `ELogLevel.kDebug+10`.
1543
1544\anchor rdf-from-spec
1545### Creating an RDataFrame from a dataset specification file
1546
1547RDataFrame can be created using a dataset specification JSON file:
1548
1549~~~{.python}
1550import ROOT
1551
1552df = ROOT.RDF.Experimental.FromSpec("spec.json")
1553~~~
1554
1555The input dataset specification JSON file needs to be provided by the user and it describes all necessary samples and
1556their associated metadata information. The main key, "samples", is required and at least one sample is needed. The
1557required sub-keys for each sample are "trees" and "files". Additionally, one can specify a metadata dictionary for each
1558sample in the "metadata" key.
1559
1560A simple example for the formatting of the specification in the JSON file is the following:
1561
1562~~~{.cpp}
1563{
1564 "samples": {
1565 "sampleA": {
1566 "trees": ["tree1", "tree2"],
1567 "files": ["file1.root", "file2.root"],
1568 "metadata": {
1569 "lumi": 10000.0,
1570 "xsec": 1.0,
1571 "sample_category": "data"
1572 }
1573 },
1574 "sampleB": {
1575 "trees": ["tree3", "tree4"],
1576 "files": ["file3.root", "file4.root"],
1577 "metadata": {
1578 "lumi": 0.5,
1579 "xsec": 1.5,
1580 "sample_category": "MC_background"
1581 }
1582 }
1583 }
1584}
1585~~~
1586
1587The metadata information from the specification file can then be accessed using the DefinePerSample function.
1588For example, to access luminosity information (stored as a double):
1589
1590~~~{.python}
1591df.DefinePerSample("lumi", 'rdfsampleinfo_.GetD("lumi")')
1592~~~
1593
1594or sample_category information (stored as a string):
1595
1596~~~{.python}
1597df.DefinePerSample("sample_category", 'rdfsampleinfo_.GetS("sample_category")')
1598~~~
1599
1600or directly the filename:
1601
1602~~~{.python}
1603df.DefinePerSample("name", "rdfsampleinfo_.GetSampleName()")
1604~~~
1605
1606An example usage of the "FromSpec" method is available in the tutorial df106_HiggstoFourLeptons.py, which also
1607provides a corresponding example JSON file for the dataset specification.
1608
1609\anchor progressbar
1610### Adding a progress bar
1611
1612A progress bar showing the processed event statistics can be added to any RDataFrame program.
1613The event statistics include elapsed time, currently processed file, currently processed events, the rate of event processing
1614and an estimated remaining time (per file being processed). It is recorded and printed in the terminal every m events and every
1615n seconds (by default m = 1000 and n = 1). The ProgressBar can also be added when multi-thread (MT) mode is enabled.
1616
1617ProgressBar is added after creating the dataframe object (df):
1618~~~{.cpp}
1619ROOT::RDataFrame df("tree", "file.root");
1620ROOT::RDF::Experimental::AddProgressBar(df);
1621~~~
1622
1623Alternatively, RDataFrame can be cast to an RNode first, giving the user more flexibility.
1624For example, AddProgressBar can then be called on any computational node, such as a Filter or Define node, not only on the head node,
1625with no change to the ProgressBar function itself (please see the [Python interface](classROOT_1_1RDataFrame.html#python)
1626section for appropriate usage in Python):
1627~~~{.cpp}
1628ROOT::RDataFrame df("tree", "file.root");
1629auto df_1 = ROOT::RDF::RNode(df.Filter("x>1"));
1630ROOT::RDF::Experimental::AddProgressBar(df_1);
1631~~~
1632Examples of implemented progress bars can be seen by running [Higgs to Four Lepton tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8py_source.html) and [Dimuon tutorial](https://root.cern/doc/master/df102__NanoAODDimuonAnalysis_8C.html).
1633
1634\anchor missing-values
1635### Working with missing values in the dataset
1636
1637In certain situations a dataset might be missing one or more values at one or
1638more of its entries. For example:
1639
1640- If the dataset is composed of multiple files and one or more of them are
1641  missing one or more columns required by the analysis.
1642- When joining different datasets horizontally according to some index value
1643 (e.g. the event number), if the index does not find a match in one or more
1644 other datasets for a certain entry.
1645
1646For example, suppose that column "y" does not have a value for entry 42:
1647
1648\code{.unparsed}
1649+-------+---+---+
1650| Entry | x | y |
1651+-------+---+---+
1652| 42 | 1 | |
1653+-------+---+---+
1654\endcode
1655
1656If the RDataFrame application reads that column, for example because a Take() action
1657was requested, the default behaviour is to throw an exception indicating
1658that the column has a missing value for that entry.
1659
1660The following paragraphs discuss the functionalities provided by RDataFrame to
1661work with missing values in the dataset.
1662
1663#### FilterAvailable and FilterMissing
1664
1665FilterAvailable and FilterMissing are specialized RDataFrame Filter operations.
1666They take as input argument the name of a column of the dataset to watch for
1667missing values. Like Filter, they will either keep or discard an entire entry
1668based on whether a condition returns true or false. Specifically:
1669
1670- FilterAvailable: the condition is whether the value of the column is present.
1671 If so, the entry is kept. Otherwise if the value is missing the entry is
1672 discarded.
1673- FilterMissing: the condition is whether the value of the column is missing. If
1674 so, the entry is kept. Otherwise if the value is present the entry is
1675 discarded.
1676
1677\code{.py}
1678df = ROOT.RDataFrame(dataset)
1679
1680# Anytime an entry from "col" is missing, the entire entry will be filtered out
1681df_available = df.FilterAvailable("col")
1682df_available = df_available.Define("twice", "col * 2")
1683
1684# Conversely, if we want to select the entries for which the column has missing
1685# values, we do the following
1686df_missingcol = df.FilterMissing("col")
1687# Following operations in the same branch of the computation graph clearly
1688# cannot access that same column, since there would be no value to read
1689df_missingcol = df_missingcol.Define("observable", "othercolumn * 2")
1690\endcode
1691
1692\code{.cpp}
1693ROOT::RDataFrame df{dataset};
1694
1695// Anytime an entry from "col" is missing, the entire entry will be filtered out
1696auto df_available = df.FilterAvailable("col");
1697auto df_twicecol = df_available.Define("twice", "col * 2");
1698
1699// Conversely, if we want to select the entries for which the column has missing
1700// values, we do the following
1701auto df_missingcol = df.FilterMissing("col");
1702// Following operations in the same branch of the computation graph clearly
1703// cannot access that same column, since there would be no value to read
1704auto df_observable = df_missingcol.Define("observable", "othercolumn * 2");
1705\endcode
1706
1707#### DefaultValueFor
1708
1709DefaultValueFor creates a node of the computation graph that forwards the
1710values of the specified column to downstream nodes whenever they are
1711available. When a value of the input column is not
1712available, the node provides the default value passed to this function call
1713instead. Example:
1714
1715\code{.py}
1716df = ROOT.RDataFrame(dataset)
1717# Anytime an entry from "col" is missing, the value will be the default one
1718default_value = ... # Some sensible default value here
1719df = df.DefaultValueFor("col", default_value)
1720df = df.Define("twice", "col * 2")
1721\endcode
1722
1723\code{.cpp}
1724ROOT::RDataFrame df{dataset};
1725// Anytime an entry from "col" is missing, the value will be the default one
1726constexpr auto default_value = ... // Some sensible default value here
1727auto df_default = df.DefaultValueFor("col", default_value);
1728auto df_col = df_default.Define("twice", "col * 2");
1729\endcode
1730
1731#### Mixing different strategies to work with missing values in the same RDataFrame
1732
1733All the operations presented above only act on the particular branch of the
1734computation graph where they are called, so that different results can be
1735obtained by mixing and matching the filtering and default-value
1736strategies:
1737
1738\code{.py}
1739df = ROOT.RDataFrame(dataset)
1740# Anytime an entry from "col" is missing, the value will be the default one
1741default_value = ... # Some sensible default value here
1742df_default = df.DefaultValueFor("col", default_value).Define("twice", "col * 2")
1743df_filtered = df.FilterAvailable("col").Define("twice", "col * 2")
1744
1745# Same number of total entries as the input dataset, with defaulted values
1746df_default.Display(["twice"]).Print()
1747# Only keep the entries where "col" has values
1748df_filtered.Display(["twice"]).Print()
1749\endcode
1750
1751\code{.cpp}
1752ROOT::RDataFrame df{dataset};
1753
1754// Anytime an entry from "col" is missing, the value will be the default one
1755constexpr auto default_value = ... // Some sensible default value here
1756auto df_default = df.DefaultValueFor("col", default_value).Define("twice", "col * 2");
1757auto df_filtered = df.FilterAvailable("col").Define("twice", "col * 2");
1758
1759// Same number of total entries as the input dataset, with defaulted values
1760df_default.Display({"twice"})->Print();
1761// Only keep the entries where "col" has values
1762df_filtered.Display({"twice"})->Print();
1763\endcode
1764
1765#### Further considerations
1766
1767Note that working with missing values is currently only supported with a TTree-based
1768data source. Support of this functionality for other data sources may come in
1769the future.
1770
1771*/
1772// clang-format on
1773
1774namespace ROOT {
1775
1777using ColumnNamesPtr_t = std::shared_ptr<const ColumnNames_t>;
1778
1779////////////////////////////////////////////////////////////////////////////
1780/// \brief Build the dataframe.
1781/// \param[in] treeName Name of the tree contained in the directory
1782/// \param[in] dirPtr TDirectory where the tree is stored, e.g. a TFile.
1783/// \param[in] defaultColumns Collection of default columns.
1784///
1785/// The default columns are looked at in case no column is specified in the
1786/// booking of actions or transformations.
1787/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
1788RDataFrame::RDataFrame(std::string_view treeName, TDirectory *dirPtr, const ColumnNames_t &defaultColumns)
1789 : RInterface(std::make_shared<RDFDetail::RLoopManager>(nullptr, defaultColumns))
1790{
1791 if (!dirPtr) {
1792 auto msg = "Invalid TDirectory!";
1793 throw std::runtime_error(msg);
1794 }
1795 const std::string treeNameInt(treeName);
1796 auto tree = static_cast<TTree *>(dirPtr->Get(treeNameInt.c_str()));
1797 if (!tree) {
1798 auto msg = "Tree \"" + treeNameInt + "\" cannot be found!";
1799 throw std::runtime_error(msg);
1800 }
1801 GetProxiedPtr()->SetTree(std::shared_ptr<TTree>(tree, [](TTree *) {}));
1802}
1803
1804////////////////////////////////////////////////////////////////////////////
1805/// \brief Build the dataframe.
1806/// \param[in] treeName Name of the tree to be processed.
1807/// \param[in] fileNameGlob Expression matching one or more input files, with the syntax supported by TChain::Add().
1808/// \param[in] defaultColumns Collection of default columns.
1809///
1810/// The filename glob supports the same type of expressions as TChain::Add(), and it is passed as-is to TChain's
1811/// constructor.
1812///
1813/// The default columns are looked at in case no column is specified in the
1814/// booking of actions or transformations.
1815/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
1816#ifdef R__HAS_ROOT7
1817RDataFrame::RDataFrame(std::string_view treeName, std::string_view fileNameGlob, const ColumnNames_t &defaultColumns)
1818 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(treeName, fileNameGlob, defaultColumns))
1819{
1820}
1821#else
1822RDataFrame::RDataFrame(std::string_view treeName, std::string_view fileNameGlob, const ColumnNames_t &defaultColumns)
1823 : RInterface(ROOT::Detail::RDF::CreateLMFromTTree(treeName, fileNameGlob, defaultColumns))
1824{
1825}
1826#endif
1827
1828////////////////////////////////////////////////////////////////////////////
1829/// \brief Build the dataframe.
1830/// \param[in] datasetName Name of the dataset (tree) to be processed.
1831/// \param[in] fileNameGlobs Collection of file names or filename globs.
1832/// \param[in] defaultColumns Collection of default columns.
1833///
1834/// The filename globs support the same type of expressions as TChain::Add(), and each glob is passed as-is
1835/// to TChain's constructor.
1836///
1837/// The default columns are looked at in case no column is specified in the booking of actions or transformations.
1838/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
1839#ifdef R__HAS_ROOT7
1840RDataFrame::RDataFrame(std::string_view datasetName, const std::vector<std::string> &fileNameGlobs,
1841 const ColumnNames_t &defaultColumns)
1842 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(datasetName, fileNameGlobs, defaultColumns))
1843{
1844}
1845#else
1846RDataFrame::RDataFrame(std::string_view datasetName, const std::vector<std::string> &fileNameGlobs,
1847 const ColumnNames_t &defaultColumns)
1848 : RInterface(ROOT::Detail::RDF::CreateLMFromTTree(datasetName, fileNameGlobs, defaultColumns))
1849{
1850}
1851#endif
1852
1853////////////////////////////////////////////////////////////////////////////
1854/// \brief Build the dataframe.
1855/// \param[in] tree The tree or chain to be studied.
1856/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
1857///
1858/// The default columns are looked at in case no column is specified in the
1859/// booking of actions or transformations.
1860/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
1861RDataFrame::RDataFrame(TTree &tree, const ColumnNames_t &defaultColumns)
1862 : RInterface(std::make_shared<RDFDetail::RLoopManager>(&tree, defaultColumns))
1863{
1864}
1865
1866//////////////////////////////////////////////////////////////////////////
1867/// \brief Build a dataframe that generates numEntries entries.
1868/// \param[in] numEntries The number of entries to generate.
1869///
1870/// An empty-source dataframe constructed with a number of entries will
1871/// generate those entries on the fly when some action is triggered,
1872/// and it will do so for all the previously-defined columns.
1873/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
1874RDataFrame::RDataFrame(ULong64_t numEntries)
1875 : RInterface(std::make_shared<RDFDetail::RLoopManager>(numEntries))
1876
1877{
1878}
1879
1880//////////////////////////////////////////////////////////////////////////
1881/// \brief Build dataframe associated to data source.
1882/// \param[in] ds The data source object.
1883/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
1884///
1885/// A dataframe associated to a data source will query it to access column values.
1886/// \note see ROOT::RDF::RInterface for the documentation of the methods available.
1887RDataFrame::RDataFrame(std::unique_ptr<ROOT::RDF::RDataSource> ds, const ColumnNames_t &defaultColumns)
1888 : RInterface(std::make_shared<RDFDetail::RLoopManager>(std::move(ds), defaultColumns))
1889{
1890}
1891
1892//////////////////////////////////////////////////////////////////////////
1893/// \brief Build dataframe from an RDatasetSpec object.
1894/// \param[in] spec The dataset specification object.
1895///
1896/// A dataset specification includes trees and file names,
1897/// as well as an optional friend list and/or entry range.
1898///
1899/// ### Example usage from Python:
1900/// ~~~{.py}
1901/// spec = (
1902/// ROOT.RDF.Experimental.RDatasetSpec()
1903/// .AddSample(("data", "tree", "file.root"))
1904/// .WithGlobalFriends("friendTree", "friend.root", "alias")
1905/// .WithGlobalRange((100, 200))
1906/// )
1907/// df = ROOT.RDataFrame(spec)
1908/// ~~~
1909///
1910/// See also ROOT::RDataFrame::FromSpec().
1911RDataFrame::RDataFrame(ROOT::RDF::Experimental::RDatasetSpec spec)
1912 : RInterface(std::make_shared<RDFDetail::RLoopManager>(std::move(spec)))
1913{
1914}
1915
1916RDataFrame::~RDataFrame()
1917{
1918 // If any node of the computation graph associated with this RDataFrame
1919 // declared code to jit, we need to make sure the compilation actually
1920 // happens. For example, a jitted Define could have been booked but
1921 // if the computation graph is not actually run then the code of the
1922 // Define node is not jitted. This in turn would cause memory leaks.
1923 // See https://github.com/root-project/root/issues/15399
1924 fLoopManager->Jit();
1925}
1926
1927namespace RDF {
1928namespace Experimental {
1929
1930////////////////////////////////////////////////////////////////////////////
1931/// \brief Create the RDataFrame from the dataset specification file.
1932/// \param[in] jsonFile Path to the dataset specification JSON file.
1933///
1934/// The input dataset specification JSON file must include a number of keys that
1935/// describe all the necessary samples and their associated metadata information.
1936/// The main key, "samples", is required and at least one sample is needed. Each
1937/// sample must have at least one key "trees" and at least one key "files" from
1938/// which the data is read. Optionally, one or more metadata information can be
1939/// added, as well as the friend list information.
1940///
1941/// ### Example specification file JSON:
1942/// The following is an example of the dataset specification JSON file formatting:
1943///~~~{.cpp}
1944/// {
1945/// "samples": {
1946/// "sampleA": {
1947/// "trees": ["tree1", "tree2"],
1948/// "files": ["file1.root", "file2.root"],
1949/// "metadata": {"lumi": 1.0, }
1950/// },
1951/// "sampleB": {
1952/// "trees": ["tree3", "tree4"],
1953/// "files": ["file3.root", "file4.root"],
1954/// "metadata": {"lumi": 0.5, }
1955/// },
1956/// ...
1957/// },
1958/// }
1959///~~~
1960ROOT::RDataFrame FromSpec(const std::string &jsonFile)
1961{
1962 const nlohmann::ordered_json fullData = nlohmann::ordered_json::parse(std::ifstream(jsonFile));
1963 if (!fullData.contains("samples") || fullData["samples"].empty()) {
1964 throw std::runtime_error(
1965 R"(The input specification does not contain any samples. Please provide the samples in the specification like:
1966{
1967 "samples": {
1968 "sampleA": {
1969 "trees": ["tree1", "tree2"],
1970 "files": ["file1.root", "file2.root"],
1971 "metadata": {"lumi": 1.0, }
1972 },
1973 "sampleB": {
1974 "trees": ["tree3", "tree4"],
1975 "files": ["file3.root", "file4.root"],
1976 "metadata": {"lumi": 0.5, }
1977 },
1978 ...
1979 },
1980})");
1981 }
1982
1983 RDatasetSpec spec;
1984 for (const auto &keyValue : fullData["samples"].items()) {
1985 const std::string &sampleName = keyValue.key();
1986 const auto &sample = keyValue.value();
1987 // TODO: if requested in https://github.com/root-project/root/issues/11624
1988 // allow union-like types for trees and files, see: https://github.com/nlohmann/json/discussions/3815
1989 if (!sample.contains("trees")) {
1990 throw std::runtime_error("A list of tree names must be provided for sample " + sampleName + ".");
1991 }
1992 std::vector<std::string> trees = sample["trees"];
1993 if (!sample.contains("files")) {
1994 throw std::runtime_error("A list of files must be provided for sample " + sampleName + ".");
1995 }
1996 std::vector<std::string> files = sample["files"];
1997 if (!sample.contains("metadata")) {
1998 spec.AddSample(RSample{sampleName, trees, files});
1999 } else {
2000 RMetaData m;
2001 for (const auto &metadata : sample["metadata"].items()) {
2002 const auto &val = metadata.value();
2003 if (val.is_string())
2004 m.Add(metadata.key(), val.get<std::string>());
2005 else if (val.is_number_integer())
2006 m.Add(metadata.key(), val.get<int>());
2007 else if (val.is_number_float())
2008 m.Add(metadata.key(), val.get<double>());
2009 else
2010 throw std::logic_error("The metadata keys can only be of type [string|int|double].");
2011 }
2012 spec.AddSample(RSample{sampleName, trees, files, m});
2013 }
2014 }
2015 if (fullData.contains("friends")) {
2016 for (const auto &friends : fullData["friends"].items()) {
2017 std::string alias = friends.key();
2018 std::vector<std::string> trees = friends.value()["trees"];
2019 std::vector<std::string> files = friends.value()["files"];
2020 if (files.size() != trees.size() && trees.size() > 1)
2021 throw std::runtime_error("Mismatch between trees and files in a friend.");
2022 spec.WithGlobalFriends(trees, files, alias);
2023 }
2024 }
2025
2026 if (fullData.contains("range")) {
2027 std::vector<int> range = fullData["range"];
2028
2029 if (range.size() == 1)
2030 spec.WithGlobalRange({range[0]});
2031 else if (range.size() == 2)
2032 spec.WithGlobalRange({range[0], range[1]});
2033 }
2034 return ROOT::RDataFrame(spec);
2035}
2036
2037} // namespace Experimental
2038} // namespace RDF
2039
2040} // namespace ROOT
2041
2042namespace cling {
2043//////////////////////////////////////////////////////////////////////////
2044/// Print an RDataFrame at the prompt
2045std::string printValue(ROOT::RDataFrame *df)
2046{
2047 // The loop manager is never null, except when its construction failed.
2048 // This can happen e.g. if the constructor of RLoopManager that expects
2049 // a file name is used and that file doesn't exist. This point is usually
2050 // not even reached in that situation, since the exception thrown by the
2051 // constructor will also stop execution of the program. But it can still
2052 // be reached at the prompt, if the user tries to print the RDataFrame
2053 // variable after an incomplete initialization.
2054 auto *lm = df->GetLoopManager();
2055 if (!lm) {
2056 throw std::runtime_error("Cannot print information about this RDataFrame, "
2057 "it was not properly created. It must be discarded.");
2058 }
2059 auto *tree = lm->GetTree();
2060 auto defCols = lm->GetDefaultColumnNames();
2061
2062 std::ostringstream ret;
2063 if (tree) {
2064 ret << "A data frame built on top of the " << tree->GetName() << " dataset.";
2065 if (!defCols.empty()) {
2066 if (defCols.size() == 1)
2067 ret << "\nDefault column: " << defCols[0];
2068 else {
2069 ret << "\nDefault columns:\n";
2070 for (auto &&col : defCols) {
2071 ret << " - " << col << "\n";
2072 }
2073 }
2074 }
2075 } else if (auto ds = df->fDataSource) {
2076 ret << "A data frame associated to the data source \"" << cling::printValue(ds) << "\"";
2077 } else {
2078 ret << "An empty data frame that will create " << lm->GetNEmptyEntries() << " entries\n";
2079 }
2080
2081 return ret.str();
2082}
2083} // namespace cling