This tutorial illustrates how you can conveniently apply BDTs in C++ using the fast tree inference engine offered by TMVA.
The supported workflows are event-by-event inference, batch inference on multiple events, and inference as part of an RDataFrame pipeline.
#include "TMVA/RBDT.hxx"
#include "TMVA/RTensor.hxx"
#include "TMVA/RInferenceUtils.hxx"
#include "ROOT/RDataFrame.hxx"
#include <iostream>

using namespace TMVA::Experimental;

void tmva103_Application()
{
   // Load the model from the remote file; RBDT provides fast boosted decision tree inference.
   RBDT<> bdt("myBDT", "http://root.cern/files/tmva101.root");

   // Apply the model to a single input vector with four features.
   auto y1 = bdt.Compute({1.0, 2.0, 3.0, 4.0});
   std::cout << "Apply model on a single input vector: " << y1[0] << std::endl;

   // Apply the model to a batch of inputs. RTensor is a container with contiguous
   // memory and shape information; here it holds two events with four features each.
   float data[8] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
   RTensor<float> x(data, {2, 4});
   auto y2 = bdt.Compute(x);
   std::cout << "Apply model on an input tensor: " << y2 << std::endl;

   // Use the model in an RDataFrame pipeline. RDataFrame offers a high-level
   // interface for analyses of data stored in a TTree.
   ROOT::RDataFrame df("Events",
                       "root://eospublic.cern.ch//eos/root-eos/cms_opendata_2012_nanoaod/SMHiggsToZZTo4L.root");
   auto df2 = df.Filter("nMuon >= 2")
                .Filter("nElectron >= 2")
                .Define("Muon_pt_1", "Muon_pt[0]")
                .Define("Muon_pt_2", "Muon_pt[1]")
                .Define("Electron_pt_1", "Electron_pt[0]")
                .Define("Electron_pt_2", "Electron_pt[1]")
                .Define("y", Compute<4, float>(bdt),
                        {"Muon_pt_1", "Muon_pt_2", "Electron_pt_1", "Electron_pt_2"});
   std::cout << "Mean response on the signal sample: " << *df2.Mean("y") << std::endl;
}
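The batch call returns an RTensor holding one score per event. As a minimal, hypothetical sketch (not part of the original tutorial), the predictions can be read back element by element; it assumes that RBDT::Compute on an input of shape {2, 4} returns a tensor of shape {2, 1}, and the helper name inspect_batch_predictions is arbitrary:

#include "TMVA/RBDT.hxx"
#include "TMVA/RTensor.hxx"
#include <cstddef>
#include <iostream>

using namespace TMVA::Experimental;

// Hypothetical helper, not part of the tutorial: print each score of a batch prediction.
void inspect_batch_predictions()
{
   RBDT<> bdt("myBDT", "http://root.cern/files/tmva101.root");

   float data[8] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
   RTensor<float> x(data, {2, 4}); // two events with four features each
   auto y = bdt.Compute(x);

   // RTensor exposes its shape and supports multi-dimensional indexing.
   const auto shape = y.GetShape();
   for (std::size_t i = 0; i < shape[0]; i++) {
      std::cout << "Score for event " << i << ": " << y(i, 0) << std::endl;
   }
}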
Output of the macro:

Apply model on a single input vector: 0.0302787
Apply model on an input tensor: { { 0.0302787 } { 0.19114 } }
Mean response on the signal sample: 0.625916
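The RDataFrame pipeline can be taken further with standard RDataFrame actions. The following continuation is a hedged sketch, not part of the original tutorial: it assumes it is placed inside tmva103_Application() after df2 is defined, and the histogram binning as well as the output file name signal_with_bdt_response.root are arbitrary choices.

   // Continuation sketch: book a histogram of the BDT response and write the
   // selected events, together with the response, to a new file.
   auto hist = df2.Histo1D({"bdt_response", ";BDT response;Events", 50, 0., 1.}, "y");
   df2.Snapshot("Events", "signal_with_bdt_response.root",
                {"y", "Muon_pt_1", "Muon_pt_2", "Electron_pt_1", "Electron_pt_2"});
   std::cout << "Entries in the response histogram: " << hist->GetEntries() << std::endl;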
- Date: December 2018
- Author: Stefan Wunsch
Definition in file tmva103_Application.C.