MethodBDT.cxx
1// Author: Andreas Hoecker, Joerg Stelzer, Helge Voss, Kai Voss, Eckhard v. Toerne, Jan Therhaag
2
3/**********************************************************************************
4 * Project: TMVA - a Root-integrated toolkit for multivariate data analysis *
5 * Package: TMVA *
6 * Class : MethodBDT (BDT = Boosted Decision Trees) *
7 * *
8 * *
9 * Description: *
10 * Analysis of Boosted Decision Trees *
11 * *
12 * Authors (alphabetical): *
13 * Andreas Hoecker <Andreas.Hocker@cern.ch> - CERN, Switzerland *
14 * Helge Voss <Helge.Voss@cern.ch> - MPI-K Heidelberg, Germany *
15 * Kai Voss <Kai.Voss@cern.ch> - U. of Victoria, Canada *
16 * Doug Schouten <dschoute@sfu.ca> - Simon Fraser U., Canada *
17 * Jan Therhaag <jan.therhaag@cern.ch> - U. of Bonn, Germany *
18 * Eckhard v. Toerne <evt@uni-bonn.de> - U of Bonn, Germany *
19 * *
20 * Copyright (c) 2005-2011: *
21 * CERN, Switzerland *
22 * U. of Victoria, Canada *
23 * MPI-K Heidelberg, Germany *
24 * U. of Bonn, Germany *
25 * *
26 * Redistribution and use in source and binary forms, with or without *
27 * modification, are permitted according to the terms listed in LICENSE *
28 * (see tmva/doc/LICENSE) *
29 **********************************************************************************/
30
31/*! \class TMVA::MethodBDT
32\ingroup TMVA
33
34Analysis of Boosted Decision Trees
35
36Boosted decision trees have been successfully used in High Energy
37Physics analysis, for example by the MiniBooNE experiment
38(Yang-Roe-Zhu, physics/0508045). In Boosted Decision Trees, the
39selection is based on a majority vote over the results of several decision
40trees, which are all derived from the same training sample by
41supplying different event weights during the training.
42
43### Decision trees:
44
45Successive decision nodes are used to categorize the
46events out of the sample as either signal or background. Each node
47uses only a single discriminating variable to decide if the event is
48signal-like ("goes right") or background-like ("goes left"). This
49forms a tree-like structure with "baskets" at the end (leaf nodes),
50and an event is classified as either signal or background according to
51whether the basket where it ends up has been classified signal or
52background during the training. Training of a decision tree is the
53process of defining the "cut criteria" for each node. The training
54starts with the root node. Here one takes the full training event
55sample and selects the variable and corresponding cut value that gives
56the best separation between signal and background at this stage. Using
57this cut criterion, the sample is then divided into two subsamples, a
58signal-like (right) and a background-like (left) sample. Two new nodes
59are then created for each of the two sub-samples and they are
60constructed using the same mechanism as described for the root
61node. The division is stopped once a certain node has reached either a
62minimum number of events, or a minimum or maximum signal purity. These
63leaf nodes are then called "signal" or "background" if they contain
64more signal or background events, respectively, from the training sample.
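
As a minimal, illustrative sketch (not the TMVA implementation, and with made-up
names such as `Ev` and `BestCut`), a single node split on one variable can be found by
scanning a grid of candidate cut values and keeping the cut with the largest decrease
of the Gini index p(1-p), with p = S/(S+B):

~~~ {.cpp}
#include <limits>
#include <vector>

struct Ev { double x; bool isSignal; double w; };   // one event: variable value, class, weight

// node impurity, weighted by the node population
double Gini(double s, double b) {
   double n = s + b;
   return (n > 0) ? n * (s / n) * (1. - s / n) : 0.;
}

// scan nCuts equidistant cut values in [xMin, xMax] and return the best one
double BestCut(const std::vector<Ev>& events, int nCuts, double xMin, double xMax) {
   double sTot = 0, bTot = 0;
   for (const auto& e : events) (e.isSignal ? sTot : bTot) += e.w;
   double bestGain = -std::numeric_limits<double>::max(), bestCut = xMin;
   for (int i = 1; i < nCuts; ++i) {
      double cut = xMin + i * (xMax - xMin) / nCuts;
      double sR = 0, bR = 0;                        // the "signal-like" (right) side
      for (const auto& e : events)
         if (e.x > cut) (e.isSignal ? sR : bR) += e.w;
      double gain = Gini(sTot, bTot) - Gini(sR, bR) - Gini(sTot - sR, bTot - bR);
      if (gain > bestGain) { bestGain = gain; bestCut = cut; }
   }
   return bestCut;
}
~~~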
65
66### Boosting:
67
68The idea behind adaptive boosting (AdaBoost) is that signal events
69from the training sample that end up in a background node
70(and vice versa) are given a larger weight than events that are in
71the correct leaf node. This results in a re-weighted training event
72sample, with which a new decision tree can then be developed.
73The boosting can be applied several times (typically 100-500 times)
74and one ends up with a set of decision trees (a forest).
75Gradient boosting works more like a function expansion approach, where
76each tree corresponds to a summand. The parameters of each summand (tree)
77are determined by the minimization of an error function (binomial log-
78likelihood for classification and Huber loss for regression).
79A greedy algorithm is used, which means that only one tree is modified
80at a time, while the other trees stay fixed.
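
As an illustration, one AdaBoost re-weighting step can be sketched as follows (a
stand-alone, simplified example with a made-up function name, not the code used
further below; `beta` corresponds to the AdaBoostBeta option and 0 < err < 1 is assumed):

~~~ {.cpp}
#include <cmath>
#include <vector>

// One AdaBoost step: events misclassified by the current tree get their weight
// multiplied by ((1-err)/err)^beta; afterwards the total weight is renormalised.
// The returned boost weight is also used to weight this tree's vote in the forest.
double AdaBoostStep(std::vector<double>& w, const std::vector<bool>& misclassified, double beta = 0.5)
{
   double sumW = 0, errW = 0;
   for (size_t i = 0; i < w.size(); ++i) {
      sumW += w[i];
      if (misclassified[i]) errW += w[i];
   }
   double err = errW / sumW;                                 // weighted misclassification rate
   double boostWeight = std::pow((1. - err) / err, beta);
   for (size_t i = 0; i < w.size(); ++i)
      if (misclassified[i]) w[i] *= boostWeight;
   double newSum = 0;
   for (double wi : w) newSum += wi;
   for (double& wi : w) wi *= sumW / newSum;                 // keep the sum of weights fixed
   return boostWeight;
}
~~~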
81
82### Bagging:
83
84In this particular variant of the Boosted Decision Trees the boosting
85is not done on the basis of previous training results, but by a simple
86stochastic re-sampling of the initial training event sample.
87
88### Random Trees:
89
90Similar to the "Random Forests" of Leo Breiman and Adele Cutler, this
91variant uses the bagging algorithm and bases the determination of the
92best node split during the training on a random subset of variables only,
93which is chosen individually for each split.
94
95### Analysis:
96
97Applying an individual decision tree to a test event results in a
98classification of the event as either signal or background. For the
99boosted decision tree selection, an event is successively subjected to
100the whole set of decision trees and depending on how often it is
101classified as signal, a "likelihood" estimator is constructed for the
102event being signal or background. The value of this estimator is the
103one which is then used to select the events from an event sample, and
104the cut value on this estimator defines the efficiency and purity of
105the selection.
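
Schematically, the forest response used as this estimator can be sketched as a weighted
majority vote (a simplified illustration with made-up names, not the exact code of this class):

~~~ {.cpp}
#include <vector>

// treeResponse[i] is +1 (signal-like) or -1 (background-like) for tree i,
// boostWeight[i] is its boost weight; the result lies in [-1,+1] and a cut
// on it defines the efficiency and purity of the selection.
double ForestResponse(const std::vector<double>& treeResponse,
                      const std::vector<double>& boostWeight)
{
   double sum = 0, norm = 0;
   for (size_t i = 0; i < treeResponse.size(); ++i) {
      sum  += boostWeight[i] * treeResponse[i];
      norm += boostWeight[i];
   }
   return (norm > 0) ? sum / norm : 0.;
}
~~~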
106
107*/
108
109
110#include "TMVA/MethodBDT.h"
111#include "TMVA/Config.h"
112
113#include "TMVA/BDTEventWrapper.h"
116#include "TMVA/Configurable.h"
117#include "TMVA/CrossEntropy.h"
118#include "TMVA/DecisionTree.h"
119#include "TMVA/DataSet.h"
120#include "TMVA/GiniIndex.h"
122#include "TMVA/Interval.h"
123#include "TMVA/IMethod.h"
124#include "TMVA/LogInterval.h"
125#include "TMVA/MethodBase.h"
127#include "TMVA/MsgLogger.h"
129#include "TMVA/PDF.h"
130#include "TMVA/Ranking.h"
131#include "TMVA/Results.h"
133#include "TMVA/SdivSqrtSplusB.h"
134#include "TMVA/SeparationBase.h"
135#include "TMVA/Timer.h"
136#include "TMVA/Tools.h"
137#include "TMVA/Types.h"
138
139#include "TRandom3.h"
140#include "TMath.h"
141#include "TMatrixTSym.h"
142#include "TGraph.h"
143
144#include <iostream>
145#include <iomanip>
146#include <algorithm>
147#include <cmath>
148#include <numeric>
149#include <unordered_map>
150
151using std::vector;
152using std::make_pair;
153
155
157
159
160////////////////////////////////////////////////////////////////////////////////
161/// The standard constructor for the "boosted decision trees".
162
163TMVA::MethodBDT::MethodBDT( const TString& jobName,
164 const TString& methodTitle,
165 DataSetInfo& theData,
166 const TString& theOption ) :
167 TMVA::MethodBase( jobName, Types::kBDT, methodTitle, theData, theOption)
168 , fTrainSample(0)
169 , fNTrees(0)
170 , fSigToBkgFraction(0)
171 , fAdaBoostBeta(0)
172// , fTransitionPoint(0)
173 , fShrinkage(0)
174 , fBaggedBoost(kFALSE)
175 , fBaggedGradBoost(kFALSE)
176// , fSumOfWeights(0)
177 , fMinNodeEvents(0)
178 , fMinNodeSize(5)
179 , fMinNodeSizeS("5%")
180 , fNCuts(0)
181 , fUseFisherCuts(0) // don't use this initialisation, only here to make Coverity happy. Is set in DeclareOptions()
182 , fMinLinCorrForFisher(.8) // don't use this initialisation, only here to make Coverity happy. Is set in DeclareOptions()
183 , fUseExclusiveVars(0) // don't use this initialisation, only here to make Coverity happy. Is set in DeclareOptions()
184 , fUseYesNoLeaf(kFALSE)
185 , fNodePurityLimit(0)
186 , fNNodesMax(0)
187 , fMaxDepth(0)
188 , fPruneMethod(DecisionTree::kNoPruning)
189 , fPruneStrength(0)
190 , fFValidationEvents(0)
191 , fAutomatic(kFALSE)
192 , fRandomisedTrees(kFALSE)
193 , fUseNvars(0)
194 , fUsePoissonNvars(0) // don't use this initialisation, only here to make Coverity happy. Is set in Init()
195 , fUseNTrainEvents(0)
196 , fBaggedSampleFraction(0)
197 , fNoNegWeightsInTraining(kFALSE)
198 , fInverseBoostNegWeights(kFALSE)
199 , fPairNegWeightsGlobal(kFALSE)
200 , fTrainWithNegWeights(kFALSE)
201 , fDoBoostMonitor(kFALSE)
202 , fITree(0)
203 , fBoostWeight(0)
204 , fErrorFraction(0)
205 , fCss(0)
206 , fCts_sb(0)
207 , fCtb_ss(0)
208 , fCbb(0)
209 , fDoPreselection(kFALSE)
210 , fSkipNormalization(kFALSE)
211 , fHistoricBool(kFALSE)
212{
213 fMonitorNtuple = NULL;
214 fSepType = NULL;
216}
217
218////////////////////////////////////////////////////////////////////////////////
219
220TMVA::MethodBDT::MethodBDT( DataSetInfo& theData,
221 const TString& theWeightFile)
222 : TMVA::MethodBase( Types::kBDT, theData, theWeightFile)
223 , fTrainSample(0)
224 , fNTrees(0)
225 , fSigToBkgFraction(0)
226 , fAdaBoostBeta(0)
227// , fTransitionPoint(0)
228 , fShrinkage(0)
229 , fBaggedBoost(kFALSE)
230 , fBaggedGradBoost(kFALSE)
231// , fSumOfWeights(0)
232 , fMinNodeEvents(0)
233 , fMinNodeSize(5)
234 , fMinNodeSizeS("5%")
235 , fNCuts(0)
236 , fUseFisherCuts(0) // don't use this initialisation, only here to make Coverity happy. Is set in DeclareOptions()
237 , fMinLinCorrForFisher(.8) // don't use this initialisation, only here to make Coverity happy. Is set in DeclareOptions()
238 , fUseExclusiveVars(0) // don't use this initialisation, only here to make Coverity happy. Is set in DeclareOptions()
239 , fUseYesNoLeaf(kFALSE)
240 , fNodePurityLimit(0)
241 , fNNodesMax(0)
242 , fMaxDepth(0)
243 , fPruneMethod(DecisionTree::kNoPruning)
244 , fPruneStrength(0)
245 , fFValidationEvents(0)
246 , fAutomatic(kFALSE)
247 , fRandomisedTrees(kFALSE)
248 , fUseNvars(0)
249 , fUsePoissonNvars(0) // don't use this initialisation, only here to make Coverity happy. Is set in Init()
250 , fUseNTrainEvents(0)
251 , fBaggedSampleFraction(0)
252 , fNoNegWeightsInTraining(kFALSE)
253 , fInverseBoostNegWeights(kFALSE)
254 , fPairNegWeightsGlobal(kFALSE)
255 , fTrainWithNegWeights(kFALSE)
256 , fDoBoostMonitor(kFALSE)
257 , fITree(0)
258 , fBoostWeight(0)
259 , fErrorFraction(0)
260 , fCss(0)
261 , fCts_sb(0)
262 , fCtb_ss(0)
263 , fCbb(0)
264 , fDoPreselection(kFALSE)
265 , fSkipNormalization(kFALSE)
266 , fHistoricBool(kFALSE)
267{
268 fMonitorNtuple = NULL;
269 fSepType = NULL;
271 // constructor for calculating BDT-MVA using previously generated decision trees
272 // the results of the previous training (the decision trees) are read in via the
273 // weight file. Make sure the variables correspond to the ones used in
274 // creating the "weight"-file
275}
276
277////////////////////////////////////////////////////////////////////////////////
278/// BDT can handle classification with multiple classes and regression with one regression-target.
279
280Bool_t TMVA::MethodBDT::HasAnalysisType( Types::EAnalysisType type, UInt_t numberClasses, UInt_t numberTargets )
281{
282 if (type == Types::kClassification && numberClasses == 2) return kTRUE;
283 if (type == Types::kMulticlass ) return kTRUE;
284 if( type == Types::kRegression && numberTargets == 1 ) return kTRUE;
285 return kFALSE;
286}
287
288////////////////////////////////////////////////////////////////////////////////
289/// Define the options (their key words) that can be set in the option string.
290///
291/// known options:
292///
293/// - nTrees number of trees in the forest to be created
294/// - BoostType the boosting type for the trees in the forest (AdaBoost etc.).
295/// Known:
296/// - AdaBoost
297/// - AdaBoostR2 (Adaboost for regression)
298/// - Bagging
299/// - GradBoost
300/// - AdaBoostBeta the boosting parameter, beta, for AdaBoost
301/// - UseRandomisedTrees choose at each node splitting a random set of variables
302/// - UseNvars use UseNvars variables in randomised trees
303/// - UsePoissonNvars use UseNvars not as a fixed number but as the mean of a Poisson distribution
304/// - SeparationType the separation criterion applied in the node splitting.
305/// Known:
306/// - GiniIndex
307/// - MisClassificationError
308/// - CrossEntropy
309/// - SDivSqrtSPlusB
310/// - MinNodeSize: minimum percentage of training events in a leaf node (leaf criteria, stop splitting)
311/// - nCuts: the number of steps in the optimisation of the cut for a node (if < 0, then
312/// step size is determined by the events)
313/// - UseFisherCuts: use multivariate splits using the Fisher criterion
314/// - UseYesNoLeaf decide if the classification is done simply by the node type, or the S/B
315/// (from the training) in the leaf node
316/// - NodePurityLimit the minimum purity to classify a node as a signal node (used in pruning and boosting to determine
317/// misclassification error rate)
318/// - PruneMethod The Pruning method.
319/// Known:
320/// - NoPruning // switch off pruning completely
321/// - ExpectedError
322/// - CostComplexity
323/// - PruneStrength a parameter to adjust the amount of pruning. Should be large enough such that overtraining is avoided.
324/// - PruningValFraction fraction of events to use for optimizing pruning (only if PruneStrength < 0, i.e. automatic pruning)
325/// - NegWeightTreatment
326/// - IgnoreNegWeightsInTraining Ignore negative weight events in the training.
327/// - DecreaseBoostWeight Boost ev. with neg. weight with 1/boostweight instead of boostweight
328/// - PairNegWeightsGlobal Pair ev. with neg. and pos. weights in training sample and "annihilate" them
329/// - MaxDepth maximum depth of the decision tree allowed before further splitting is stopped
330/// - SkipNormalization Skip normalization at initialization, to keep expectation value of BDT output
331/// according to the fraction of events
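///
/// For example, a BDT with a selection of these options could be booked as follows
/// (illustrative values only; `factory` and `dataloader` are assumed to be a TMVA::Factory
/// and TMVA::DataLoader created elsewhere in the user's code):
///
/// ~~~ {.cpp}
/// factory->BookMethod( dataloader, TMVA::Types::kBDT, "BDT",
///    "NTrees=850:MaxDepth=3:BoostType=AdaBoost:AdaBoostBeta=0.5:"
///    "UseBaggedBoost:BaggedSampleFraction=0.5:"
///    "SeparationType=GiniIndex:nCuts=20:MinNodeSize=2.5%" );
/// ~~~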
332
333void TMVA::MethodBDT::DeclareOptions()
334{
335 DeclareOptionRef(fNTrees, "NTrees", "Number of trees in the forest");
336 if (DoRegression()) {
337 DeclareOptionRef(fMaxDepth=50,"MaxDepth","Max depth of the decision tree allowed");
338 }else{
339 DeclareOptionRef(fMaxDepth=3,"MaxDepth","Max depth of the decision tree allowed");
340 }
341
342 TString tmp="5%"; if (DoRegression()) tmp="0.2%";
343 DeclareOptionRef(fMinNodeSizeS=tmp, "MinNodeSize", "Minimum percentage of training events required in a leaf node (default: Classification: 5%, Regression: 0.2%)");
344 // MinNodeSize: minimum percentage of training events in a leaf node (leaf criteria, stop splitting)
345 DeclareOptionRef(fNCuts, "nCuts", "Number of grid points in variable range used in finding optimal cut in node splitting");
346
347 DeclareOptionRef(fBoostType, "BoostType", "Boosting type for the trees in the forest (note: AdaCost is still experimental)");
348
349 AddPreDefVal(TString("AdaBoost"));
350 AddPreDefVal(TString("RealAdaBoost"));
351 AddPreDefVal(TString("AdaCost"));
352 AddPreDefVal(TString("Bagging"));
353 // AddPreDefVal(TString("RegBoost"));
354 AddPreDefVal(TString("AdaBoostR2"));
355 AddPreDefVal(TString("Grad"));
356 if (DoRegression()) {
357 fBoostType = "AdaBoostR2";
358 }else{
359 fBoostType = "AdaBoost";
360 }
361 DeclareOptionRef(fAdaBoostR2Loss="Quadratic", "AdaBoostR2Loss", "Type of Loss function in AdaBoostR2");
362 AddPreDefVal(TString("Linear"));
363 AddPreDefVal(TString("Quadratic"));
364 AddPreDefVal(TString("Exponential"));
365
366 DeclareOptionRef(fBaggedBoost=kFALSE, "UseBaggedBoost","Use only a random subsample of all events for growing the trees in each boost iteration.");
367 DeclareOptionRef(fShrinkage = 1.0, "Shrinkage", "Learning rate for BoostType=Grad algorithm");
368 DeclareOptionRef(fAdaBoostBeta=.5, "AdaBoostBeta", "Learning rate for AdaBoost algorithm");
369 DeclareOptionRef(fRandomisedTrees,"UseRandomisedTrees","Determine at each node splitting the cut variable only as the best out of a random subset of variables (like in RandomForests)");
370 DeclareOptionRef(fUseNvars,"UseNvars","Size of the subset of variables used with RandomisedTree option");
371 DeclareOptionRef(fUsePoissonNvars,"UsePoissonNvars", "Interpret \"UseNvars\" not as fixed number but as mean of a Poisson distribution in each split with RandomisedTree option");
372 DeclareOptionRef(fBaggedSampleFraction=.6,"BaggedSampleFraction","Relative size of bagged event sample to original size of the data sample (used whenever bagging is used, i.e. UseBaggedBoost or BoostType=Bagging)" );
373
374 DeclareOptionRef(fUseYesNoLeaf=kTRUE, "UseYesNoLeaf",
375 "Use Sig or Bkg categories, or the purity=S/(S+B) as classification of the leaf node -> Real-AdaBoost");
376 if (DoRegression()) {
377 fUseYesNoLeaf = kFALSE;
378 }
379
380 DeclareOptionRef(fNegWeightTreatment="InverseBoostNegWeights","NegWeightTreatment","How to treat events with negative weights in the BDT training (particularly the boosting): IgnoreInTraining; Boost with inverse boostweight; Pair events with negative and positive weights in training sample and *annihilate* them (experimental!)");
381 AddPreDefVal(TString("InverseBoostNegWeights"));
382 AddPreDefVal(TString("IgnoreNegWeightsInTraining"));
383 AddPreDefVal(TString("NoNegWeightsInTraining")); // well, let's be nice to users and keep at least this old name anyway ..
384 AddPreDefVal(TString("PairNegWeightsGlobal"));
385 AddPreDefVal(TString("Pray"));
386
387
388
389 DeclareOptionRef(fCss=1., "Css", "AdaCost: cost of true signal selected signal");
390 DeclareOptionRef(fCts_sb=1.,"Cts_sb","AdaCost: cost of true signal selected bkg");
391 DeclareOptionRef(fCtb_ss=1.,"Ctb_ss","AdaCost: cost of true bkg selected signal");
392 DeclareOptionRef(fCbb=1., "Cbb", "AdaCost: cost of true bkg selected bkg ");
393
394 DeclareOptionRef(fNodePurityLimit=0.5, "NodePurityLimit", "In boosting/pruning, nodes with purity > NodePurityLimit are signal; background otherwise.");
395
396
397 DeclareOptionRef(fSepTypeS, "SeparationType", "Separation criterion for node splitting");
398 AddPreDefVal(TString("CrossEntropy"));
399 AddPreDefVal(TString("GiniIndex"));
400 AddPreDefVal(TString("GiniIndexWithLaplace"));
401 AddPreDefVal(TString("MisClassificationError"));
402 AddPreDefVal(TString("SDivSqrtSPlusB"));
403 AddPreDefVal(TString("RegressionVariance"));
404 if (DoRegression()) {
405 fSepTypeS = "RegressionVariance";
406 }else{
407 fSepTypeS = "GiniIndex";
408 }
409
410 DeclareOptionRef(fRegressionLossFunctionBDTGS = "Huber", "RegressionLossFunctionBDTG", "Loss function for BDTG regression.");
411 AddPreDefVal(TString("Huber"));
412 AddPreDefVal(TString("AbsoluteDeviation"));
413 AddPreDefVal(TString("LeastSquares"));
414
415 DeclareOptionRef(fHuberQuantile = 0.7, "HuberQuantile", "In the Huber loss function this is the quantile that separates the core from the tails in the residuals distribution.");
416
417 DeclareOptionRef(fDoBoostMonitor=kFALSE,"DoBoostMonitor","Create control plot with ROC integral vs tree number");
418
419 DeclareOptionRef(fUseFisherCuts=kFALSE, "UseFisherCuts", "Use multivariate splits using the Fisher criterion");
420 DeclareOptionRef(fMinLinCorrForFisher=.8,"MinLinCorrForFisher", "The minimum linear correlation between two variables demanded for use in Fisher criterion in node splitting");
421 DeclareOptionRef(fUseExclusiveVars=kFALSE,"UseExclusiveVars","Variables already used in the Fisher criterion are no longer analysed individually for node splitting");
422
423
424 DeclareOptionRef(fDoPreselection=kFALSE,"DoPreselection","Apply automatic pre-selection for 100% efficient signal (bkg) cuts prior to training");
425
426
427 DeclareOptionRef(fSigToBkgFraction=1,"SigToBkgFraction","Sig to Bkg ratio used in Training (similar to NodePurityLimit, which cannot be used in real AdaBoost)");
428
429 DeclareOptionRef(fPruneMethodS, "PruneMethod", "Method used for pruning (removal) of statistically insignificant branches (note: for BDTs use small trees, e.g. MaxDepth=3, and NoPruning)");
430 AddPreDefVal(TString("NoPruning"));
431 AddPreDefVal(TString("ExpectedError"));
432 AddPreDefVal(TString("CostComplexity"));
433
434 DeclareOptionRef(fPruneStrength, "PruneStrength", "Pruning strength");
435
436 DeclareOptionRef(fFValidationEvents=0.5, "PruningValFraction", "Fraction of events to use for optimizing automatic pruning.");
437
438 DeclareOptionRef(fSkipNormalization=kFALSE, "SkipNormalization", "Skip normalization at initialization, to keep expectation value of BDT output according to the fraction of events");
439
440 // deprecated options, still kept for the moment:
441 DeclareOptionRef(fMinNodeEvents=0, "nEventsMin", "deprecated: Use MinNodeSize (in % of training events) instead");
442
443 DeclareOptionRef(fBaggedGradBoost=kFALSE, "UseBaggedGrad","deprecated: Use *UseBaggedBoost* instead: Use only a random subsample of all events for growing the trees in each iteration.");
444 DeclareOptionRef(fBaggedSampleFraction, "GradBaggingFraction","deprecated: Use *BaggedSampleFraction* instead: Defines the fraction of events to be used in each iteration, e.g. when UseBaggedGrad=kTRUE. ");
445 DeclareOptionRef(fUseNTrainEvents,"UseNTrainEvents","deprecated: Use *BaggedSampleFraction* instead: Number of randomly picked training events used in randomised (and bagged) trees");
446 DeclareOptionRef(fNNodesMax,"NNodesMax","deprecated: Use MaxDepth instead to limit the tree size" );
447
448
449}
450
451////////////////////////////////////////////////////////////////////////////////
452/// Options that are used ONLY for the READER to ensure backward compatibility.
453
455void TMVA::MethodBDT::DeclareCompatibilityOptions() {
456 MethodBase::DeclareCompatibilityOptions();
457
458 DeclareOptionRef(fHistoricBool=kTRUE, "UseWeightedTrees",
459 "Use weighted trees or simple average in classification from the forest");
460 DeclareOptionRef(fHistoricBool=kFALSE, "PruneBeforeBoost", "Flag to prune the tree before applying boosting algorithm");
461 DeclareOptionRef(fHistoricBool=kFALSE,"RenormByClass","Individually re-normalize each event class to the original size after boosting");
462
463 AddPreDefVal(TString("NegWeightTreatment"),TString("IgnoreNegWeights"));
464
465}
466
467////////////////////////////////////////////////////////////////////////////////
468/// The option string is decoded, for available options see "DeclareOptions".
469
470void TMVA::MethodBDT::ProcessOptions()
471{
472 fSepTypeS.ToLower();
473 if (fSepTypeS == "misclassificationerror") fSepType = new MisClassificationError();
474 else if (fSepTypeS == "giniindex") fSepType = new GiniIndex();
475 else if (fSepTypeS == "giniindexwithlaplace") fSepType = new GiniIndexWithLaplace();
476 else if (fSepTypeS == "crossentropy") fSepType = new CrossEntropy();
477 else if (fSepTypeS == "sdivsqrtsplusb") fSepType = new SdivSqrtSplusB();
478 else if (fSepTypeS == "regressionvariance") fSepType = NULL;
479 else {
480 Log() << kINFO << GetOptions() << Endl;
481 Log() << kFATAL << "<ProcessOptions> unknown Separation Index option " << fSepTypeS << " called" << Endl;
482 }
483
484 if(!(fHuberQuantile >= 0.0 && fHuberQuantile <= 1.0)){
485 Log() << kINFO << GetOptions() << Endl;
486 Log() << kFATAL << "<ProcessOptions> Huber Quantile must be in range [0,1]. Value given, " << fHuberQuantile << ", does not satisfy this criterion" << Endl;
487 }
488
489
490 fRegressionLossFunctionBDTGS.ToLower();
491 if (fRegressionLossFunctionBDTGS == "huber") fRegressionLossFunctionBDTG = new HuberLossFunctionBDT(fHuberQuantile);
492 else if (fRegressionLossFunctionBDTGS == "leastsquares") fRegressionLossFunctionBDTG = new LeastSquaresLossFunctionBDT();
493 else if (fRegressionLossFunctionBDTGS == "absolutedeviation") fRegressionLossFunctionBDTG = new AbsoluteDeviationLossFunctionBDT();
494 else {
495 Log() << kINFO << GetOptions() << Endl;
496 Log() << kFATAL << "<ProcessOptions> unknown Regression Loss Function BDT option " << fRegressionLossFunctionBDTGS << " called" << Endl;
497 }
498
499 fPruneMethodS.ToLower();
500 if (fPruneMethodS == "expectederror") fPruneMethod = DecisionTree::kExpectedErrorPruning;
501 else if (fPruneMethodS == "costcomplexity") fPruneMethod = DecisionTree::kCostComplexityPruning;
502 else if (fPruneMethodS == "nopruning") fPruneMethod = DecisionTree::kNoPruning;
503 else {
504 Log() << kINFO << GetOptions() << Endl;
505 Log() << kFATAL << "<ProcessOptions> unknown PruneMethod " << fPruneMethodS << " option called" << Endl;
506 }
507 if (fPruneStrength < 0 && (fPruneMethod != DecisionTree::kNoPruning) && fBoostType!="Grad") fAutomatic = kTRUE;
508 else fAutomatic = kFALSE;
509 if (fAutomatic && fPruneMethod==DecisionTree::kExpectedErrorPruning){
510 Log() << kFATAL
511 << "Sorry automatic pruning strength determination is not implemented yet for ExpectedErrorPruning" << Endl;
512 }
513
514
515 if (fMinNodeEvents > 0){
516 fMinNodeSize = Double_t(fMinNodeEvents*100.) / Data()->GetNTrainingEvents();
517 Log() << kWARNING << "You have explicitly set ** nEventsMin = " << fMinNodeEvents<<" ** the min absolute number \n"
518 << "of events in a leaf node. This is DEPRECATED, please use the option \n"
519 << "*MinNodeSize* giving the relative number as percentage of training \n"
520 << "events instead. \n"
521 << "nEventsMin="<<fMinNodeEvents<< "--> MinNodeSize="<<fMinNodeSize<<"%"
522 << Endl;
523 Log() << kWARNING << "Note also that explicitly setting *nEventsMin* OVERWRITES the recommended \n"
524 << " *MinNodeSize* = " << fMinNodeSizeS << " option !!" << Endl ;
525 fMinNodeSizeS = TString::Format("%3.2f",fMinNodeSize);
526
527 }else{
528 SetMinNodeSize(fMinNodeSizeS);
529 }
530
531
532 fAdaBoostR2Loss.ToLower();
533
534 if (fBoostType=="Grad") {
535 fPruneMethod = DecisionTree::kNoPruning;
536 if (fNegWeightTreatment=="InverseBoostNegWeights"){
537 Log() << kINFO << "the option NegWeightTreatment=InverseBoostNegWeights does"
538 << " not exist for BoostType=Grad" << Endl;
539 Log() << kINFO << "--> change to new default NegWeightTreatment=Pray" << Endl;
540 Log() << kDEBUG << "i.e. simply keep them as they are, which should work fine for Grad Boost" << Endl;
541 fNegWeightTreatment="Pray";
542 fNoNegWeightsInTraining=kFALSE;
543 }
544 } else if (fBoostType=="RealAdaBoost"){
545 fBoostType = "AdaBoost";
546 fUseYesNoLeaf = kFALSE;
547 } else if (fBoostType=="AdaCost"){
548 fUseYesNoLeaf = kFALSE;
549 }
550
551 if (fFValidationEvents < 0.0) fFValidationEvents = 0.0;
552 if (fAutomatic && fFValidationEvents > 0.5) {
553 Log() << kWARNING << "You have chosen to use more than half of your training sample "
554 << "to optimize the automatic pruning algorithm. This is probably wasteful "
555 << "and your overall results will be degraded. Are you sure you want this?"
556 << Endl;
557 }
558
559
560 if (this->Data()->HasNegativeEventWeights()){
561 Log() << kINFO << " You are using a Monte Carlo that also has negative weights. "
562 << "That should in principle be fine as long as on average you end up with "
563 << "something positive. For this you have to make sure that the minimal number "
564 << "of (un-weighted) events demanded for a tree node (currently you use: MinNodeSize="
565 << fMinNodeSizeS << " ("<< fMinNodeSize << "%)"
566 <<", (or the deprecated equivalent nEventsMin) you can set this via the "
567 <<"BDT option string when booking the "
568 << "classifier) is large enough to allow for reasonable averaging!!! "
569 << " If this does not help.. maybe you want to try the option: IgnoreNegWeightsInTraining "
570 << "which ignores events with negative weight in the training. " << Endl
571 << Endl << "Note: You'll get a WARNING message during the training if that should ever happen" << Endl;
572 }
573
574 if (DoRegression()) {
575 if (fUseYesNoLeaf && !IsConstructedFromWeightFile()){
576 Log() << kWARNING << "Regression Trees do not work with fUseYesNoLeaf=TRUE --> I will set it to FALSE" << Endl;
577 fUseYesNoLeaf = kFALSE;
578 }
579
580 if (fSepType != NULL){
581 Log() << kWARNING << "Regression Trees do not work with Separation type other than <RegressionVariance> --> I will use it instead" << Endl;
582 fSepType = NULL;
583 }
584 if (fUseFisherCuts){
585 Log() << kWARNING << "Sorry, UseFisherCuts is not available for regression analysis, I will ignore it!" << Endl;
586 fUseFisherCuts = kFALSE;
587 }
588 if (fNCuts < 0) {
589 Log() << kWARNING << "Sorry, the option of nCuts<0 using a more elaborate node splitting algorithm " << Endl;
590 Log() << kWARNING << "is not implemented for regression analysis ! " << Endl;
591 Log() << kWARNING << "--> I switch to the default nCuts = 20 and use standard node splitting"<<Endl;
592 fNCuts=20;
593 }
594 }
595 if (fRandomisedTrees){
596 Log() << kINFO << " Randomised trees use no pruning" << Endl;
597 fPruneMethod = DecisionTree::kNoPruning;
598 // fBoostType = "Bagging";
599 }
600
601 if (fUseFisherCuts) {
602 Log() << kWARNING << "When using the option UseFisherCuts, the other option nCuts<0 (i.e. using" << Endl;
603 Log() << " a more elaborate node splitting algorithm) is not implemented. " << Endl;
604 //I will switch o " << Endl;
605 //Log() << "--> I switch do default nCuts = 20 and use standard node splitting WITH possible Fisher criteria"<<Endl;
606 fNCuts=20;
607 }
608
609 if (fNTrees==0){
610 Log() << kERROR << " Zero Decision Trees demanded... that does not work !! "
611 << " I set it to 1 .. just so that the program does not crash"
612 << Endl;
613 fNTrees = 1;
614 }
615
616 fNegWeightTreatment.ToLower();
617 if (fNegWeightTreatment == "ignorenegweightsintraining") fNoNegWeightsInTraining = kTRUE;
618 else if (fNegWeightTreatment == "nonegweightsintraining") fNoNegWeightsInTraining = kTRUE;
619 else if (fNegWeightTreatment == "inverseboostnegweights") fInverseBoostNegWeights = kTRUE;
620 else if (fNegWeightTreatment == "pairnegweightsglobal") fPairNegWeightsGlobal = kTRUE;
621 else if (fNegWeightTreatment == "pray") Log() << kDEBUG << "Yes, good luck with praying " << Endl;
622 else {
623 Log() << kINFO << GetOptions() << Endl;
624 Log() << kFATAL << "<ProcessOptions> unknown option for treating negative event weights during training " << fNegWeightTreatment << " requested" << Endl;
625 }
626
627 if (fNegWeightTreatment == "pairnegweightsglobal")
628 Log() << kWARNING << " you specified the option NegWeightTreatment=PairNegWeightsGlobal : This option is still considered EXPERIMENTAL !! " << Endl;
629
630
631 // dealing with deprecated options !
632 if (fNNodesMax>0) {
633 UInt_t tmp=1; // depth=0 == 1 node
634 fMaxDepth=0;
635 while (tmp < fNNodesMax){
636 tmp+=2*tmp;
637 fMaxDepth++;
638 }
639 Log() << kWARNING << "You have specified a deprecated option *NNodesMax="<<fNNodesMax
640 << "* \n this has been translated to MaxDepth="<<fMaxDepth<<Endl;
641 }
642
643
644 if (fUseNTrainEvents>0){
645 fBaggedSampleFraction = (Double_t) fUseNTrainEvents/Data()->GetNTrainingEvents();
646 Log() << kWARNING << "You have specified a deprecated option *UseNTrainEvents="<<fUseNTrainEvents
647 << "* \n this has been translated to BaggedSampleFraction="<<fBaggedSampleFraction<<"(%)"<<Endl;
648 }
649
650 if (fBoostType=="Bagging") fBaggedBoost = kTRUE;
651 if (fBaggedGradBoost){
652 fBaggedBoost = kTRUE;
653 Log() << kWARNING << "You have specified a deprecated option *UseBaggedGrad* --> please use *UseBaggedBoost* instead" << Endl;
654 }
655
656}
657
658////////////////////////////////////////////////////////////////////////////////
659
660void TMVA::MethodBDT::SetMinNodeSize( Double_t sizeInPercent ){
661 if (sizeInPercent > 0 && sizeInPercent < 50){
662 fMinNodeSize=sizeInPercent;
663
664 } else {
665 Log() << kFATAL << "you have demanded a minimal node size of "
666 << sizeInPercent << "% of the training events.. \n"
667 << " that somehow does not make sense "<<Endl;
668 }
669
670}
671
672////////////////////////////////////////////////////////////////////////////////
673
674void TMVA::MethodBDT::SetMinNodeSize( TString sizeInPercent ){
675 sizeInPercent.ReplaceAll("%","");
676 sizeInPercent.ReplaceAll(" ","");
677 if (sizeInPercent.IsFloat()) SetMinNodeSize(sizeInPercent.Atof());
678 else {
679 Log() << kFATAL << "I had problems reading the option MinNodeEvents, which "
680 << "after removing a possible % sign now reads " << sizeInPercent << Endl;
681 }
682}
683
684////////////////////////////////////////////////////////////////////////////////
685/// Common initialisation with defaults for the BDT-Method.
686
687void TMVA::MethodBDT::Init( void )
688{
689 fNTrees = 800;
690 if (fAnalysisType == Types::kClassification || fAnalysisType == Types::kMulticlass ) {
691 fMaxDepth = 3;
692 fBoostType = "AdaBoost";
693 if(DataInfo().GetNClasses()!=0) //workaround for multiclass application
694 fMinNodeSize = 5.;
695 }else {
696 fMaxDepth = 50;
697 fBoostType = "AdaBoostR2";
698 fAdaBoostR2Loss = "Quadratic";
699 if(DataInfo().GetNClasses()!=0) //workaround for multiclass application
700 fMinNodeSize = .2;
701 }
702
703
704 fNCuts = 20;
705 fPruneMethodS = "NoPruning";
706 fPruneMethod = DecisionTree::kNoPruning;
707 fPruneStrength = 0;
708 fAutomatic = kFALSE;
709 fFValidationEvents = 0.5;
710 fRandomisedTrees = kFALSE;
711 // fUseNvars = (GetNvar()>12) ? UInt_t(GetNvar()/8) : TMath::Max(UInt_t(2),UInt_t(GetNvar()/3));
712 fUseNvars = UInt_t(TMath::Sqrt(GetNvar())+0.6);
713 fUsePoissonNvars = kTRUE;
714 fShrinkage = 1.0;
715// fSumOfWeights = 0.0;
716
717 // reference cut value to distinguish signal-like from background-like events
718 SetSignalReferenceCut( 0 );
719}
720
721
722////////////////////////////////////////////////////////////////////////////////
723/// Reset the method, as if it had just been instantiated (forget all training etc.).
724
725void TMVA::MethodBDT::Reset( void )
726{
727 // I keep the BDT EventSample and its Validation sample (eventually they should all
728 // disappear and just the DataSet samples should be used) ..
729
730 // remove all the trees
731 for (UInt_t i=0; i<fForest.size(); i++) delete fForest[i];
732 fForest.clear();
733
734 fBoostWeights.clear();
735 if (fMonitorNtuple) { fMonitorNtuple->Delete(); fMonitorNtuple=NULL; }
736 fVariableImportance.clear();
737 fResiduals.clear();
738 fLossFunctionEventInfo.clear();
739 // now done in "InitEventSample" which is called in "Train"
740 // reset all previously stored/accumulated BOOST weights in the event sample
741 //for (UInt_t iev=0; iev<fEventSample.size(); iev++) fEventSample[iev]->SetBoostWeight(1.);
742 if (Data()) Data()->DeleteResults(GetMethodName(), Types::kTraining, GetAnalysisType());
743 Log() << kDEBUG << " successfully(?) reset the method " << Endl;
744}
745
746
747////////////////////////////////////////////////////////////////////////////////
748/// Destructor.
749///
750/// - Note: fEventSample and ValidationSample are already deleted at the end of TRAIN,
751/// when they are not used anymore.
752
753TMVA::MethodBDT::~MethodBDT( void )
754{
755 for (UInt_t i=0; i<fForest.size(); i++) delete fForest[i];
756}
757
758////////////////////////////////////////////////////////////////////////////////
759/// Initialize the event sample (i.e. reset the boost-weights... etc).
760
761void TMVA::MethodBDT::InitEventSample( void )
762{
763 if (!HasTrainingTree()) Log() << kFATAL << "<Init> Data().TrainingTree() is zero pointer" << Endl;
764
765 if (fEventSample.size() > 0) { // do not re-initialise the event sample, just set all boostweights to 1. as if it were untouched
766 // reset all previously stored/accumulated BOOST weights in the event sample
767 for (UInt_t iev=0; iev<fEventSample.size(); iev++) fEventSample[iev]->SetBoostWeight(1.);
768 } else {
769 Data()->SetCurrentType(Types::kTraining);
770 UInt_t nevents = Data()->GetNTrainingEvents();
771
772 std::vector<const TMVA::Event*> tmpEventSample;
773 for (Long64_t ievt=0; ievt<nevents; ievt++) {
774 // const Event *event = new Event(*(GetEvent(ievt)));
775 Event* event = new Event( *GetTrainingEvent(ievt) );
776 tmpEventSample.push_back(event);
777 }
778
779 if (!DoRegression()) DeterminePreselectionCuts(tmpEventSample);
780 else fDoPreselection = kFALSE; // just to make sure...
781
782 for (UInt_t i=0; i<tmpEventSample.size(); i++) delete tmpEventSample[i];
783
784
785 Bool_t firstNegWeight=kTRUE;
786 Bool_t firstZeroWeight=kTRUE;
787 for (Long64_t ievt=0; ievt<nevents; ievt++) {
788 // const Event *event = new Event(*(GetEvent(ievt)));
789 // const Event* event = new Event( *GetTrainingEvent(ievt) );
790 Event* event = new Event( *GetTrainingEvent(ievt) );
791 if (fDoPreselection){
792 if (TMath::Abs(ApplyPreselectionCuts(event)) > 0.05) {
793 delete event;
794 continue;
795 }
796 }
797
798 if (event->GetWeight() < 0 && (IgnoreEventsWithNegWeightsInTraining() || fNoNegWeightsInTraining)){
799 if (firstNegWeight) {
800 Log() << kWARNING << " Note, you have events with negative event weight in the sample, but you've chosen to ignore them" << Endl;
801 firstNegWeight=kFALSE;
802 }
803 delete event;
804 }else if (event->GetWeight()==0){
805 if (firstZeroWeight) {
806 firstZeroWeight = kFALSE;
807 Log() << "Events with weight == 0 are going to be simply ignored " << Endl;
808 }
809 delete event;
810 }else{
811 if (event->GetWeight() < 0) {
812 fTrainWithNegWeights=kTRUE;
813 if (firstNegWeight){
814 firstNegWeight = kFALSE;
815 if (fPairNegWeightsGlobal){
816 Log() << kWARNING << "Events with negative event weights are found and "
817 << " will be removed prior to the actual BDT training by global "
818 << " paring (and subsequent annihilation) with positiv weight events"
819 << Endl;
820 }else{
821 Log() << kWARNING << "Events with negative event weights are USED during "
822 << "the BDT training. This might cause problems with small node sizes "
823 << "or with the boosting. Please remove negative events from training "
824 << "using the option *IgnoreEventsWithNegWeightsInTraining* in case you "
825 << "observe problems with the boosting"
826 << Endl;
827 }
828 }
829 }
830 // if fAutomatic == true you need a validation sample to optimize pruning
831 if (fAutomatic) {
832 Double_t modulo = 1.0/(fFValidationEvents);
833 Int_t imodulo = static_cast<Int_t>( fmod(modulo,1.0) > 0.5 ? ceil(modulo) : floor(modulo) );
834 if (ievt % imodulo == 0) fValidationSample.push_back( event );
835 else fEventSample.push_back( event );
836 }
837 else {
838 fEventSample.push_back(event);
839 }
840 }
841 }
842
843 if (fAutomatic) {
844 Log() << kINFO << "<InitEventSample> Internally I use " << fEventSample.size()
845 << " for Training and " << fValidationSample.size()
846 << " for Pruning Validation (" << ((Float_t)fValidationSample.size())/((Float_t)fEventSample.size()+fValidationSample.size())*100.0
847 << "% of training used for validation)" << Endl;
848 }
849
850 // some pre-processing for events with negative weights
851 if (fPairNegWeightsGlobal) PreProcessNegativeEventWeights();
852 }
853
854 if (DoRegression()) {
855 // Regression, no reweighting to do
856 } else if (DoMulticlass()) {
857 // Multiclass, only gradboost is supported. No reweighting.
858 } else if (!fSkipNormalization) {
859 // Binary classification.
860 Log() << kDEBUG << "\t<InitEventSample> For classification trees, "<< Endl;
861 Log() << kDEBUG << " \tthe effective number of backgrounds is scaled to match "<<Endl;
862 Log() << kDEBUG << " \tthe signal. Otherwise the first boosting step would do 'just that'!"<<Endl;
863 // it does not make sense in decision trees to start with unequal number of signal/background
864 // events (weights) .. hence normalize them now (happens otherwise in first 'boosting step'
865 // anyway..
866 // Also make sure, that the sum_of_weights == sample.size() .. as this is assumed in
867 // the DecisionTree to derive a sensible number for "fMinSize" (min.#events in node)
868 // that currently is an OR between "weighted" and "unweighted number"
869 // I want:
870 // nS + nB = n
871 // a*SW + b*BW = n
872 // (a*SW)/(b*BW) = fSigToBkgFraction
873 //
874 // ==> b = n/((1+f)BW) and a = (nf/(1+f))/SW
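      // As a numeric cross-check (illustrative numbers only): for n = 1000 events,
      // f = fSigToBkgFraction = 1, SW = 400 and BW = 100 one obtains
      // a = (1000*1/2)/400 = 1.25 and b = 1000/(2*100) = 5, so that
      // a*SW + b*BW = 500 + 500 = 1000 = n and (a*SW)/(b*BW) = 1 = f, as required.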
875
876 Double_t nevents = fEventSample.size();
877 Double_t sumSigW=0, sumBkgW=0;
878 Int_t sumSig=0, sumBkg=0;
879 for (UInt_t ievt=0; ievt<fEventSample.size(); ievt++) {
880 if ((DataInfo().IsSignal(fEventSample[ievt])) ) {
881 sumSigW += fEventSample[ievt]->GetWeight();
882 sumSig++;
883 } else {
884 sumBkgW += fEventSample[ievt]->GetWeight();
885 sumBkg++;
886 }
887 }
888 if (sumSigW && sumBkgW){
889 Double_t normSig = nevents/((1+fSigToBkgFraction)*sumSigW)*fSigToBkgFraction;
890 Double_t normBkg = nevents/((1+fSigToBkgFraction)*sumBkgW);
891 Log() << kDEBUG << "\tre-normalise events such that Sig and Bkg have respective sum of weights = "
892 << fSigToBkgFraction << Endl;
893 Log() << kDEBUG << " \tsig->sig*"<<normSig << "ev. bkg->bkg*"<<normBkg << "ev." <<Endl;
894 Log() << kHEADER << "#events: (reweighted) sig: "<< sumSigW*normSig << " bkg: " << sumBkgW*normBkg << Endl;
895 Log() << kINFO << "#events: (unweighted) sig: "<< sumSig << " bkg: " << sumBkg << Endl;
896 for (Long64_t ievt=0; ievt<nevents; ievt++) {
897 if ((DataInfo().IsSignal(fEventSample[ievt])) ) fEventSample[ievt]->SetBoostWeight(normSig);
898 else fEventSample[ievt]->SetBoostWeight(normBkg);
899 }
900 }else{
901 Log() << kINFO << "--> could not determine scaling factors as either there are " << Endl;
902 Log() << kINFO << " no signal events (sumSigW="<<sumSigW<<") or no bkg ev. (sumBkgW="<<sumBkgW<<")"<<Endl;
903 }
904
905 }
906
907 fTrainSample = &fEventSample;
908 if (fBaggedBoost){
909 GetBaggedSubSample(fEventSample);
910 fTrainSample = &fSubSample;
911 }
912
913 //just for debug purposes..
914 /*
915 sumSigW=0;
916 sumBkgW=0;
917 for (UInt_t ievt=0; ievt<fEventSample.size(); ievt++) {
918 if ((DataInfo().IsSignal(fEventSample[ievt])) ) sumSigW += fEventSample[ievt]->GetWeight();
919 else sumBkgW += fEventSample[ievt]->GetWeight();
920 }
921 Log() << kWARNING << "sigSumW="<<sumSigW<<"bkgSumW="<<sumBkgW<< Endl;
922 */
923}
924
925////////////////////////////////////////////////////////////////////////////////
926/// O.k. you know there are events with negative event weights. This routine will remove
927/// them by pairing them with the closest event(s) of the same event class with positive
928/// weights.
929/// A first attempt is "brute force": I don't try to be clever using search trees etc.,
930/// just quick and dirty to see if the result is any good.
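///
/// "Closest" is meant here in the sense of the covariance-weighted (Mahalanobis-like)
/// distance computed in the loop below,
/// \f[ d^2 = \sum_{i,j} (x_i - y_i) \, C^{-1}_{ij} \, (x_j - y_j) , \f]
/// where \f$C\f$ is the covariance matrix of the input variables for the corresponding event class.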
931
932void TMVA::MethodBDT::PreProcessNegativeEventWeights(){
933 Double_t totalNegWeights = 0;
934 Double_t totalPosWeights = 0;
935 Double_t totalWeights = 0;
936 std::vector<const Event*> negEvents;
937 for (UInt_t iev = 0; iev < fEventSample.size(); iev++){
938 if (fEventSample[iev]->GetWeight() < 0) {
939 totalNegWeights += fEventSample[iev]->GetWeight();
940 negEvents.push_back(fEventSample[iev]);
941 } else {
942 totalPosWeights += fEventSample[iev]->GetWeight();
943 }
944 totalWeights += fEventSample[iev]->GetWeight();
945 }
946 if (totalNegWeights == 0 ) {
947 Log() << kINFO << "no negative event weights found .. no preprocessing necessary" << Endl;
948 return;
949 } else {
950 Log() << kINFO << "found a total of " << totalNegWeights << " of negative event weights which I am going to try to pair with positive events to annihilate them" << Endl;
951 Log() << kINFO << "found a total of " << totalPosWeights << " of events with positive weights" << Endl;
952 Log() << kINFO << "--> total sum of weights = " << totalWeights << " = " << totalNegWeights+totalPosWeights << Endl;
953 }
954
955 std::vector<TMatrixDSym*>* cov = gTools().CalcCovarianceMatrices( fEventSample, 2);
956
957 TMatrixDSym *invCov;
958
959 for (Int_t i=0; i<2; i++){
960 invCov = ((*cov)[i]);
961 if ( TMath::Abs(invCov->Determinant()) < 10E-24 ) {
962 std::cout << "<MethodBDT::PreProcessNeg...> matrix is almost singular with determinant="
963 << TMath::Abs(invCov->Determinant())
964 << " did you use the variables that are linear combinations or highly correlated?"
965 << std::endl;
966 }
967 if ( TMath::Abs(invCov->Determinant()) < 10E-120 ) {
968 std::cout << "<MethodBDT::PreProcessNeg...> matrix is singular with determinant="
969 << TMath::Abs(invCov->Determinant())
970 << " did you use the variables that are linear combinations?"
971 << std::endl;
972 }
973
974 invCov->Invert();
975 }
976
977
978
979 Log() << kINFO << "Found a total of " << totalNegWeights << " in negative weights out of " << fEventSample.size() << " training events " << Endl;
980 Timer timer(negEvents.size(),"Negative Event paired");
981 for (UInt_t nev = 0; nev < negEvents.size(); nev++){
982 timer.DrawProgressBar( nev );
983 Double_t weight = negEvents[nev]->GetWeight();
984 UInt_t iClassID = negEvents[nev]->GetClass();
985 invCov = ((*cov)[iClassID]);
986 while (weight < 0){
987 // find closest event with positive event weight and "pair" it with the negative event
988 // (add their weight) until there is no negative weight anymore
989 Int_t iMin=-1;
990 Double_t dist, minDist=10E270;
991 for (UInt_t iev = 0; iev < fEventSample.size(); iev++){
992 if (iClassID==fEventSample[iev]->GetClass() && fEventSample[iev]->GetWeight() > 0){
993 dist=0;
994 for (UInt_t ivar=0; ivar < GetNvar(); ivar++){
995 for (UInt_t jvar=0; jvar<GetNvar(); jvar++){
996 dist += (negEvents[nev]->GetValue(ivar)-fEventSample[iev]->GetValue(ivar))*
997 (*invCov)[ivar][jvar]*
998 (negEvents[nev]->GetValue(jvar)-fEventSample[iev]->GetValue(jvar));
999 }
1000 }
1001 if (dist < minDist) { iMin=iev; minDist=dist;}
1002 }
1003 }
1004
1005 if (iMin > -1) {
1006 // std::cout << "Happily pairing .. weight before : " << negEvents[nev]->GetWeight() << " and " << fEventSample[iMin]->GetWeight();
1007 Double_t newWeight = (negEvents[nev]->GetWeight() + fEventSample[iMin]->GetWeight());
1008 if (newWeight > 0){
1009 negEvents[nev]->SetBoostWeight( 0 );
1010 fEventSample[iMin]->SetBoostWeight( newWeight/fEventSample[iMin]->GetOriginalWeight() ); // note the weight*boostweight should be "newWeight"
1011 } else {
1012 negEvents[nev]->SetBoostWeight( newWeight/negEvents[nev]->GetOriginalWeight() ); // note the weight*boostweight should be "newWeight"
1013 fEventSample[iMin]->SetBoostWeight( 0 );
1014 }
1015 // std::cout << " and afterwards " << negEvents[nev]->GetWeight() << " and the paired " << fEventSample[iMin]->GetWeight() << " dist="<<minDist<< std::endl;
1016 } else Log() << kFATAL << "preprocessing didn't find event to pair with the negative weight ... probably a bug" << Endl;
1017 weight = negEvents[nev]->GetWeight();
1018 }
1019 }
1020 Log() << kINFO << "<Negative Event Pairing> took: " << timer.GetElapsedTime()
1021 << " " << Endl;
1022
1023 // just check.. now there should be no negative event weight left anymore
1024 totalNegWeights = 0;
1025 totalPosWeights = 0;
1026 totalWeights = 0;
1027 Double_t sigWeight=0;
1028 Double_t bkgWeight=0;
1029 Int_t nSig=0;
1030 Int_t nBkg=0;
1031
1032 std::vector<const Event*> newEventSample;
1033
1034 for (UInt_t iev = 0; iev < fEventSample.size(); iev++){
1035 if (fEventSample[iev]->GetWeight() < 0) {
1036 totalNegWeights += fEventSample[iev]->GetWeight();
1037 totalWeights += fEventSample[iev]->GetWeight();
1038 } else {
1039 totalPosWeights += fEventSample[iev]->GetWeight();
1040 totalWeights += fEventSample[iev]->GetWeight();
1041 }
1042 if (fEventSample[iev]->GetWeight() > 0) {
1043 newEventSample.push_back(new Event(*fEventSample[iev]));
1044 if (fEventSample[iev]->GetClass() == fSignalClass){
1045 sigWeight += fEventSample[iev]->GetWeight();
1046 nSig+=1;
1047 }else{
1048 bkgWeight += fEventSample[iev]->GetWeight();
1049 nBkg+=1;
1050 }
1051 }
1052 }
1053 if (totalNegWeights < 0) Log() << kFATAL << " compensation of negative event weights with positive ones did not work " << totalNegWeights << Endl;
1054
1055 for (UInt_t i=0; i<fEventSample.size(); i++) delete fEventSample[i];
1056 fEventSample = newEventSample;
1057
1058 Log() << kINFO << " after PreProcessing, the Event sample is left with " << fEventSample.size() << " events (unweighted), all with positive weights, adding up to " << totalWeights << Endl;
1059 Log() << kINFO << " nSig="<<nSig << " sigWeight="<<sigWeight << " nBkg="<<nBkg << " bkgWeight="<<bkgWeight << Endl;
1060
1061
1062}
1063
1064////////////////////////////////////////////////////////////////////////////////
1065/// Call the Optimizer with the set of parameters and ranges that
1066/// are meant to be tuned.
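///
/// A typical call from user code might look like the following sketch (here `bdt` is assumed
/// to be a pointer to an already booked MethodBDT, and the figure-of-merit and fitter names
/// are example values):
///
/// ~~~ {.cpp}
/// std::map<TString, Double_t> tuned = bdt->OptimizeTuningParameters("ROCIntegral", "FitGA");
/// bdt->SetTuneParameters(tuned);   // apply the optimised values before training
/// ~~~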
1067
1068std::map<TString,Double_t> TMVA::MethodBDT::OptimizeTuningParameters(TString fomType, TString fitType)
1069{
1070 // fill all the tuning parameters that should be optimized into a map:
1071 std::map<TString,TMVA::Interval*> tuneParameters;
1072 std::map<TString,Double_t> tunedParameters;
1073
1074 // note: the 3rd parameter in the interval is the "number of bins", NOT the stepsize !!
1075 // the actual VALUES (at least for the scan, presumably also in the GA) are always
1076 // read from the middle of the bins. Hence the choice of Intervals, e.g. for
1077 // MaxDepth, is made such that one obtains nice integer values!!!
1078
1079 // find some reasonable ranges for the optimisation of MinNodeEvents:
1080
1081 tuneParameters.insert(std::pair<TString,Interval*>("NTrees", new Interval(10,1000,5))); // stepsize 50
1082 tuneParameters.insert(std::pair<TString,Interval*>("MaxDepth", new Interval(2,4,3))); // stepsize 1
1083 tuneParameters.insert(std::pair<TString,Interval*>("MinNodeSize", new LogInterval(1,30,30))); //
1084 //tuneParameters.insert(std::pair<TString,Interval*>("NodePurityLimit",new Interval(.4,.6,3))); // stepsize .1
1085 //tuneParameters.insert(std::pair<TString,Interval*>("BaggedSampleFraction",new Interval(.4,.9,6))); // stepsize .1
1086
1087 // method-specific parameters
1088 if (fBoostType=="AdaBoost"){
1089 tuneParameters.insert(std::pair<TString,Interval*>("AdaBoostBeta", new Interval(.2,1.,5)));
1090
1091 }else if (fBoostType=="Grad"){
1092 tuneParameters.insert(std::pair<TString,Interval*>("Shrinkage", new Interval(0.05,0.50,5)));
1093
1094 }else if (fBoostType=="Bagging" && fRandomisedTrees){
1095 Int_t min_var = TMath::FloorNint( GetNvar() * .25 );
1096 Int_t max_var = TMath::CeilNint( GetNvar() * .75 );
1097 tuneParameters.insert(std::pair<TString,Interval*>("UseNvars", new Interval(min_var,max_var,4)));
1098
1099 }
1100
1101 Log()<<kINFO << " the following BDT parameters will be tuned on the respective *grid*\n"<<Endl;
1102 std::map<TString,TMVA::Interval*>::iterator it;
1103 for(it=tuneParameters.begin(); it!= tuneParameters.end(); ++it){
1104 Log() << kWARNING << it->first << Endl;
1105 std::ostringstream oss;
1106 (it->second)->Print(oss);
1107 Log()<<oss.str();
1108 Log()<<Endl;
1109 }
1110
1111 OptimizeConfigParameters optimize(this, tuneParameters, fomType, fitType);
1112 tunedParameters=optimize.optimize();
1113
1114 return tunedParameters;
1115
1116}
1117
1118////////////////////////////////////////////////////////////////////////////////
1119/// Set the tuning parameters according to the argument.
1120
1121void TMVA::MethodBDT::SetTuneParameters(std::map<TString,Double_t> tuneParameters)
1122{
1123 std::map<TString,Double_t>::iterator it;
1124 for(it=tuneParameters.begin(); it!= tuneParameters.end(); ++it){
1125 Log() << kWARNING << it->first << " = " << it->second << Endl;
1126 if (it->first == "MaxDepth" ) SetMaxDepth ((Int_t)it->second);
1127 else if (it->first == "MinNodeSize" ) SetMinNodeSize (it->second);
1128 else if (it->first == "NTrees" ) SetNTrees ((Int_t)it->second);
1129 else if (it->first == "NodePurityLimit") SetNodePurityLimit (it->second);
1130 else if (it->first == "AdaBoostBeta" ) SetAdaBoostBeta (it->second);
1131 else if (it->first == "Shrinkage" ) SetShrinkage (it->second);
1132 else if (it->first == "UseNvars" ) SetUseNvars ((Int_t)it->second);
1133 else if (it->first == "BaggedSampleFraction" ) SetBaggedSampleFraction (it->second);
1134 else Log() << kFATAL << " SetParameter for " << it->first << " not yet implemented " <<Endl;
1135 }
1136}
1137
1138////////////////////////////////////////////////////////////////////////////////
1139/// BDT training.
1140
1141
1142void TMVA::MethodBDT::Train()
1143{
1145
1146 // fill the STL Vector with the event sample
1147 // (needs to be done here and cannot be done in "init" as the options need to be
1148 // known).
1149 InitEventSample();
1150
1151 if (fNTrees==0){
1152 Log() << kERROR << " Zero Decision Trees demanded... that does not work !! "
1153 << " I set it to 1 .. just so that the program does not crash"
1154 << Endl;
1155 fNTrees = 1;
1156 }
1157
1158 if (fInteractive && fInteractive->NotInitialized()){
1159 std::vector<TString> titles = {"Boost weight", "Error Fraction"};
1160 fInteractive->Init(titles);
1161 }
1162 fIPyMaxIter = fNTrees;
1163 fExitFromTraining = false;
1164
1165 // HHV (it's been here since looong but I really don't know why we cannot handle
1166 // normalized variables in BDTs... todo
1167 if (IsNormalised()) Log() << kFATAL << "\"Normalise\" option cannot be used with BDT; "
1168 << "please remove the option from the configuration string, or "
1169 << "use \"!Normalise\""
1170 << Endl;
1171
1172 if(DoRegression())
1173 Log() << kINFO << "Regression Loss Function: "<< fRegressionLossFunctionBDTG->Name() << Endl;
1174
1175 Log() << kINFO << "Training "<< fNTrees << " Decision Trees ... patience please" << Endl;
1176
1177 Log() << kDEBUG << "Training with maximal depth = " <<fMaxDepth
1178 << ", MinNodeEvents=" << fMinNodeEvents
1179 << ", NTrees="<<fNTrees
1180 << ", NodePurityLimit="<<fNodePurityLimit
1181 << ", AdaBoostBeta="<<fAdaBoostBeta
1182 << Endl;
1183
1184 // weights applied in boosting
1185 Int_t nBins;
1186 Double_t xMin,xMax;
1187 TString hname = "AdaBoost weight distribution";
1188
1189 nBins= 100;
1190 xMin = 0;
1191 xMax = 30;
1192
1193 if (DoRegression()) {
1194 nBins= 100;
1195 xMin = 0;
1196 xMax = 1;
1197 hname="Boost event weights distribution";
1198 }
1199
1200 // book monitoring histograms (for AdaBoost only)
1201
1202 TH1* h = new TH1F(TString::Format("%s_BoostWeight",DataInfo().GetName()).Data(),hname,nBins,xMin,xMax);
1203 TH1* nodesBeforePruningVsTree = new TH1I(TString::Format("%s_NodesBeforePruning",DataInfo().GetName()).Data(),"nodes before pruning",fNTrees,0,fNTrees);
1204 TH1* nodesAfterPruningVsTree = new TH1I(TString::Format("%s_NodesAfterPruning",DataInfo().GetName()).Data(),"nodes after pruning",fNTrees,0,fNTrees);
1205
1206
1207 if(!DoMulticlass()){
1208 Results* results = Data()->GetResults(GetMethodName(), Types::kTraining, GetAnalysisType());
1209
1210 h->SetXTitle("boost weight");
1211 results->Store(h, "BoostWeights");
1212
1213
1214 // Monitor the performance (on TEST sample) versus number of trees
1215 if (fDoBoostMonitor){
1216 TH2* boostMonitor = new TH2F("BoostMonitor","ROC Integral Vs iTree",2,0,fNTrees,2,0,1.05);
1217 boostMonitor->SetXTitle("#tree");
1218 boostMonitor->SetYTitle("ROC Integral");
1219 results->Store(boostMonitor, "BoostMonitor");
1220 TGraph *boostMonitorGraph = new TGraph();
1221 boostMonitorGraph->SetName("BoostMonitorGraph");
1222 boostMonitorGraph->SetTitle("ROCIntegralVsNTrees");
1223 results->Store(boostMonitorGraph, "BoostMonitorGraph");
1224 }
1225
1226 // weights applied in boosting vs tree number
1227 h = new TH1F("BoostWeightVsTree","Boost weights vs tree",fNTrees,0,fNTrees);
1228 h->SetXTitle("#tree");
1229 h->SetYTitle("boost weight");
1230 results->Store(h, "BoostWeightsVsTree");
1231
1232 // error fraction vs tree number
1233 h = new TH1F("ErrFractHist","error fraction vs tree number",fNTrees,0,fNTrees);
1234 h->SetXTitle("#tree");
1235 h->SetYTitle("error fraction");
1236 results->Store(h, "ErrorFrac");
1237
1238 // nNodesBeforePruning vs tree number
1239 nodesBeforePruningVsTree->SetXTitle("#tree");
1240 nodesBeforePruningVsTree->SetYTitle("#tree nodes");
1241 results->Store(nodesBeforePruningVsTree);
1242
1243 // nNodesAfterPruning vs tree number
1244 nodesAfterPruningVsTree->SetXTitle("#tree");
1245 nodesAfterPruningVsTree->SetYTitle("#tree nodes");
1246 results->Store(nodesAfterPruningVsTree);
1247
1248 }
1249
1250 fMonitorNtuple= new TTree("MonitorNtuple","BDT variables");
1251 fMonitorNtuple->Branch("iTree",&fITree,"iTree/I");
1252 fMonitorNtuple->Branch("boostWeight",&fBoostWeight,"boostWeight/D");
1253 fMonitorNtuple->Branch("errorFraction",&fErrorFraction,"errorFraction/D");
1254
1255 Timer timer( fNTrees, GetName() );
1256 Int_t nNodesBeforePruningCount = 0;
1257 Int_t nNodesAfterPruningCount = 0;
1258
1259 Int_t nNodesBeforePruning = 0;
1260 Int_t nNodesAfterPruning = 0;
1261
1262 if(fBoostType=="Grad"){
1263 InitGradBoost(fEventSample);
1264 }
1265
1266 Int_t itree=0;
1267 Bool_t continueBoost=kTRUE;
1268 //for (int itree=0; itree<fNTrees; itree++) {
1269
1270 while (itree < fNTrees && continueBoost){
1271 if (fExitFromTraining) break;
1272 fIPyCurrentIter = itree;
1273 timer.DrawProgressBar( itree );
1274 // Results* results = Data()->GetResults(GetMethodName(), Types::kTraining, GetAnalysisType());
1275 // TH1 *hxx = new TH1F(TString::Format("swdist%d",itree),TString::Format("swdist%d",itree),10000,0,15);
1276 // results->Store(hxx,TString::Format("swdist%d",itree));
1277 // TH1 *hxy = new TH1F(TString::Format("bwdist%d",itree),TString::Format("bwdist%d",itree),10000,0,15);
1278 // results->Store(hxy,TString::Format("bwdist%d",itree));
1279 // for (Int_t iev=0; iev<fEventSample.size(); iev++) {
1280 // if (fEventSample[iev]->GetClass()!=0) hxy->Fill((fEventSample[iev])->GetWeight());
1281 // else hxx->Fill((fEventSample[iev])->GetWeight());
1282 // }
1283
1284 if(DoMulticlass()){
1285 if (fBoostType!="Grad"){
1286 Log() << kFATAL << "Multiclass is currently only supported by gradient boost. "
1287 << "Please change boost option accordingly (BoostType=Grad)." << Endl;
1288 }
1289
1290 UInt_t nClasses = DataInfo().GetNClasses();
1291 for (UInt_t i=0;i<nClasses;i++){
1292 // Careful: If fSepType is nullptr, the tree will be considered a regression tree and
1293 // use the correct output for gradboost (response rather than yesnoleaf) in checkEvent.
1294 // See TMVA::MethodBDT::InitGradBoost.
1295 fForest.push_back( new DecisionTree( fSepType, fMinNodeSize, fNCuts, &(DataInfo()), i,
1296 fRandomisedTrees, fUseNvars, fUsePoissonNvars, fMaxDepth,
1297 itree*nClasses+i, fNodePurityLimit, itree*nClasses+1));
1298 fForest.back()->SetNVars(GetNvar());
1299 if (fUseFisherCuts) {
1300 fForest.back()->SetUseFisherCuts();
1301 fForest.back()->SetMinLinCorrForFisher(fMinLinCorrForFisher);
1302 fForest.back()->SetUseExclusiveVars(fUseExclusiveVars);
1303 }
1304 // the minimum linear correlation between two variables demanded for use in fisher criterion in node splitting
1305
1306 nNodesBeforePruning = fForest.back()->BuildTree(*fTrainSample);
1307 Double_t bw = this->Boost(*fTrainSample, fForest.back(),i);
1308 if (bw > 0) {
1309 fBoostWeights.push_back(bw);
1310 }else{
1311 fBoostWeights.push_back(0);
1312 Log() << kWARNING << "stopped boosting at itree="<<itree << Endl;
1313 // fNTrees = itree+1; // that should stop the boosting
1314 continueBoost=kFALSE;
1315 }
1316 }
1317 }
1318 else{
1319
1320 DecisionTree* dt = new DecisionTree( fSepType, fMinNodeSize, fNCuts, &(DataInfo()), fSignalClass,
1321 fRandomisedTrees, fUseNvars, fUsePoissonNvars, fMaxDepth,
1322 itree, fNodePurityLimit, itree);
1323
1324 fForest.push_back(dt);
1325 fForest.back()->SetNVars(GetNvar());
1326 if (fUseFisherCuts) {
1327 fForest.back()->SetUseFisherCuts();
1328 fForest.back()->SetMinLinCorrForFisher(fMinLinCorrForFisher);
1329 fForest.back()->SetUseExclusiveVars(fUseExclusiveVars);
1330 }
1331
1332 nNodesBeforePruning = fForest.back()->BuildTree(*fTrainSample);
1333
1334 if (fUseYesNoLeaf && !DoRegression() && fBoostType!="Grad") { // remove leaf nodes where both daughter nodes are of same type
1335 nNodesBeforePruning = fForest.back()->CleanTree();
1336 }
1337
1338 nNodesBeforePruningCount += nNodesBeforePruning;
1339 nodesBeforePruningVsTree->SetBinContent(itree+1,nNodesBeforePruning);
1340
1341 fForest.back()->SetPruneMethod(fPruneMethod); // set the pruning method for the tree
1342 fForest.back()->SetPruneStrength(fPruneStrength); // set the strength parameter
1343
1344 std::vector<const Event*> * validationSample = NULL;
1345 if(fAutomatic) validationSample = &fValidationSample;
1346 Double_t bw = this->Boost(*fTrainSample, fForest.back());
1347 if (bw > 0) {
1348 fBoostWeights.push_back(bw);
1349 }else{
1350 fBoostWeights.push_back(0);
1351 Log() << kWARNING << "stopped boosting at itree="<<itree << Endl;
1352 continueBoost=kFALSE;
1353 }
1354
1355 // if fAutomatic == true, pruneStrength will be the optimal pruning strength
1356 // determined by the pruning algorithm; otherwise, it is simply the strength parameter
1357 // set by the user
1358 if (fPruneMethod != DecisionTree::kNoPruning) fForest.back()->PruneTree(validationSample);
1359
1360 if (fUseYesNoLeaf && !DoRegression() && fBoostType!="Grad"){ // remove leaf nodes where both daughter nodes are of same type
1361 fForest.back()->CleanTree();
1362 }
1363 nNodesAfterPruning = fForest.back()->GetNNodes();
1364 nNodesAfterPruningCount += nNodesAfterPruning;
1365 nodesAfterPruningVsTree->SetBinContent(itree+1,nNodesAfterPruning);
1366
1367 if (fInteractive){
1368 fInteractive->AddPoint(itree, fBoostWeight, fErrorFraction);
1369 }
1370 fITree = itree;
1371 fMonitorNtuple->Fill();
1372 if (fDoBoostMonitor){
1373 if (! DoRegression() ){
1374 if ( itree==fNTrees-1 || (!(itree%500)) ||
1375 (!(itree%250) && itree <1000)||
1376 (!(itree%100) && itree < 500)||
1377 (!(itree%50) && itree < 250)||
1378 (!(itree%25) && itree < 150)||
1379 (!(itree%10) && itree < 50)||
1380 (!(itree%5) && itree < 20)
1381 ) BoostMonitor(itree);
1382 }
1383 }
1384 }
1385 itree++;
1386 }
1387
1388 // get elapsed time
1389 Log() << kDEBUG << "\t<Train> elapsed time: " << timer.GetElapsedTime()
1390 << " " << Endl;
1391 if (fPruneMethod == DecisionTree::kNoPruning) {
1392 Log() << kDEBUG << "\t<Train> average number of nodes (w/o pruning) : "
1393 << nNodesBeforePruningCount/GetNTrees() << Endl;
1394 }
1395 else {
1396 Log() << kDEBUG << "\t<Train> average number of nodes before/after pruning : "
1397 << nNodesBeforePruningCount/GetNTrees() << " / "
1398 << nNodesAfterPruningCount/GetNTrees()
1399 << Endl;
1400 }
1402
1403
1404 // reset all previously stored/accumulated BOOST weights in the event sample
1405 // for (UInt_t iev=0; iev<fEventSample.size(); iev++) fEventSample[iev]->SetBoostWeight(1.);
1406   Log() << kDEBUG << "Now I delete the private data sample"<< Endl;
1407 for (UInt_t i=0; i<fEventSample.size(); i++) delete fEventSample[i];
1408 for (UInt_t i=0; i<fValidationSample.size(); i++) delete fValidationSample[i];
1409 fEventSample.clear();
1410 fValidationSample.clear();
1411
1412 if (!fExitFromTraining) fIPyMaxIter = fIPyCurrentIter;
1413 ExitFromTraining();
1414}
1415
1416
1417////////////////////////////////////////////////////////////////////////////////
1418/// Returns MVA value: -1 for background, 1 for signal.
1419
1420Double_t TMVA::MethodBDT::GetGradBoostMVA(const TMVA::Event* e, UInt_t nTrees)
1421{
1422 Double_t sum=0;
1423 for (UInt_t itree=0; itree<nTrees; itree++) {
1424 //loop over all trees in forest
1425 sum += fForest[itree]->CheckEvent(e,kFALSE);
1426
1427 }
1428 return 2.0/(1.0+exp(-2.0*sum))-1; //MVA output between -1 and 1
1429}
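// Illustrative sketch (editorial note, not part of the original TMVA source):
// the sum of leaf responses computed above is mapped to the interval [-1,1]
// with the scaled sigmoid 2/(1+exp(-2*sum))-1, so a sum of 0 yields an MVA
// value of 0 and large |sum| saturates towards +1 (signal-like) or -1
// (background-like).  For example:
//
//    double sum = 0.5;                                      // hypothetical forest sum
//    double mva = 2.0 / (1.0 + std::exp(-2.0 * sum)) - 1.0; // ~0.46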
1430
1431////////////////////////////////////////////////////////////////////////////////
1432/// Calculate residual for all events.
1433
1434void TMVA::MethodBDT::UpdateTargets(std::vector<const TMVA::Event*>& eventSample, UInt_t cls)
1435{
1436 if (DoMulticlass()) {
1437 UInt_t nClasses = DataInfo().GetNClasses();
1438 Bool_t isLastClass = (cls == nClasses - 1);
1439
1440 #ifdef R__USE_IMT
1441 //
1442 // This is the multi-threaded multiclass version
1443 //
1444 // Note: we only need to update the predicted probabilities every
1445      // `nClasses` trees. Let's call a set of `nClasses` trees a "round". Thus
1446      // the algorithm is split into two parts, `update_residuals` and
1447      // `update_residuals_last`, where the latter is intended to be run instead
1448      // of the former for the last tree in a "round".
1449 //
1450 std::map<const TMVA::Event *, std::vector<double>> & residuals = this->fResiduals;
1451 DecisionTree & lastTree = *(this->fForest.back());
1452
1453 auto update_residuals = [&residuals, &lastTree, cls](const TMVA::Event * e) {
1454 residuals[e].at(cls) += lastTree.CheckEvent(e, kFALSE);
1455 };
1456
1457 auto update_residuals_last = [&residuals, &lastTree, cls, nClasses](const TMVA::Event * e) {
1458 residuals[e].at(cls) += lastTree.CheckEvent(e, kFALSE);
1459
1460 auto &residualsThisEvent = residuals[e];
1461
1462 std::vector<Double_t> expCache(nClasses, 0.0);
1463 std::transform(residualsThisEvent.begin(),
1464 residualsThisEvent.begin() + nClasses,
1465 expCache.begin(), [](Double_t d) { return exp(d); });
1466
1467 Double_t exp_sum = std::accumulate(expCache.begin(),
1468 expCache.begin() + nClasses,
1469 0.0);
1470
1471 for (UInt_t i = 0; i < nClasses; i++) {
1472 Double_t p_cls = expCache[i] / exp_sum;
1473
1474 Double_t res = (e->GetClass() == i) ? (1.0 - p_cls) : (-p_cls);
1475 const_cast<TMVA::Event *>(e)->SetTarget(i, res);
1476 }
1477 };
1478
1479 if (isLastClass) {
1480         TMVA::Config::Instance().GetThreadExecutor()
1481                                 .Foreach(update_residuals_last, eventSample);
1482 } else {
1483         TMVA::Config::Instance().GetThreadExecutor()
1484                                 .Foreach(update_residuals, eventSample);
1485 }
1486 #else
1487 //
1488 // Single-threaded multiclass version
1489 //
1490 std::vector<Double_t> expCache;
1491 if (isLastClass) {
1492 expCache.resize(nClasses);
1493 }
1494
1495 for (auto e : eventSample) {
1496 fResiduals[e].at(cls) += fForest.back()->CheckEvent(e, kFALSE);
1497 if (isLastClass) {
1498 auto &residualsThisEvent = fResiduals[e];
1499 std::transform(residualsThisEvent.begin(),
1500 residualsThisEvent.begin() + nClasses,
1501 expCache.begin(), [](Double_t d) { return exp(d); });
1502
1503 Double_t exp_sum = std::accumulate(expCache.begin(),
1504 expCache.begin() + nClasses,
1505 0.0);
1506
1507 for (UInt_t i = 0; i < nClasses; i++) {
1508 Double_t p_cls = expCache[i] / exp_sum;
1509
1510 Double_t res = (e->GetClass() == i) ? (1.0 - p_cls) : (-p_cls);
1511 const_cast<TMVA::Event *>(e)->SetTarget(i, res);
1512 }
1513 }
1514 }
1515 #endif
1516 } else {
1517 std::map<const TMVA::Event *, std::vector<double>> & residuals = this->fResiduals;
1518 DecisionTree & lastTree = *(this->fForest.back());
1519
1520 UInt_t signalClass = DataInfo().GetSignalClassIndex();
1521
1522 #ifdef R__USE_IMT
1523 auto update_residuals = [&residuals, &lastTree, signalClass](const TMVA::Event * e) {
1524 double & residualAt0 = residuals[e].at(0);
1525 residualAt0 += lastTree.CheckEvent(e, kFALSE);
1526
1527 Double_t p_sig = 1.0 / (1.0 + exp(-2.0 * residualAt0));
1528 Double_t res = ((e->GetClass() == signalClass) ? (1.0 - p_sig) : (-p_sig));
1529
1530 const_cast<TMVA::Event *>(e)->SetTarget(0, res);
1531 };
1532
1533      TMVA::Config::Instance().GetThreadExecutor()
1534                              .Foreach(update_residuals, eventSample);
1535 #else
1536 for (auto e : eventSample) {
1537 double & residualAt0 = residuals[e].at(0);
1538 residualAt0 += lastTree.CheckEvent(e, kFALSE);
1539
1540 Double_t p_sig = 1.0 / (1.0 + exp(-2.0 * residualAt0));
1541 Double_t res = ((e->GetClass() == signalClass) ? (1.0 - p_sig) : (-p_sig));
1542
1543 const_cast<TMVA::Event *>(e)->SetTarget(0, res);
1544 }
1545 #endif
1546 }
1547}
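// Illustrative sketch (editorial note, not part of the original TMVA source):
// the targets set above are the negative gradients of the (multinomial)
// log-likelihood loss.  In the binary case the signal probability is
// p = 1/(1+exp(-2F)), with F the accumulated forest response, and the new
// target is (1-p) for a true signal event and (-p) for a true background event:
//
//    double F = 0.3;                              // hypothetical accumulated response
//    double p = 1.0 / (1.0 + std::exp(-2.0 * F)); // ~0.65
//    double targetSig = 1.0 - p;                  // ~0.35 for a signal event
//    double targetBkg = -p;                       // ~-0.65 for a background event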
1548
1549////////////////////////////////////////////////////////////////////////////////
1550/// \brief Calculate residuals for all events and update targets for next iter.
1551///
1552/// \param[in] eventSample The collection of events currently under training.
1553/// \param[in] first Should be true when called before the first boosting
1554/// iteration has been run
1555///
1556void TMVA::MethodBDT::UpdateTargetsRegression(std::vector<const TMVA::Event*>& eventSample, Bool_t first)
1557{
1558 if (!first) {
1559#ifdef R__USE_IMT
1560      UInt_t nPartitions = TMVA::Config::Instance().GetThreadExecutor().GetPoolSize();
1561      auto seeds = ROOT::TSeqU(nPartitions);
1562
1563 // need a lambda function to pass to TThreadExecutor::MapReduce
1564 auto f = [this, &nPartitions](UInt_t partition = 0) -> Int_t {
1565 Int_t start = 1.0 * partition / nPartitions * this->fEventSample.size();
1566 Int_t end = (partition + 1.0) / nPartitions * this->fEventSample.size();
1567
1568 for (Int_t i = start; i < end; ++i) {
1569 const TMVA::Event *e = fEventSample[i];
1570 LossFunctionEventInfo & lossInfo = fLossFunctionEventInfo.at(e);
1571 lossInfo.predictedValue += fForest.back()->CheckEvent(e, kFALSE);
1572 }
1573
1574 return 0;
1575 };
1576
1576
1577      TMVA::Config::Instance().GetThreadExecutor().Map(f, seeds);
1578#else
1579 for (const TMVA::Event *e : fEventSample) {
1580 LossFunctionEventInfo & lossInfo = fLossFunctionEventInfo.at(e);
1581 lossInfo.predictedValue += fForest.back()->CheckEvent(e, kFALSE);
1582 }
1583#endif
1584 }
1585
1586   // NOTE: SetTargets is also parallelised internally
1587 fRegressionLossFunctionBDTG->SetTargets(eventSample, fLossFunctionEventInfo);
1588
1589}
1590
1591////////////////////////////////////////////////////////////////////////////////
1592/// Calculate the desired response value for each region.
1593
1594Double_t TMVA::MethodBDT::GradBoost(std::vector<const TMVA::Event*>& eventSample, DecisionTree *dt, UInt_t cls)
1595{
1596 struct LeafInfo {
1597 Double_t sumWeightTarget = 0;
1598 Double_t sum2 = 0;
1599 };
1600
1601 std::unordered_map<TMVA::DecisionTreeNode*, LeafInfo> leaves;
1602 for (auto e : eventSample) {
1603 Double_t weight = e->GetWeight();
1604      TMVA::DecisionTreeNode* node = dt->GetEventNode(*e);
1605      auto &v = leaves[node];
1606 auto target = e->GetTarget(cls);
1607 v.sumWeightTarget += target * weight;
1608 v.sum2 += fabs(target) * (1.0 - fabs(target)) * weight;
1609 }
1610 for (auto &iLeave : leaves) {
1611 constexpr auto minValue = 1e-30;
1612 if (iLeave.second.sum2 < minValue) {
1613 iLeave.second.sum2 = minValue;
1614 }
1615 const Double_t K = DataInfo().GetNClasses();
1616 iLeave.first->SetResponse(fShrinkage * (K - 1) / K * iLeave.second.sumWeightTarget / iLeave.second.sum2);
1617 }
1618
1619 //call UpdateTargets before next tree is grown
1620
1621 DoMulticlass() ? UpdateTargets(fEventSample, cls) : UpdateTargets(fEventSample);
1622 return 1; //trees all have the same weight
1623}
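// Editorial note (not part of the original TMVA source): the leaf response set
// above corresponds to the one-step Newton estimate used in Friedman's
// multiclass gradient boosting,
//
//    gamma_leaf = shrinkage * (K-1)/K * sum_i(w_i * t_i)
//                                     / sum_i(w_i * |t_i| * (1 - |t_i|)),
//
// where K is the number of classes and t_i are the per-event targets
// (residuals) collected in the leaf; the denominator is clamped to 1e-30 to
// avoid division by zero.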
1624
1625////////////////////////////////////////////////////////////////////////////////
1626/// Implementation of M_TreeBoost using any loss function as described by Friedman 1999.
1627
1628Double_t TMVA::MethodBDT::GradBoostRegression(std::vector<const TMVA::Event*>& eventSample, DecisionTree *dt )
1629{
1630 // get the vector of events for each terminal so that we can calculate the constant fit value in each
1631 // terminal node
1632 // #### Not sure how many events are in each node in advance, so I can't parallelize this easily
1633 std::map<TMVA::DecisionTreeNode*,vector< TMVA::LossFunctionEventInfo > > leaves;
1634 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1635 TMVA::DecisionTreeNode* node = dt->GetEventNode(*(*e));
1636 (leaves[node]).push_back(fLossFunctionEventInfo[*e]);
1637 }
1638
1639 // calculate the constant fit for each terminal node based upon the events in the node
1640 // node (iLeave->first), vector of event information (iLeave->second)
1641 // #### could parallelize this and do the leaves at the same time, but this doesn't take very long compared
1642 // #### to the other processes
1643 for (std::map<TMVA::DecisionTreeNode*,vector< TMVA::LossFunctionEventInfo > >::iterator iLeave=leaves.begin();
1644 iLeave!=leaves.end();++iLeave){
1645 Double_t fit = fRegressionLossFunctionBDTG->Fit(iLeave->second);
1646 (iLeave->first)->SetResponse(fShrinkage*fit);
1647 }
1648
1649 UpdateTargetsRegression(*fTrainSample);
1650
1651 return 1;
1652}
1653
1654////////////////////////////////////////////////////////////////////////////////
1655/// Initialize targets for first tree.
1656
1657void TMVA::MethodBDT::InitGradBoost( std::vector<const TMVA::Event*>& eventSample)
1658{
1659 // Should get rid of this line. It's just for debugging.
1660 //std::sort(eventSample.begin(), eventSample.end(), [](const TMVA::Event* a, const TMVA::Event* b){
1661 // return (a->GetTarget(0) < b->GetTarget(0)); });
1662   fSepType=NULL; //set fSepType to NULL (regression trees are used for both classification and regression)
1663 if(DoRegression()){
1664 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1665 fLossFunctionEventInfo[*e]= TMVA::LossFunctionEventInfo((*e)->GetTarget(0), 0, (*e)->GetWeight());
1666 }
1667
1668 fRegressionLossFunctionBDTG->Init(fLossFunctionEventInfo, fBoostWeights);
1669 UpdateTargetsRegression(*fTrainSample,kTRUE);
1670
1671 return;
1672 }
1673 else if(DoMulticlass()){
1674 UInt_t nClasses = DataInfo().GetNClasses();
1675 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1676 for (UInt_t i=0;i<nClasses;i++){
1677            //Calculate initial residuals, assuming equal probability for all classes
1678 Double_t r = (*e)->GetClass()==i?(1-1.0/nClasses):(-1.0/nClasses);
1679 const_cast<TMVA::Event*>(*e)->SetTarget(i,r);
1680 fResiduals[*e].push_back(0);
1681 }
1682 }
1683 }
1684 else{
1685 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1686         Double_t r = (DataInfo().IsSignal(*e)?1:0)-0.5; //Calculate initial residuals
1687 const_cast<TMVA::Event*>(*e)->SetTarget(0,r);
1688 fResiduals[*e].push_back(0);
1689 }
1690 }
1691
1692}
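// Editorial note (not part of the original TMVA source): with the
// initialisation above, a binary classification starts every signal event at
// target +0.5 and every background event at -0.5, while a problem with K
// classes starts the target of the true class at (1 - 1/K) and that of every
// other class at -1/K (e.g. +2/3 and -1/3 for K = 3).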
1693////////////////////////////////////////////////////////////////////////////////
1694/// Test the tree quality in terms of the misclassification rate.
1695
1696Double_t TMVA::MethodBDT::TestTreeQuality( DecisionTree *dt )
1697{
1698 Double_t ncorrect=0, nfalse=0;
1699 for (UInt_t ievt=0; ievt<fValidationSample.size(); ievt++) {
1700 Bool_t isSignalType= (dt->CheckEvent(fValidationSample[ievt]) > fNodePurityLimit ) ? 1 : 0;
1701
1702 if (isSignalType == (DataInfo().IsSignal(fValidationSample[ievt])) ) {
1703 ncorrect += fValidationSample[ievt]->GetWeight();
1704 }
1705 else{
1706 nfalse += fValidationSample[ievt]->GetWeight();
1707 }
1708 }
1709
1710 return ncorrect / (ncorrect + nfalse);
1711}
1712
1713////////////////////////////////////////////////////////////////////////////////
1714/// Apply the boosting algorithm (the algorithm is selected via the "option" given
1715/// in the constructor). The return value is the boosting weight.
1716
1717Double_t TMVA::MethodBDT::Boost( std::vector<const TMVA::Event*>& eventSample, DecisionTree *dt, UInt_t cls )
1718{
1719 Double_t returnVal=-1;
1720
1721 if (fBoostType=="AdaBoost") returnVal = this->AdaBoost (eventSample, dt);
1722 else if (fBoostType=="AdaCost") returnVal = this->AdaCost (eventSample, dt);
1723 else if (fBoostType=="Bagging") returnVal = this->Bagging ( );
1724 else if (fBoostType=="RegBoost") returnVal = this->RegBoost (eventSample, dt);
1725 else if (fBoostType=="AdaBoostR2") returnVal = this->AdaBoostR2(eventSample, dt);
1726 else if (fBoostType=="Grad"){
1727 if(DoRegression())
1728 returnVal = this->GradBoostRegression(eventSample, dt);
1729 else if(DoMulticlass())
1730 returnVal = this->GradBoost (eventSample, dt, cls);
1731 else
1732 returnVal = this->GradBoost (eventSample, dt);
1733 }
1734 else {
1735 Log() << kINFO << GetOptions() << Endl;
1736 Log() << kFATAL << "<Boost> unknown boost option " << fBoostType<< " called" << Endl;
1737 }
1738
1739 if (fBaggedBoost){
1740 GetBaggedSubSample(fEventSample);
1741 }
1742
1743
1744 return returnVal;
1745}
1746
1747////////////////////////////////////////////////////////////////////////////////
1748/// Fills the ROC integral vs. iTree graph for the monitoring plots
1749/// during the training, using the testing events.
1750
1751void TMVA::MethodBDT::BoostMonitor(Int_t iTree)
1752{
1753 Results* results = Data()->GetResults(GetMethodName(),Types::kTraining, Types::kMaxAnalysisType);
1754
1755 TH1F *tmpS = new TH1F( "tmpS", "", 100 , -1., 1.00001 );
1756 TH1F *tmpB = new TH1F( "tmpB", "", 100 , -1., 1.00001 );
1757 TH1F *tmp;
1758
1759
1760 UInt_t signalClassNr = DataInfo().GetClassInfo("Signal")->GetNumber();
1761
1762 // const std::vector<Event*> events=Data()->GetEventCollection(Types::kTesting);
1763 // // fMethod->GetTransformationHandler().CalcTransformations(fMethod->Data()->GetEventCollection(Types::kTesting));
1764 // for (UInt_t iev=0; iev < events.size() ; iev++){
1765 // if (events[iev]->GetClass() == signalClassNr) tmp=tmpS;
1766 // else tmp=tmpB;
1767 // tmp->Fill(PrivateGetMvaValue(*(events[iev])),events[iev]->GetWeight());
1768 // }
1769
1770 UInt_t nevents = Data()->GetNTestEvents();
1771 for (UInt_t iev=0; iev < nevents; iev++){
1772 const Event* event = GetTestingEvent(iev);
1773
1774 if (event->GetClass() == signalClassNr) {tmp=tmpS;}
1775 else {tmp=tmpB;}
1776 tmp->Fill(PrivateGetMvaValue(event),event->GetWeight());
1777 }
1778 Double_t max=1;
1779
1780 std::vector<TH1F*> hS;
1781 std::vector<TH1F*> hB;
1782 for (UInt_t ivar=0; ivar<GetNvar(); ivar++){
1783 hS.push_back(new TH1F(TString::Format("SigVar%dAtTree%d",ivar,iTree).Data(),TString::Format("SigVar%dAtTree%d",ivar,iTree).Data(),100,DataInfo().GetVariableInfo(ivar).GetMin(),DataInfo().GetVariableInfo(ivar).GetMax()));
1784 hB.push_back(new TH1F(TString::Format("BkgVar%dAtTree%d",ivar,iTree).Data(),TString::Format("BkgVar%dAtTree%d",ivar,iTree).Data(),100,DataInfo().GetVariableInfo(ivar).GetMin(),DataInfo().GetVariableInfo(ivar).GetMax()));
1785 results->Store(hS.back(),hS.back()->GetTitle());
1786 results->Store(hB.back(),hB.back()->GetTitle());
1787 }
1788
1789
1790 for (UInt_t iev=0; iev < fEventSample.size(); iev++){
1791 if (fEventSample[iev]->GetBoostWeight() > max) max = 1.01*fEventSample[iev]->GetBoostWeight();
1792 }
1793 TH1F *tmpBoostWeightsS = new TH1F(TString::Format("BoostWeightsInTreeS%d",iTree).Data(),TString::Format("BoostWeightsInTreeS%d",iTree).Data(),100,0.,max);
1794 TH1F *tmpBoostWeightsB = new TH1F(TString::Format("BoostWeightsInTreeB%d",iTree).Data(),TString::Format("BoostWeightsInTreeB%d",iTree).Data(),100,0.,max);
1795 results->Store(tmpBoostWeightsS,tmpBoostWeightsS->GetTitle());
1796 results->Store(tmpBoostWeightsB,tmpBoostWeightsB->GetTitle());
1797
1798 TH1F *tmpBoostWeights;
1799 std::vector<TH1F*> *h;
1800
1801 for (UInt_t iev=0; iev < fEventSample.size(); iev++){
1802 if (fEventSample[iev]->GetClass() == signalClassNr) {
1803 tmpBoostWeights=tmpBoostWeightsS;
1804 h=&hS;
1805 }else{
1806 tmpBoostWeights=tmpBoostWeightsB;
1807 h=&hB;
1808 }
1809 tmpBoostWeights->Fill(fEventSample[iev]->GetBoostWeight());
1810 for (UInt_t ivar=0; ivar<GetNvar(); ivar++){
1811 (*h)[ivar]->Fill(fEventSample[iev]->GetValue(ivar),fEventSample[iev]->GetWeight());
1812 }
1813 }
1814
1815
1816 TMVA::PDF *sig = new TMVA::PDF( " PDF Sig", tmpS, TMVA::PDF::kSpline3 );
1817 TMVA::PDF *bkg = new TMVA::PDF( " PDF Bkg", tmpB, TMVA::PDF::kSpline3 );
1818
1819
1820 TGraph* gr=results->GetGraph("BoostMonitorGraph");
1821 Int_t nPoints = gr->GetN();
1822 gr->Set(nPoints+1);
1823 gr->SetPoint(nPoints,(Double_t)iTree+1,GetROCIntegral(sig,bkg));
1824
1825 tmpS->Delete();
1826 tmpB->Delete();
1827
1828 delete sig;
1829 delete bkg;
1830
1831 return;
1832}
1833
1834////////////////////////////////////////////////////////////////////////////////
1835/// The AdaBoost implementation.
1836/// A new training sample is generated by re-weighting
1837/// events that are misclassified by the decision tree. The weight
1838/// applied is \f$ w = \frac{(1-err)}{err} \f$, or more generally
1839/// \f$ w = (\frac{(1-err)}{err})^\beta \f$,
1840/// where \f$err\f$ is the fraction of misclassified events in the tree (\f$< 0.5\f$, assuming
1841/// that the previous selection was better than random guessing)
1842/// and \f$\beta\f$ is a free parameter (default: \f$\beta = 1\f$) that modifies the
1843/// boosting.
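///
/// For illustration (editorial example, not part of the original documentation):
/// with the default \f$\beta = 1\f$ and an error fraction \f$err = 0.2\f$, each
/// misclassified event is re-weighted by \f$ w = (1-0.2)/0.2 = 4 \f$, and the
/// tree enters the final vote with a boost weight of \f$ \ln 4 \approx 1.39 \f$.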
1844
1845Double_t TMVA::MethodBDT::AdaBoost( std::vector<const TMVA::Event*>& eventSample, DecisionTree *dt )
1846{
1847 Double_t err=0, sumGlobalw=0, sumGlobalwfalse=0, sumGlobalwfalse2=0;
1848
1849 std::vector<Double_t> sumw(DataInfo().GetNClasses(),0); //for individually re-scaling each class
1850
1851 Double_t maxDev=0;
1852 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1853 Double_t w = (*e)->GetWeight();
1854 sumGlobalw += w;
1855 UInt_t iclass=(*e)->GetClass();
1856 sumw[iclass] += w;
1857
1858 if ( DoRegression() ) {
1859 Double_t tmpDev = TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) );
1860 sumGlobalwfalse += w * tmpDev;
1861 sumGlobalwfalse2 += w * tmpDev*tmpDev;
1862 if (tmpDev > maxDev) maxDev = tmpDev;
1863 }else{
1864
1865 if (fUseYesNoLeaf){
1866 Bool_t isSignalType = (dt->CheckEvent(*e,fUseYesNoLeaf) > fNodePurityLimit );
1867 if (!(isSignalType == DataInfo().IsSignal(*e))) {
1868 sumGlobalwfalse+= w;
1869 }
1870 }else{
1871 Double_t dtoutput = (dt->CheckEvent(*e,fUseYesNoLeaf) - 0.5)*2.;
1872 Int_t trueType;
1873 if (DataInfo().IsSignal(*e)) trueType = 1;
1874 else trueType = -1;
1875 sumGlobalwfalse+= w*trueType*dtoutput;
1876 }
1877 }
1878 }
1879
1880 err = sumGlobalwfalse/sumGlobalw ;
1881 if ( DoRegression() ) {
1882 //if quadratic loss:
1883 if (fAdaBoostR2Loss=="linear"){
1884 err = sumGlobalwfalse/maxDev/sumGlobalw ;
1885 }
1886 else if (fAdaBoostR2Loss=="quadratic"){
1887 err = sumGlobalwfalse2/maxDev/maxDev/sumGlobalw ;
1888 }
1889 else if (fAdaBoostR2Loss=="exponential"){
1890 err = 0;
1891 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1892 Double_t w = (*e)->GetWeight();
1893 Double_t tmpDev = TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) );
1894 err += w * (1 - exp (-tmpDev/maxDev)) / sumGlobalw;
1895 }
1896
1897 }
1898 else {
1899 Log() << kFATAL << " you've chosen a Loss type for Adaboost other than linear, quadratic or exponential "
1900 << " namely " << fAdaBoostR2Loss << "\n"
1901 << "and this is not implemented... a typo in the options ??" <<Endl;
1902 }
1903 }
1904
1905   Log() << kDEBUG << "BDT AdaBoost wrong/all: " << sumGlobalwfalse << "/" << sumGlobalw << Endl;
1906
1907
1908 Double_t newSumGlobalw=0;
1909 std::vector<Double_t> newSumw(sumw.size(),0);
1910
1911 Double_t boostWeight=1.;
1912 if (err >= 0.5 && fUseYesNoLeaf) { // sanity check ... should never happen as otherwise there is apparently
1913 // something odd with the assignment of the leaf nodes (rem: you use the training
1914 // events for this determination of the error rate)
1915 if (dt->GetNNodes() == 1){
1916 Log() << kERROR << " YOUR tree has only 1 Node... kind of a funny *tree*. I cannot "
1917 << "boost such a thing... if after 1 step the error rate is == 0.5"
1918 << Endl
1919 << "please check why this happens, maybe too many events per node requested ?"
1920 << Endl;
1921
1922 }else{
1923 Log() << kERROR << " The error rate in the BDT boosting is > 0.5. ("<< err
1924 << ") That should not happen, please check your code (i.e... the BDT code), I "
1925 << " stop boosting here" << Endl;
1926 return -1;
1927 }
1928 err = 0.5;
1929 } else if (err < 0) {
1930 Log() << kERROR << " The error rate in the BDT boosting is < 0. That can happen"
1931 << " due to improper treatment of negative weights in a Monte Carlo.. (if you have"
1932 << " an idea on how to do it in a better way, please let me know (Helge.Voss@cern.ch)"
1933 << " for the time being I set it to its absolute value.. just to continue.." << Endl;
1934 err = TMath::Abs(err);
1935 }
1936 if (fUseYesNoLeaf)
1937 boostWeight = TMath::Log((1.-err)/err)*fAdaBoostBeta;
1938 else
1939 boostWeight = TMath::Log((1.+err)/(1-err))*fAdaBoostBeta;
1940
1941
1942   Log() << kDEBUG << "BDT AdaBoost wrong/all: " << sumGlobalwfalse << "/" << sumGlobalw << " 1-err/err="<<boostWeight<< " log.."<<TMath::Log(boostWeight)<<Endl;
1943
1944 Results* results = Data()->GetResults(GetMethodName(),Types::kTraining, Types::kMaxAnalysisType);
1945
1946
1947 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1948
1949 if (fUseYesNoLeaf||DoRegression()){
1950 if ((!( (dt->CheckEvent(*e,fUseYesNoLeaf) > fNodePurityLimit ) == DataInfo().IsSignal(*e))) || DoRegression()) {
1951 Double_t boostfactor = TMath::Exp(boostWeight);
1952
1953 if (DoRegression()) boostfactor = TMath::Power(1/boostWeight,(1.-TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) )/maxDev ) );
1954 if ( (*e)->GetWeight() > 0 ){
1955 (*e)->SetBoostWeight( (*e)->GetBoostWeight() * boostfactor);
1956 // Helge change back (*e)->ScaleBoostWeight(boostfactor);
1957 if (DoRegression()) results->GetHist("BoostWeights")->Fill(boostfactor);
1958 } else {
1959               if ( fInverseBoostNegWeights )(*e)->ScaleBoostWeight( 1. / boostfactor); // if the original event weight is negative, and you want to "increase" the event's "positive" influence, you'd rather make the event weight "smaller" in terms of its absolute value while still keeping it "negative"
1960 else (*e)->SetBoostWeight( (*e)->GetBoostWeight() * boostfactor);
1961
1962 }
1963 }
1964
1965 }else{
1966 Double_t dtoutput = (dt->CheckEvent(*e,fUseYesNoLeaf) - 0.5)*2.;
1967 Int_t trueType;
1968 if (DataInfo().IsSignal(*e)) trueType = 1;
1969 else trueType = -1;
1970 Double_t boostfactor = TMath::Exp(-1*boostWeight*trueType*dtoutput);
1971
1972 if ( (*e)->GetWeight() > 0 ){
1973 (*e)->SetBoostWeight( (*e)->GetBoostWeight() * boostfactor);
1974 // Helge change back (*e)->ScaleBoostWeight(boostfactor);
1975 if (DoRegression()) results->GetHist("BoostWeights")->Fill(boostfactor);
1976 } else {
1977            if ( fInverseBoostNegWeights )(*e)->ScaleBoostWeight( 1. / boostfactor); // if the original event weight is negative, and you want to "increase" the event's "positive" influence, you'd rather make the event weight "smaller" in terms of its absolute value while still keeping it "negative"
1978 else (*e)->SetBoostWeight( (*e)->GetBoostWeight() * boostfactor);
1979 }
1980 }
1981 newSumGlobalw+=(*e)->GetWeight();
1982 newSumw[(*e)->GetClass()] += (*e)->GetWeight();
1983 }
1984
1985
1986 // Double_t globalNormWeight=sumGlobalw/newSumGlobalw;
1987 Double_t globalNormWeight=( (Double_t) eventSample.size())/newSumGlobalw;
1988 Log() << kDEBUG << "new Nsig="<<newSumw[0]*globalNormWeight << " new Nbkg="<<newSumw[1]*globalNormWeight << Endl;
1989
1990
1991 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
1992 // if (fRenormByClass) (*e)->ScaleBoostWeight( normWeightByClass[(*e)->GetClass()] );
1993 // else (*e)->ScaleBoostWeight( globalNormWeight );
1994 // else (*e)->ScaleBoostWeight( globalNormWeight );
1995 if (DataInfo().IsSignal(*e))(*e)->ScaleBoostWeight( globalNormWeight * fSigToBkgFraction );
1996 else (*e)->ScaleBoostWeight( globalNormWeight );
1997 }
1998
1999 if (!(DoRegression()))results->GetHist("BoostWeights")->Fill(boostWeight);
2000 results->GetHist("BoostWeightsVsTree")->SetBinContent(fForest.size(),boostWeight);
2001 results->GetHist("ErrorFrac")->SetBinContent(fForest.size(),err);
2002
2003 fBoostWeight = boostWeight;
2004 fErrorFraction = err;
2005
2006 return boostWeight;
2007}
2008
2009////////////////////////////////////////////////////////////////////////////////
2010/// The AdaCost boosting algorithm takes a simple cost matrix (currently fixed for
2011/// all events; it could later be modified to use individual cost matrices for each
2012/// event, as in the original paper):
2013///
2014///                  true_signal   true_bkg
2015///     ----------------------------------
2016///     sel_signal |    Css          Ctb_ss     Cxx.. in the range [0,1]
2017///     sel_bkg    |    Cts_sb       Cbb
2018///
2019/// and takes this into account when calculating the misclassification cost (formerly: error fraction):
2020///
2021///     err = sum_events ( weight * y_true * y_sel * beta(event) )
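///
/// For illustration (editorial example, not part of the original documentation):
/// with Css = Cbb = 1 and Cts_sb > Ctb_ss, a true signal event selected as
/// background is penalised more heavily than a misclassified background event,
/// so its weight grows faster in the re-weighting below and subsequent trees
/// focus on recovering signal efficiency.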
2022
2023Double_t TMVA::MethodBDT::AdaCost( vector<const TMVA::Event*>& eventSample, DecisionTree *dt )
2024{
2025 Double_t Css = fCss;
2026 Double_t Cbb = fCbb;
2027 Double_t Cts_sb = fCts_sb;
2028 Double_t Ctb_ss = fCtb_ss;
2029
2030 Double_t err=0, sumGlobalWeights=0, sumGlobalCost=0;
2031
2032 std::vector<Double_t> sumw(DataInfo().GetNClasses(),0); //for individually re-scaling each class
2033
2034 for (vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2035 Double_t w = (*e)->GetWeight();
2036 sumGlobalWeights += w;
2037 UInt_t iclass=(*e)->GetClass();
2038
2039 sumw[iclass] += w;
2040
2041 if ( DoRegression() ) {
2042 Log() << kFATAL << " AdaCost not implemented for regression"<<Endl;
2043 }else{
2044
2045 Double_t dtoutput = (dt->CheckEvent(*e,false) - 0.5)*2.;
2046 Int_t trueType;
2047 Bool_t isTrueSignal = DataInfo().IsSignal(*e);
2048 Bool_t isSelectedSignal = (dtoutput>0);
2049 if (isTrueSignal) trueType = 1;
2050 else trueType = -1;
2051
2052 Double_t cost=0;
2053 if (isTrueSignal && isSelectedSignal) cost=Css;
2054 else if (isTrueSignal && !isSelectedSignal) cost=Cts_sb;
2055 else if (!isTrueSignal && isSelectedSignal) cost=Ctb_ss;
2056 else if (!isTrueSignal && !isSelectedSignal) cost=Cbb;
2057 else Log() << kERROR << "something went wrong in AdaCost" << Endl;
2058
2059 sumGlobalCost+= w*trueType*dtoutput*cost;
2060
2061 }
2062 }
2063
2064 if ( DoRegression() ) {
2065 Log() << kFATAL << " AdaCost not implemented for regression"<<Endl;
2066 }
2067
2068 // Log() << kDEBUG << "BDT AdaBoos wrong/all: " << sumGlobalCost << "/" << sumGlobalWeights << Endl;
2069 // Log() << kWARNING << "BDT AdaBoos wrong/all: " << sumGlobalCost << "/" << sumGlobalWeights << Endl;
2070 sumGlobalCost /= sumGlobalWeights;
2071 // Log() << kWARNING << "BDT AdaBoos wrong/all: " << sumGlobalCost << "/" << sumGlobalWeights << Endl;
2072
2073
2074 Double_t newSumGlobalWeights=0;
2075 vector<Double_t> newSumClassWeights(sumw.size(),0);
2076
2077 Double_t boostWeight = TMath::Log((1+sumGlobalCost)/(1-sumGlobalCost)) * fAdaBoostBeta;
2078
2079 Results* results = Data()->GetResults(GetMethodName(),Types::kTraining, Types::kMaxAnalysisType);
2080
2081 for (vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2082 Double_t dtoutput = (dt->CheckEvent(*e,false) - 0.5)*2.;
2083 Int_t trueType;
2084 Bool_t isTrueSignal = DataInfo().IsSignal(*e);
2085 Bool_t isSelectedSignal = (dtoutput>0);
2086 if (isTrueSignal) trueType = 1;
2087 else trueType = -1;
2088
2089 Double_t cost=0;
2090 if (isTrueSignal && isSelectedSignal) cost=Css;
2091 else if (isTrueSignal && !isSelectedSignal) cost=Cts_sb;
2092 else if (!isTrueSignal && isSelectedSignal) cost=Ctb_ss;
2093 else if (!isTrueSignal && !isSelectedSignal) cost=Cbb;
2094 else Log() << kERROR << "something went wrong in AdaCost" << Endl;
2095
2096 Double_t boostfactor = TMath::Exp(-1*boostWeight*trueType*dtoutput*cost);
2097 if (DoRegression())Log() << kFATAL << " AdaCost not implemented for regression"<<Endl;
2098 if ( (*e)->GetWeight() > 0 ){
2099 (*e)->SetBoostWeight( (*e)->GetBoostWeight() * boostfactor);
2100 // Helge change back (*e)->ScaleBoostWeight(boostfactor);
2101 if (DoRegression())Log() << kFATAL << " AdaCost not implemented for regression"<<Endl;
2102 } else {
2103         if ( fInverseBoostNegWeights )(*e)->ScaleBoostWeight( 1. / boostfactor); // if the original event weight is negative, and you want to "increase" the event's "positive" influence, you'd rather make the event weight "smaller" in terms of its absolute value while still keeping it "negative"
2104 }
2105
2106 newSumGlobalWeights+=(*e)->GetWeight();
2107 newSumClassWeights[(*e)->GetClass()] += (*e)->GetWeight();
2108 }
2109
2110
2111 // Double_t globalNormWeight=sumGlobalWeights/newSumGlobalWeights;
2112 Double_t globalNormWeight=Double_t(eventSample.size())/newSumGlobalWeights;
2113 Log() << kDEBUG << "new Nsig="<<newSumClassWeights[0]*globalNormWeight << " new Nbkg="<<newSumClassWeights[1]*globalNormWeight << Endl;
2114
2115
2116 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2117 // if (fRenormByClass) (*e)->ScaleBoostWeight( normWeightByClass[(*e)->GetClass()] );
2118 // else (*e)->ScaleBoostWeight( globalNormWeight );
2119 if (DataInfo().IsSignal(*e))(*e)->ScaleBoostWeight( globalNormWeight * fSigToBkgFraction );
2120 else (*e)->ScaleBoostWeight( globalNormWeight );
2121 }
2122
2123
2124 if (!(DoRegression()))results->GetHist("BoostWeights")->Fill(boostWeight);
2125 results->GetHist("BoostWeightsVsTree")->SetBinContent(fForest.size(),boostWeight);
2126 results->GetHist("ErrorFrac")->SetBinContent(fForest.size(),err);
2127
2128 fBoostWeight = boostWeight;
2129 fErrorFraction = err;
2130
2131
2132 return boostWeight;
2133}
2134
2135////////////////////////////////////////////////////////////////////////////////
2136/// Call it bootstrapping, re-sampling or whatever you like; in the end it is nothing
2137/// else but applying "random" Poisson weights to each event.
2138
2139Double_t TMVA::MethodBDT::Bagging( )
2140{
2141 // this is now done in "MethodBDT::Boost as it might be used by other boost methods, too
2142 // GetBaggedSample(eventSample);
2143
2144 return 1.; //here as there are random weights for each event, just return a constant==1;
2145}
2146
2147////////////////////////////////////////////////////////////////////////////////
2148/// Fills fSubSample with fBaggedSampleFraction*NEvents random training events.
2149
2150void TMVA::MethodBDT::GetBaggedSubSample(std::vector<const TMVA::Event*>& eventSample)
2151{
2152
2153 Double_t n;
2154 TRandom3 *trandom = new TRandom3(100*fForest.size()+1234);
2155
2156 if (!fSubSample.empty()) fSubSample.clear();
2157
2158 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2159 n = trandom->PoissonD(fBaggedSampleFraction);
2160 for (Int_t i=0;i<n;i++) fSubSample.push_back(*e);
2161 }
2162
2163 delete trandom;
2164 return;
2165
2166 /*
2167 UInt_t nevents = fEventSample.size();
2168
2169 if (!fSubSample.empty()) fSubSample.clear();
2170 TRandom3 *trandom = new TRandom3(fForest.size()+1);
2171
2172 for (UInt_t ievt=0; ievt<nevents; ievt++) { // recreate new random subsample
2173 if(trandom->Rndm()<fBaggedSampleFraction)
2174 fSubSample.push_back(fEventSample[ievt]);
2175 }
2176 delete trandom;
2177 */
2178
2179}
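// Illustrative sketch (editorial note, not part of the original TMVA source):
// each event enters the bagged sub-sample n times, with n drawn from a Poisson
// distribution of mean fBaggedSampleFraction, so the sub-sample contains on
// average fBaggedSampleFraction * N entries and individual events may appear
// several times or not at all:
//
//    TRandom3 rng(1234);                  // hypothetical seed
//    Double_t n = rng.PoissonD(0.6);      // e.g. BaggedSampleFraction = 0.6
//    // push the event back n (truncated to an integer) times, as in the loop above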
2180
2181////////////////////////////////////////////////////////////////////////////////
2182/// A special boosting only for Regression (not implemented).
2183
2184Double_t TMVA::MethodBDT::RegBoost( std::vector<const TMVA::Event*>& /* eventSample */, DecisionTree* /* dt */ )
2185{
2186 return 1;
2187}
2188
2189////////////////////////////////////////////////////////////////////////////////
2190/// Adaptation of the AdaBoost to regression problems (see H. Drucker 1997).
2191
2192Double_t TMVA::MethodBDT::AdaBoostR2( std::vector<const TMVA::Event*>& eventSample, DecisionTree *dt )
2193{
2194 if ( !DoRegression() ) Log() << kFATAL << "Somehow you chose a regression boost method for a classification job" << Endl;
2195
2196 Double_t err=0, sumw=0, sumwfalse=0, sumwfalse2=0;
2197 Double_t maxDev=0;
2198 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2199 Double_t w = (*e)->GetWeight();
2200 sumw += w;
2201
2202 Double_t tmpDev = TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) );
2203 sumwfalse += w * tmpDev;
2204 sumwfalse2 += w * tmpDev*tmpDev;
2205 if (tmpDev > maxDev) maxDev = tmpDev;
2206 }
2207
2208 //if quadratic loss:
2209 if (fAdaBoostR2Loss=="linear"){
2210 err = sumwfalse/maxDev/sumw ;
2211 }
2212 else if (fAdaBoostR2Loss=="quadratic"){
2213 err = sumwfalse2/maxDev/maxDev/sumw ;
2214 }
2215 else if (fAdaBoostR2Loss=="exponential"){
2216 err = 0;
2217 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2218 Double_t w = (*e)->GetWeight();
2219 Double_t tmpDev = TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) );
2220 err += w * (1 - exp (-tmpDev/maxDev)) / sumw;
2221 }
2222
2223 }
2224 else {
2225 Log() << kFATAL << " you've chosen a Loss type for Adaboost other than linear, quadratic or exponential "
2226 << " namely " << fAdaBoostR2Loss << "\n"
2227 << "and this is not implemented... a typo in the options ??" <<Endl;
2228 }
2229
2230
2231 if (err >= 0.5) { // sanity check ... should never happen as otherwise there is apparently
2232 // something odd with the assignment of the leaf nodes (rem: you use the training
2233 // events for this determination of the error rate)
2234 if (dt->GetNNodes() == 1){
2235 Log() << kERROR << " YOUR tree has only 1 Node... kind of a funny *tree*. I cannot "
2236 << "boost such a thing... if after 1 step the error rate is == 0.5"
2237 << Endl
2238 << "please check why this happens, maybe too many events per node requested ?"
2239 << Endl;
2240
2241 }else{
2242 Log() << kERROR << " The error rate in the BDT boosting is > 0.5. ("<< err
2243 << ") That should not happen, but is possible for regression trees, and"
2244 << " should trigger a stop for the boosting. please check your code (i.e... the BDT code), I "
2245 << " stop boosting " << Endl;
2246 return -1;
2247 }
2248 err = 0.5;
2249 } else if (err < 0) {
2250 Log() << kERROR << " The error rate in the BDT boosting is < 0. That can happen"
2251 << " due to improper treatment of negative weights in a Monte Carlo.. (if you have"
2252 << " an idea on how to do it in a better way, please let me know (Helge.Voss@cern.ch)"
2253 << " for the time being I set it to its absolute value.. just to continue.." << Endl;
2254 err = TMath::Abs(err);
2255 }
2256
2257 Double_t boostWeight = err / (1.-err);
2258 Double_t newSumw=0;
2259
2260 Results* results = Data()->GetResults(GetMethodName(), Types::kTraining, Types::kMaxAnalysisType);
2261
2262 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2263 Double_t boostfactor = TMath::Power(boostWeight,(1.-TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) )/maxDev ) );
2264 results->GetHist("BoostWeights")->Fill(boostfactor);
2265 // std::cout << "R2 " << boostfactor << " " << boostWeight << " " << (1.-TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) )/maxDev) << std::endl;
2266 if ( (*e)->GetWeight() > 0 ){
2267 Float_t newBoostWeight = (*e)->GetBoostWeight() * boostfactor;
2268 Float_t newWeight = (*e)->GetWeight() * (*e)->GetBoostWeight() * boostfactor;
2269 if (newWeight == 0) {
2270 Log() << kINFO << "Weight= " << (*e)->GetWeight() << Endl;
2271 Log() << kINFO << "BoostWeight= " << (*e)->GetBoostWeight() << Endl;
2272 Log() << kINFO << "boostweight="<<boostWeight << " err= " <<err << Endl;
2273 Log() << kINFO << "NewBoostWeight= " << newBoostWeight << Endl;
2274 Log() << kINFO << "boostfactor= " << boostfactor << Endl;
2275 Log() << kINFO << "maxDev = " << maxDev << Endl;
2276 Log() << kINFO << "tmpDev = " << TMath::Abs(dt->CheckEvent(*e,kFALSE) - (*e)->GetTarget(0) ) << Endl;
2277 Log() << kINFO << "target = " << (*e)->GetTarget(0) << Endl;
2278 Log() << kINFO << "estimate = " << dt->CheckEvent(*e,kFALSE) << Endl;
2279 }
2280 (*e)->SetBoostWeight( newBoostWeight );
2281 // (*e)->SetBoostWeight( (*e)->GetBoostWeight() * boostfactor);
2282 } else {
2283 (*e)->SetBoostWeight( (*e)->GetBoostWeight() / boostfactor);
2284 }
2285 newSumw+=(*e)->GetWeight();
2286 }
2287
2288 // re-normalise the weights
2289 Double_t normWeight = sumw / newSumw;
2290 for (std::vector<const TMVA::Event*>::const_iterator e=eventSample.begin(); e!=eventSample.end();++e) {
2291 //Helge (*e)->ScaleBoostWeight( sumw/newSumw);
2292 // (*e)->ScaleBoostWeight( normWeight);
2293 (*e)->SetBoostWeight( (*e)->GetBoostWeight() * normWeight );
2294 }
2295
2296
2297 results->GetHist("BoostWeightsVsTree")->SetBinContent(fForest.size(),1./boostWeight);
2298 results->GetHist("ErrorFrac")->SetBinContent(fForest.size(),err);
2299
2300 fBoostWeight = boostWeight;
2301 fErrorFraction = err;
2302
2303 return TMath::Log(1./boostWeight);
2304}
2305
2306////////////////////////////////////////////////////////////////////////////////
2307/// Write weights to XML.
2308
2309void TMVA::MethodBDT::AddWeightsXMLTo( void* parent ) const
2310{
2311 void* wght = gTools().AddChild(parent, "Weights");
2312
2313 if (fDoPreselection){
2314 for (UInt_t ivar=0; ivar<GetNvar(); ivar++){
2315 gTools().AddAttr( wght, TString::Format("PreselectionLowBkgVar%d",ivar).Data(), fIsLowBkgCut[ivar]);
2316 gTools().AddAttr( wght, TString::Format("PreselectionLowBkgVar%dValue",ivar).Data(), fLowBkgCut[ivar]);
2317 gTools().AddAttr( wght, TString::Format("PreselectionLowSigVar%d",ivar).Data(), fIsLowSigCut[ivar]);
2318 gTools().AddAttr( wght, TString::Format("PreselectionLowSigVar%dValue",ivar).Data(), fLowSigCut[ivar]);
2319 gTools().AddAttr( wght, TString::Format("PreselectionHighBkgVar%d",ivar).Data(), fIsHighBkgCut[ivar]);
2320 gTools().AddAttr( wght, TString::Format("PreselectionHighBkgVar%dValue",ivar).Data(),fHighBkgCut[ivar]);
2321 gTools().AddAttr( wght, TString::Format("PreselectionHighSigVar%d",ivar).Data(), fIsHighSigCut[ivar]);
2322 gTools().AddAttr( wght, TString::Format("PreselectionHighSigVar%dValue",ivar).Data(),fHighSigCut[ivar]);
2323 }
2324 }
2325
2326
2327 gTools().AddAttr( wght, "NTrees", fForest.size() );
2328 gTools().AddAttr( wght, "AnalysisType", fForest.back()->GetAnalysisType() );
2329
2330 for (UInt_t i=0; i< fForest.size(); i++) {
2331 void* trxml = fForest[i]->AddXMLTo(wght);
2332 gTools().AddAttr( trxml, "boostWeight", fBoostWeights[i] );
2333 gTools().AddAttr( trxml, "itree", i );
2334 }
2335}
2336
2337////////////////////////////////////////////////////////////////////////////////
2338/// Reads the BDT from the xml file.
2339
2340void TMVA::MethodBDT::ReadWeightsFromXML(void* parent) {
2341   UInt_t i;
2342 for (i=0; i<fForest.size(); i++) delete fForest[i];
2343 fForest.clear();
2344 fBoostWeights.clear();
2345
2346 UInt_t ntrees;
2347 UInt_t analysisType;
2348 Float_t boostWeight;
2349
2350
2351 if (gTools().HasAttr( parent, TString::Format("PreselectionLowBkgVar%d",0).Data())) {
2352 fIsLowBkgCut.resize(GetNvar());
2353 fLowBkgCut.resize(GetNvar());
2354 fIsLowSigCut.resize(GetNvar());
2355 fLowSigCut.resize(GetNvar());
2356 fIsHighBkgCut.resize(GetNvar());
2357 fHighBkgCut.resize(GetNvar());
2358 fIsHighSigCut.resize(GetNvar());
2359 fHighSigCut.resize(GetNvar());
2360
2361 Bool_t tmpBool;
2362 Double_t tmpDouble;
2363 for (UInt_t ivar=0; ivar<GetNvar(); ivar++){
2364 gTools().ReadAttr( parent, TString::Format("PreselectionLowBkgVar%d",ivar).Data(), tmpBool);
2365 fIsLowBkgCut[ivar]=tmpBool;
2366 gTools().ReadAttr( parent, TString::Format("PreselectionLowBkgVar%dValue",ivar).Data(), tmpDouble);
2367 fLowBkgCut[ivar]=tmpDouble;
2368 gTools().ReadAttr( parent, TString::Format("PreselectionLowSigVar%d",ivar).Data(), tmpBool);
2369 fIsLowSigCut[ivar]=tmpBool;
2370 gTools().ReadAttr( parent, TString::Format("PreselectionLowSigVar%dValue",ivar).Data(), tmpDouble);
2371 fLowSigCut[ivar]=tmpDouble;
2372 gTools().ReadAttr( parent, TString::Format("PreselectionHighBkgVar%d",ivar).Data(), tmpBool);
2373 fIsHighBkgCut[ivar]=tmpBool;
2374 gTools().ReadAttr( parent, TString::Format("PreselectionHighBkgVar%dValue",ivar).Data(), tmpDouble);
2375 fHighBkgCut[ivar]=tmpDouble;
2376 gTools().ReadAttr( parent, TString::Format("PreselectionHighSigVar%d",ivar).Data(),tmpBool);
2377 fIsHighSigCut[ivar]=tmpBool;
2378 gTools().ReadAttr( parent, TString::Format("PreselectionHighSigVar%dValue",ivar).Data(), tmpDouble);
2379 fHighSigCut[ivar]=tmpDouble;
2380 }
2381 }
2382
2383
2384 gTools().ReadAttr( parent, "NTrees", ntrees );
2385
2386 if(gTools().HasAttr(parent, "TreeType")) { // pre 4.1.0 version
2387 gTools().ReadAttr( parent, "TreeType", analysisType );
2388 } else { // from 4.1.0 onwards
2389 gTools().ReadAttr( parent, "AnalysisType", analysisType );
2390 }
2391
2392 void* ch = gTools().GetChild(parent);
2393 i=0;
2394 while(ch) {
2395 fForest.push_back( dynamic_cast<DecisionTree*>( DecisionTree::CreateFromXML(ch, GetTrainingTMVAVersionCode()) ) );
2396 fForest.back()->SetAnalysisType(Types::EAnalysisType(analysisType));
2397 fForest.back()->SetTreeID(i++);
2398 gTools().ReadAttr(ch,"boostWeight",boostWeight);
2399 fBoostWeights.push_back(boostWeight);
2400 ch = gTools().GetNextChild(ch);
2401 }
2402}
2403
2404////////////////////////////////////////////////////////////////////////////////
2405/// Read the weights (BDT coefficients).
2406
2407void TMVA::MethodBDT::ReadWeightsFromStream( std::istream& istr )
2408{
2409 TString dummy;
2410 // Types::EAnalysisType analysisType;
2411 Int_t analysisType(0);
2412
2413 // coverity[tainted_data_argument]
2414 istr >> dummy >> fNTrees;
2415 Log() << kINFO << "Read " << fNTrees << " Decision trees" << Endl;
2416
2417 for (UInt_t i=0;i<fForest.size();i++) delete fForest[i];
2418 fForest.clear();
2419 fBoostWeights.clear();
2420 Int_t iTree;
2421 Double_t boostWeight;
2422 for (int i=0;i<fNTrees;i++) {
2423 istr >> dummy >> iTree >> dummy >> boostWeight;
2424 if (iTree != i) {
2425 fForest.back()->Print( std::cout );
2426 Log() << kFATAL << "Error while reading weight file; mismatch iTree="
2427 << iTree << " i=" << i
2428 << " dummy " << dummy
2429 << " boostweight " << boostWeight
2430 << Endl;
2431 }
2432 fForest.push_back( new DecisionTree() );
2433 fForest.back()->SetAnalysisType(Types::EAnalysisType(analysisType));
2434 fForest.back()->SetTreeID(i);
2435 fForest.back()->Read(istr, GetTrainingTMVAVersionCode());
2436 fBoostWeights.push_back(boostWeight);
2437 }
2438}
2439
2440////////////////////////////////////////////////////////////////////////////////
2441
2442Double_t TMVA::MethodBDT::GetMvaValue( Double_t* err, Double_t* errUpper ){
2443   return this->GetMvaValue( err, errUpper, 0 );
2444}
2445
2446////////////////////////////////////////////////////////////////////////////////
2447/// Return the MVA value (range [-1;1]) that classifies the
2448/// event according to the majority vote from the total number of
2449/// decision trees.
2450
2451Double_t TMVA::MethodBDT::GetMvaValue( Double_t* err, Double_t* errUpper, UInt_t useNTrees )
2452{
2453 const Event* ev = GetEvent();
2454 if (fDoPreselection) {
2455 Double_t val = ApplyPreselectionCuts(ev);
2456 if (TMath::Abs(val)>0.05) return val;
2457 }
2458 return PrivateGetMvaValue(ev, err, errUpper, useNTrees);
2459
2460}
2461
2462////////////////////////////////////////////////////////////////////////////////
2463/// Return the MVA value (range [-1;1]) that classifies the
2464/// event according to the majority vote from the total number of
2465/// decision trees.
2466
2467Double_t TMVA::MethodBDT::PrivateGetMvaValue(const TMVA::Event* ev, Double_t* err, Double_t* errUpper, UInt_t useNTrees )
2468{
2469 // cannot determine error
2470 NoErrorCalc(err, errUpper);
2471
2472 // allow for the possibility to use less trees in the actual MVA calculation
2473 // than have been originally trained.
2474 UInt_t nTrees = fForest.size();
2475
2476 if (useNTrees > 0 ) nTrees = useNTrees;
2477
2478 if (fBoostType=="Grad") return GetGradBoostMVA(ev,nTrees);
2479
2480 Double_t myMVA = 0;
2481 Double_t norm = 0;
2482 for (UInt_t itree=0; itree<nTrees; itree++) {
2483 //
2484 myMVA += fBoostWeights[itree] * fForest[itree]->CheckEvent(ev,fUseYesNoLeaf);
2485 norm += fBoostWeights[itree];
2486 }
2487 return ( norm > std::numeric_limits<double>::epsilon() ) ? myMVA /= norm : 0 ;
2488}
2489
2490
2491////////////////////////////////////////////////////////////////////////////////
2492/// Get the multiclass MVA response for the BDT classifier.
2493
2494const std::vector<Float_t>& TMVA::MethodBDT::GetMulticlassValues()
2495{
2496 const TMVA::Event *e = GetEvent();
2497 if (fMulticlassReturnVal == NULL) fMulticlassReturnVal = new std::vector<Float_t>();
2498 fMulticlassReturnVal->clear();
2499
2500 UInt_t nClasses = DataInfo().GetNClasses();
2501 std::vector<Double_t> temp(nClasses);
2502 auto forestSize = fForest.size();
2503
2504 #ifdef R__USE_IMT
2505 std::vector<TMVA::DecisionTree *> forest = fForest;
2506 auto get_output = [&e, &forest, &temp, forestSize, nClasses](UInt_t iClass) {
2507 for (UInt_t itree = iClass; itree < forestSize; itree += nClasses) {
2508 temp[iClass] += forest[itree]->CheckEvent(e, kFALSE);
2509 }
2510 };
2511
2512   TMVA::Config::Instance().GetThreadExecutor()
2513                           .Foreach(get_output, ROOT::TSeqU(nClasses));
2514 #else
2515 // trees 0, nClasses, 2*nClasses, ... belong to class 0
2516 // trees 1, nClasses+1, 2*nClasses+1, ... belong to class 1 and so forth
2517 UInt_t classOfTree = 0;
2518 for (UInt_t itree = 0; itree < forestSize; ++itree) {
2519 temp[classOfTree] += fForest[itree]->CheckEvent(e, kFALSE);
2520 if (++classOfTree == nClasses) classOfTree = 0; // cheap modulo
2521 }
2522 #endif
2523
2524 // we want to calculate sum of exp(temp[j] - temp[i]) for all i,j (i!=j)
2525 // first calculate exp(), then replace minus with division.
2526 std::transform(temp.begin(), temp.end(), temp.begin(), [](Double_t d){return exp(d);});
2527
2528 Double_t exp_sum = std::accumulate(temp.begin(), temp.end(), 0.0);
2529
2530 for (UInt_t i = 0; i < nClasses; i++) {
2531 Double_t p_cls = temp[i] / exp_sum;
2532 (*fMulticlassReturnVal).push_back(p_cls);
2533 }
2534
2535 return *fMulticlassReturnVal;
2536}
2537
2538////////////////////////////////////////////////////////////////////////////////
2539/// Get the regression value generated by the BDTs.
2540
2541const std::vector<Float_t> & TMVA::MethodBDT::GetRegressionValues()
2542{
2543
2544 if (fRegressionReturnVal == NULL) fRegressionReturnVal = new std::vector<Float_t>();
2545 fRegressionReturnVal->clear();
2546
2547 const Event * ev = GetEvent();
2548 Event * evT = new Event(*ev);
2549
2550 Double_t myMVA = 0;
2551 Double_t norm = 0;
2552 if (fBoostType=="AdaBoostR2") {
2553      // rather than using the weighted average of the tree responses in the forest,
2554      // H. Drucker (1997) proposed to use the "weighted median"
2555
2556      // sort all individual tree responses according to the prediction value
2557      //   (keep the association to their tree weight)
2558      // then sum up all the associated weights (starting from the one whose tree
2559      //   yielded the smallest response) up to the tree "t" at which you've
2560      //   added enough tree weights to have more than half of the sum of all tree weights.
2561      // choose as response of the forest the one which belongs to this "t"
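      // Illustrative example (editorial note, not part of the original TMVA
      // source): with sorted tree responses {0.2, 0.5, 0.9} and boost weights
      // {1, 3, 1}, the cumulative weights are {1, 4, 5}; half of the total
      // weight (2.5) is first exceeded after the second tree, so the weighted
      // median response is 0.5.  The code below then averages the responses in
      // a small window of trees around this point.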
2562
2563 vector< Double_t > response(fForest.size());
2564 vector< Double_t > weight(fForest.size());
2565 Double_t totalSumOfWeights = 0;
2566
2567 for (UInt_t itree=0; itree<fForest.size(); itree++) {
2568 response[itree] = fForest[itree]->CheckEvent(ev,kFALSE);
2569 weight[itree] = fBoostWeights[itree];
2570 totalSumOfWeights += fBoostWeights[itree];
2571 }
2572
2573 std::vector< std::vector<Double_t> > vtemp;
2574 vtemp.push_back( response ); // this is the vector that will get sorted
2575 vtemp.push_back( weight );
2576 gTools().UsefulSortAscending( vtemp );
2577
2578 Int_t t=0;
2579 Double_t sumOfWeights = 0;
2580 while (sumOfWeights <= totalSumOfWeights/2.) {
2581 sumOfWeights += vtemp[1][t];
2582 t++;
2583 }
2584
2585 Double_t rVal=0;
2586 Int_t count=0;
2587 for (UInt_t i= TMath::Max(UInt_t(0),UInt_t(t-(fForest.size()/6)-0.5));
2588 i< TMath::Min(UInt_t(fForest.size()),UInt_t(t+(fForest.size()/6)+0.5)); i++) {
2589 count++;
2590 rVal+=vtemp[0][i];
2591 }
2592 // fRegressionReturnVal->push_back( rVal/Double_t(count));
2593 evT->SetTarget(0, rVal/Double_t(count) );
2594 }
2595 else if(fBoostType=="Grad"){
2596 for (UInt_t itree=0; itree<fForest.size(); itree++) {
2597 myMVA += fForest[itree]->CheckEvent(ev,kFALSE);
2598 }
2599 // fRegressionReturnVal->push_back( myMVA+fBoostWeights[0]);
2600 evT->SetTarget(0, myMVA+fBoostWeights[0] );
2601 }
2602 else{
2603 for (UInt_t itree=0; itree<fForest.size(); itree++) {
2604 //
2605 myMVA += fBoostWeights[itree] * fForest[itree]->CheckEvent(ev,kFALSE);
2606 norm += fBoostWeights[itree];
2607 }
2608 // fRegressionReturnVal->push_back( ( norm > std::numeric_limits<double>::epsilon() ) ? myMVA /= norm : 0 );
2609 evT->SetTarget(0, ( norm > std::numeric_limits<double>::epsilon() ) ? myMVA /= norm : 0 );
2610 }
2611
2612
2613
2614 const Event* evT2 = GetTransformationHandler().InverseTransform( evT );
2615 fRegressionReturnVal->push_back( evT2->GetTarget(0) );
2616
2617 delete evT;
2618
2619
2620 return *fRegressionReturnVal;
2621}
2622
2623////////////////////////////////////////////////////////////////////////////////
2624/// Here we could write some histograms created during the processing
2625/// to the output file.
2626
2627void TMVA::MethodBDT::WriteMonitoringHistosToFile( void ) const
2628{
2629 Log() << kDEBUG << "\tWrite monitoring histograms to file: " << BaseDir()->GetPath() << Endl;
2630
2631 //Results* results = Data()->GetResults(GetMethodName(), Types::kTraining, Types::kMaxAnalysisType);
2632 //results->GetStorage()->Write();
2633 fMonitorNtuple->Write();
2634}
2635
2636////////////////////////////////////////////////////////////////////////////////
2637/// Return the relative variable importance, normalized to all
2638/// variables together having the importance 1. The importance is
2639/// evaluated as the total separation-gain that this variable had in
2640/// the decision trees (weighted by the number of events).
2641
2642std::vector<Double_t> TMVA::MethodBDT::GetVariableImportance()
2643{
2644 fVariableImportance.resize(GetNvar());
2645 for (UInt_t ivar = 0; ivar < GetNvar(); ivar++) {
2646 fVariableImportance[ivar]=0;
2647 }
2648 Double_t sum=0;
2649 for (UInt_t itree = 0; itree < GetNTrees(); itree++) {
2650 std::vector<Double_t> relativeImportance(fForest[itree]->GetVariableImportance());
2651 for (UInt_t i=0; i< relativeImportance.size(); i++) {
2652 fVariableImportance[i] += fBoostWeights[itree] * relativeImportance[i];
2653 }
2654 }
2655
2656 for (UInt_t ivar=0; ivar< fVariableImportance.size(); ivar++){
2657 fVariableImportance[ivar] = TMath::Sqrt(fVariableImportance[ivar]);
2658 sum += fVariableImportance[ivar];
2659 }
2660 for (UInt_t ivar=0; ivar< fVariableImportance.size(); ivar++) fVariableImportance[ivar] /= sum;
2661
2662 return fVariableImportance;
2663}
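// Editorial note (not part of the original TMVA source): each tree contributes
// its per-variable separation gain scaled by the tree's boost weight; the
// square root of each summed contribution is then taken and the result is
// normalised so that the importances of all input variables add up to 1.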
2664
2665////////////////////////////////////////////////////////////////////////////////
2666/// Returns the measure for the variable importance of variable "ivar"
2667/// which is later used in GetVariableImportance() to calculate the
2668/// relative variable importances.
2669
2670Double_t TMVA::MethodBDT::GetVariableImportance( UInt_t ivar )
2671{
2672 std::vector<Double_t> relativeImportance = this->GetVariableImportance();
2673 if (ivar < (UInt_t)relativeImportance.size()) return relativeImportance[ivar];
2674 else Log() << kFATAL << "<GetVariableImportance> ivar = " << ivar << " is out of range " << Endl;
2675
2676 return -1;
2677}
2678
2679////////////////////////////////////////////////////////////////////////////////
2680/// Compute ranking of input variables
2681
2682const TMVA::Ranking* TMVA::MethodBDT::CreateRanking()
2683{
2684 // create the ranking object
2685 fRanking = new Ranking( GetName(), "Variable Importance" );
2686 vector< Double_t> importance(this->GetVariableImportance());
2687
2688 for (UInt_t ivar=0; ivar<GetNvar(); ivar++) {
2689
2690 fRanking->AddRank( Rank( GetInputLabel(ivar), importance[ivar] ) );
2691 }
2692
2693 return fRanking;
2694}
2695
2696////////////////////////////////////////////////////////////////////////////////
2697/// Get help message text.
2698
2699void TMVA::MethodBDT::GetHelpMessage() const
2700{
2701 Log() << Endl;
2702 Log() << gTools().Color("bold") << "--- Short description:" << gTools().Color("reset") << Endl;
2703 Log() << Endl;
2704 Log() << "Boosted Decision Trees are a collection of individual decision" << Endl;
2705 Log() << "trees which form a multivariate classifier by (weighted) majority " << Endl;
2706 Log() << "vote of the individual trees. Consecutive decision trees are " << Endl;
2707 Log() << "trained using the original training data set with re-weighted " << Endl;
2708 Log() << "events. By default, the AdaBoost method is employed, which gives " << Endl;
2709 Log() << "events that were misclassified in the previous tree a larger " << Endl;
2710 Log() << "weight in the training of the following tree." << Endl;
2711 Log() << Endl;
2712 Log() << "Decision trees are a sequence of binary splits of the data sample" << Endl;
2713 Log() << "using a single discriminant variable at a time. A test event " << Endl;
2714 Log() << "ending up after the sequence of left-right splits in a final " << Endl;
2715 Log() << "(\"leaf\") node is classified as either signal or background" << Endl;
2716 Log() << "depending on the majority type of training events in that node." << Endl;
2717 Log() << Endl;
2718 Log() << gTools().Color("bold") << "--- Performance optimisation:" << gTools().Color("reset") << Endl;
2719 Log() << Endl;
2720 Log() << "By the nature of the binary splits performed on the individual" << Endl;
2721 Log() << "variables, decision trees do not deal well with linear correlations" << Endl;
2722 Log() << "between variables (they need to approximate the linear split in" << Endl;
2723 Log() << "the two dimensional space by a sequence of splits on the two " << Endl;
2724 Log() << "variables individually). Hence decorrelation could be useful " << Endl;
2725 Log() << "to optimise the BDT performance." << Endl;
2726 Log() << Endl;
2727 Log() << gTools().Color("bold") << "--- Performance tuning via configuration options:" << gTools().Color("reset") << Endl;
2728 Log() << Endl;
2729 Log() << "The two most important parameters in the configuration are the " << Endl;
2730 Log() << "minimal number of events requested by a leaf node as percentage of the " <<Endl;
2731 Log() << " number of training events (option \"MinNodeSize\", replacing the actual number " << Endl;
2732 Log() << " of events \"nEventsMin\" given in earlier versions)." << Endl;
2733 Log() << "If this number is too large, detailed features " << Endl;
2734 Log() << "in the parameter space are hard to model. If it is too small, " << Endl;
2735 Log() << "the risk of overtraining rises and boosting becomes less effective;" << Endl;
2736 Log() << " typical values from our current experience for best performance " << Endl;
2737 Log() << " are between 0.5(%) and 10(%). " << Endl;
2738 Log() << Endl;
2739 Log() << "The default minimal number is currently set to " << Endl;
2740 Log() << " max(20, (N_training_events / N_variables^2 / 10)) " << Endl;
2741 Log() << "and can be changed by the user." << Endl;
2742 Log() << Endl;
2743 Log() << "The other crucial parameter, the pruning strength (\"PruneStrength\")," << Endl;
2744 Log() << "is also related to overtraining. It is a regularisation parameter " << Endl;
2745 Log() << "that is used when determining after the training which splits " << Endl;
2746 Log() << "are considered statistically insignificant and are removed. The" << Endl;
2747 Log() << "user is advised to carefully watch the BDT screen output for" << Endl;
2748 Log() << "the comparison between efficiencies obtained on the training and" << Endl;
2749 Log() << "the independent test sample. They should be equal within statistical" << Endl;
2750 Log() << "errors, in order to minimize statistical fluctuations in different samples." << Endl;
2751}
2752
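The default leaf-size rule quoted in the help text above can be illustrated with a short, self-contained calculation; the event and variable counts below are invented for the example.

    #include <algorithm>
    #include <iostream>

    // Worked example of the default minimal leaf size quoted above:
    //    max(20, N_training_events / N_variables^2 / 10)
    int main()
    {
       const double nTrainingEvents = 100000.; // invented example numbers
       const double nVariables      = 10.;
       const double minLeafEvents =
          std::max(20., nTrainingEvents / (nVariables * nVariables) / 10.);
       // 100000 / (10*10) / 10 = 100  ->  max(20, 100) = 100 events per leaf
       std::cout << "default minimal leaf size: " << minLeafEvents << " events" << std::endl;
       return 0;
    }
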
2753////////////////////////////////////////////////////////////////////////////////
2754/// Make ROOT-independent C++ class for classifier response (classifier-specific implementation).
2755
2756void TMVA::MethodBDT::MakeClassSpecific( std::ostream& fout, const TString& className ) const
2757{
2758 TString nodeName = className;
2759 nodeName.ReplaceAll("Read","");
2760 nodeName.Append("Node");
2761 // write BDT-specific classifier response
2762 fout << " std::vector<"<<nodeName<<"*> fForest; // i.e. root nodes of decision trees" << std::endl;
2763 fout << " std::vector<double> fBoostWeights; // the weights applied in the individual boosts" << std::endl;
2764 fout << "};" << std::endl << std::endl;
2765
2766 if(GetAnalysisType() == Types::kMulticlass) {
2767 fout << "std::vector<double> " << className << "::GetMulticlassValues__( const std::vector<double>& inputValues ) const" << std::endl;
2768 fout << "{" << std::endl;
2769 fout << " uint nClasses = " << DataInfo().GetNClasses() << ";" << std::endl;
2770 fout << " std::vector<double> fMulticlassReturnVal;" << std::endl;
2771 fout << " fMulticlassReturnVal.reserve(nClasses);" << std::endl;
2772 fout << std::endl;
2773 fout << " std::vector<double> temp(nClasses);" << std::endl;
2774 fout << " auto forestSize = fForest.size();" << std::endl;
2775 fout << " // trees 0, nClasses, 2*nClasses, ... belong to class 0" << std::endl;
2776 fout << " // trees 1, nClasses+1, 2*nClasses+1, ... belong to class 1 and so forth" << std::endl;
2777 fout << " uint classOfTree = 0;" << std::endl;
2778 fout << " for (uint itree = 0; itree < forestSize; ++itree) {" << std::endl;
2779 fout << " "<<nodeName<<" *current = fForest[itree];" << std::endl;
2780 fout << " while (current->GetNodeType() == 0) { //intermediate node" << std::endl;
2781 fout << " if (current->GoesRight(inputValues)) current=("<<nodeName<<"*)current->GetRight();" << std::endl;
2782 fout << " else current=("<<nodeName<<"*)current->GetLeft();" << std::endl;
2783 fout << " }" << std::endl;
2784 fout << " temp[classOfTree] += current->GetResponse();" << std::endl;
2785 fout << " if (++classOfTree == nClasses) classOfTree = 0; // cheap modulo" << std::endl;
2786 fout << " }" << std::endl;
2787 fout << std::endl;
2788 fout << " // we want to calculate sum of exp(temp[j] - temp[i]) for all i,j (i!=j)" << std::endl;
2789 fout << " // first calculate exp(), then replace minus with division." << std::endl;
2790 fout << " std::transform(temp.begin(), temp.end(), temp.begin(), [](double d){return exp(d);});" << std::endl;
2791 fout << std::endl;
2792 fout << " for(uint iClass=0; iClass<nClasses; iClass++){" << std::endl;
2793 fout << " double norm = 0.0;" << std::endl;
2794 fout << " for(uint j=0;j<nClasses;j++){" << std::endl;
2795 fout << " if(iClass!=j)" << std::endl;
2796 fout << " norm += temp[j] / temp[iClass];" << std::endl;
2797 fout << " }" << std::endl;
2798 fout << " fMulticlassReturnVal.push_back(1.0/(1.0+norm));" << std::endl;
2799 fout << " }" << std::endl;
2800 fout << std::endl;
2801 fout << " return fMulticlassReturnVal;" << std::endl;
2802 fout << "}" << std::endl;
2803 } else {
2804 fout << "double " << className << "::GetMvaValue__( const std::vector<double>& inputValues ) const" << std::endl;
2805 fout << "{" << std::endl;
2806 fout << " double myMVA = 0;" << std::endl;
2807 if (fDoPreselection){
2808 for (UInt_t ivar = 0; ivar< fIsLowBkgCut.size(); ivar++){
2809 if (fIsLowBkgCut[ivar]){
2810 fout << " if (inputValues["<<ivar<<"] < " << fLowBkgCut[ivar] << ") return -1; // is background preselection cut" << std::endl;
2811 }
2812 if (fIsLowSigCut[ivar]){
2813 fout << " if (inputValues["<<ivar<<"] < "<< fLowSigCut[ivar] << ") return 1; // is signal preselection cut" << std::endl;
2814 }
2815 if (fIsHighBkgCut[ivar]){
2816 fout << " if (inputValues["<<ivar<<"] > "<<fHighBkgCut[ivar] <<") return -1; // is background preselection cut" << std::endl;
2817 }
2818 if (fIsHighSigCut[ivar]){
2819 fout << " if (inputValues["<<ivar<<"] > "<<fHighSigCut[ivar]<<") return 1; // is signal preselection cut" << std::endl;
2820 }
2821 }
2822 }
2823
2824 if (fBoostType!="Grad"){
2825 fout << " double norm = 0;" << std::endl;
2826 }
2827 fout << " for (unsigned int itree=0; itree<fForest.size(); itree++){" << std::endl;
2828 fout << " "<<nodeName<<" *current = fForest[itree];" << std::endl;
2829 fout << " while (current->GetNodeType() == 0) { //intermediate node" << std::endl;
2830 fout << " if (current->GoesRight(inputValues)) current=("<<nodeName<<"*)current->GetRight();" << std::endl;
2831 fout << " else current=("<<nodeName<<"*)current->GetLeft();" << std::endl;
2832 fout << " }" << std::endl;
2833 if (fBoostType=="Grad"){
2834 fout << " myMVA += current->GetResponse();" << std::endl;
2835 }else{
2836 if (fUseYesNoLeaf) fout << " myMVA += fBoostWeights[itree] * current->GetNodeType();" << std::endl;
2837 else fout << " myMVA += fBoostWeights[itree] * current->GetPurity();" << std::endl;
2838 fout << " norm += fBoostWeights[itree];" << std::endl;
2839 }
2840 fout << " }" << std::endl;
2841 if (fBoostType=="Grad"){
2842 fout << " return 2.0/(1.0+exp(-2.0*myMVA))-1.0;" << std::endl;
2843 }
2844 else fout << " return myMVA /= norm;" << std::endl;
2845 fout << "}" << std::endl << std::endl;
2846 }
2847
2848 fout << "void " << className << "::Initialize()" << std::endl;
2849 fout << "{" << std::endl;
2850 fout << " double inf = std::numeric_limits<double>::infinity();" << std::endl;
2851 fout << " double nan = std::numeric_limits<double>::quiet_NaN();" << std::endl;
2852 //Now for each decision tree, write directly the constructors of the nodes in the tree structure
2853 for (UInt_t itree=0; itree<GetNTrees(); itree++) {
2854 fout << " // itree = " << itree << std::endl;
2855 fout << " fBoostWeights.push_back(" << fBoostWeights[itree] << ");" << std::endl;
2856 fout << " fForest.push_back( " << std::endl;
2857 this->MakeClassInstantiateNode((DecisionTreeNode*)fForest[itree]->GetRoot(), fout, className);
2858 fout <<" );" << std::endl;
2859 }
2860 fout << " return;" << std::endl;
2861 fout << "};" << std::endl;
2862 fout << std::endl;
2863 fout << "// Clean up" << std::endl;
2864 fout << "inline void " << className << "::Clear() " << std::endl;
2865 fout << "{" << std::endl;
2866 fout << " for (unsigned int itree=0; itree<fForest.size(); itree++) { " << std::endl;
2867 fout << " delete fForest[itree]; " << std::endl;
2868 fout << " }" << std::endl;
2869 fout << "}" << std::endl;
2870 fout << std::endl;
2871}
2872
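A sketch of how the ROOT-independent class written by this method is typically used, assuming the usual interface of TMVA standalone response classes (a constructor taking the input-variable names and GetMvaValue() taking the input values). The file name, the class name ReadBDT, and the variable names are assumptions for the example, not guaranteed by this source.

    #include <string>
    #include <vector>

    #include "ReadBDT.class.C"   // hypothetical file holding the generated "ReadBDT" response class

    double EvaluateStandaloneBDT(const std::vector<double>& inputValues)
    {
       // variable names must match those used during training (invented here)
       std::vector<std::string> inputVars = { "var1", "var2", "var3", "var4" };
       static ReadBDT bdt(inputVars);        // Initialize() rebuilds the forest written out above
       return bdt.GetMvaValue(inputValues);  // walks each tree and combines the per-tree responses
    }
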
2873////////////////////////////////////////////////////////////////////////////////
2874/// Specific class header.
2875
2876void TMVA::MethodBDT::MakeClassSpecificHeader( std::ostream& fout, const TString& className) const
2877{
2878 TString nodeName = className;
2879 nodeName.ReplaceAll("Read","");
2880 nodeName.Append("Node");
2881 fout << "#include <algorithm>" << std::endl;
2882 fout << "#include <limits>" << std::endl;
2883 fout << std::endl;
2884 //fout << "#ifndef NN" << std::endl; commented out on purpose see next line
2885 fout << "#define NN new "<<nodeName << std::endl; // NN definition depends on individual methods. Important to have NO #ifndef if several BDT methods compile together
2886 //fout << "#endif" << std::endl; commented out on purpose see previous line
2887 fout << std::endl;
2888 fout << "#ifndef "<<nodeName<<"__def" << std::endl;
2889 fout << "#define "<<nodeName<<"__def" << std::endl;
2890 fout << std::endl;
2891 fout << "class "<<nodeName<<" {" << std::endl;
2892 fout << std::endl;
2893 fout << "public:" << std::endl;
2894 fout << std::endl;
2895 fout << " // constructor of an essentially \"empty\" node floating in space" << std::endl;
2896 fout << " "<<nodeName<<" ( "<<nodeName<<"* left,"<<nodeName<<"* right," << std::endl;
2897 if (fUseFisherCuts){
2898 fout << " int nFisherCoeff," << std::endl;
2899 for (UInt_t i=0;i<GetNVariables()+1;i++){
2900 fout << " double fisherCoeff"<<i<<"," << std::endl;
2901 }
2902 }
2903 fout << " int selector, double cutValue, bool cutType, " << std::endl;
2904 fout << " int nodeType, double purity, double response ) :" << std::endl;
2905 fout << " fLeft ( left )," << std::endl;
2906 fout << " fRight ( right )," << std::endl;
2907 if (fUseFisherCuts) fout << " fNFisherCoeff ( nFisherCoeff )," << std::endl;
2908 fout << " fSelector ( selector )," << std::endl;
2909 fout << " fCutValue ( cutValue )," << std::endl;
2910 fout << " fCutType ( cutType )," << std::endl;
2911 fout << " fNodeType ( nodeType )," << std::endl;
2912 fout << " fPurity ( purity )," << std::endl;
2913 fout << " fResponse ( response ){" << std::endl;
2914 if (fUseFisherCuts){
2915 for (UInt_t i=0;i<GetNVariables()+1;i++){
2916 fout << " fFisherCoeff.push_back(fisherCoeff"<<i<<");" << std::endl;
2917 }
2918 }
2919 fout << " }" << std::endl << std::endl;
2920 fout << " virtual ~"<<nodeName<<"();" << std::endl << std::endl;
2921 fout << " // test event if it descends the tree at this node to the right" << std::endl;
2922 fout << " virtual bool GoesRight( const std::vector<double>& inputValues ) const;" << std::endl;
2923 fout << " "<<nodeName<<"* GetRight( void ) {return fRight; };" << std::endl << std::endl;
2924 fout << " // test event if it descends the tree at this node to the left " << std::endl;
2925 fout << " virtual bool GoesLeft ( const std::vector<double>& inputValues ) const;" << std::endl;
2926 fout << " "<<nodeName<<"* GetLeft( void ) { return fLeft; }; " << std::endl << std::endl;
2927 fout << " // return S/(S+B) (purity) at this node (from training)" << std::endl << std::endl;
2928 fout << " double GetPurity( void ) const { return fPurity; } " << std::endl;
2929 fout << " // return the node type" << std::endl;
2930 fout << " int GetNodeType( void ) const { return fNodeType; }" << std::endl;
2931 fout << " double GetResponse(void) const {return fResponse;}" << std::endl << std::endl;
2932 fout << "private:" << std::endl << std::endl;
2933 fout << " "<<nodeName<<"* fLeft; // pointer to the left daughter node" << std::endl;
2934 fout << " "<<nodeName<<"* fRight; // pointer to the right daughter node" << std::endl;
2935 if (fUseFisherCuts){
2936 fout << " int fNFisherCoeff; // =0 if this node doesn't use fisher, else =nvar+1 " << std::endl;
2937 fout << " std::vector<double> fFisherCoeff; // the fisher coeff (offset at the last element)" << std::endl;
2938 }
2939 fout << " int fSelector; // index of variable used in node selection (decision tree) " << std::endl;
2940 fout << " double fCutValue; // cut value applied on this node to discriminate bkg against sig" << std::endl;
2941 fout << " bool fCutType; // true: if event variable > cutValue ==> signal , false otherwise" << std::endl;
2942 fout << " int fNodeType; // Type of node: -1 == Bkg-leaf, 1 == Signal-leaf, 0 = internal " << std::endl;
2943 fout << " double fPurity; // Purity of node from training"<< std::endl;
2944 fout << " double fResponse; // Regression response value of node" << std::endl;
2945 fout << "}; " << std::endl;
2946 fout << std::endl;
2947 fout << "//_______________________________________________________________________" << std::endl;
2948 fout << " "<<nodeName<<"::~"<<nodeName<<"()" << std::endl;
2949 fout << "{" << std::endl;
2950 fout << " if (fLeft != NULL) delete fLeft;" << std::endl;
2951 fout << " if (fRight != NULL) delete fRight;" << std::endl;
2952 fout << "}; " << std::endl;
2953 fout << std::endl;
2954 fout << "//_______________________________________________________________________" << std::endl;
2955 fout << "bool "<<nodeName<<"::GoesRight( const std::vector<double>& inputValues ) const" << std::endl;
2956 fout << "{" << std::endl;
2957 fout << " // test event if it descends the tree at this node to the right" << std::endl;
2958 fout << " bool result;" << std::endl;
2959 if (fUseFisherCuts){
2960 fout << " if (fNFisherCoeff == 0){" << std::endl;
2961 fout << " result = (inputValues[fSelector] >= fCutValue );" << std::endl;
2962 fout << " }else{" << std::endl;
2963 fout << " double fisher = fFisherCoeff.at(fFisherCoeff.size()-1);" << std::endl;
2964 fout << " for (unsigned int ivar=0; ivar<fFisherCoeff.size()-1; ivar++)" << std::endl;
2965 fout << " fisher += fFisherCoeff.at(ivar)*inputValues.at(ivar);" << std::endl;
2966 fout << " result = fisher > fCutValue;" << std::endl;
2967 fout << " }" << std::endl;
2968 }else{
2969 fout << " result = (inputValues[fSelector] >= fCutValue );" << std::endl;
2970 }
2971 fout << " if (fCutType == true) return result; //the cuts are selecting Signal ;" << std::endl;
2972 fout << " else return !result;" << std::endl;
2973 fout << "}" << std::endl;
2974 fout << std::endl;
2975 fout << "//_______________________________________________________________________" << std::endl;
2976 fout << "bool "<<nodeName<<"::GoesLeft( const std::vector<double>& inputValues ) const" << std::endl;
2977 fout << "{" << std::endl;
2978 fout << " // test event if it descends the tree at this node to the left" << std::endl;
2979 fout << " if (!this->GoesRight(inputValues)) return true;" << std::endl;
2980 fout << " else return false;" << std::endl;
2981 fout << "}" << std::endl;
2982 fout << std::endl;
2983 fout << "#endif" << std::endl;
2984 fout << std::endl;
2985}
2986
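The split decision emitted into the standalone node class above can be restated in isolation. The following is a small sketch of the same logic without Fisher cuts, with invented parameter names; it is not the generated class itself.

    #include <vector>

    // Same decision as the generated GoesRight(): compare the selected input against the
    // cut value, then let cutType decide which side of the cut counts as "going right".
    bool GoesRightExample(const std::vector<double>& inputValues,
                          int selector, double cutValue, bool cutType)
    {
       const bool aboveCut = (inputValues[selector] >= cutValue);
       return cutType ? aboveCut : !aboveCut;
    }

For example, with selector = 2, cutValue = 0.5 and cutType = true, an event whose third input value is 0.7 descends to the right (signal-like) branch.
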
2987////////////////////////////////////////////////////////////////////////////////
2988/// Recursively descends a tree and writes the node instance to the output stream.
2989
2990void TMVA::MethodBDT::MakeClassInstantiateNode( DecisionTreeNode *n, std::ostream& fout, const TString& className ) const
2991{
2992 if (n == NULL) {
2993 Log() << kFATAL << "MakeClassInstantiateNode: started with undefined node" <<Endl;
2994 return ;
2995 }
2996 fout << "NN("<<std::endl;
2997 if (n->GetLeft() != NULL){
2998 this->MakeClassInstantiateNode( (DecisionTreeNode*)n->GetLeft() , fout, className);
2999 }
3000 else {
3001 fout << "0";
3002 }
3003 fout << ", " <<std::endl;
3004 if (n->GetRight() != NULL){
3005 this->MakeClassInstantiateNode( (DecisionTreeNode*)n->GetRight(), fout, className );
3006 }
3007 else {
3008 fout << "0";
3009 }
3010 fout << ", " << std::endl
3011 << std::setprecision(6);
3012 if (fUseFisherCuts){
3013 fout << n->GetNFisherCoeff() << ", ";
3014 for (UInt_t i=0; i< GetNVariables()+1; i++) {
3015 if (n->GetNFisherCoeff() == 0 ){
3016 fout << "0, ";
3017 }else{
3018 fout << n->GetFisherCoeff(i) << ", ";
3019 }
3020 }
3021 }
3022 fout << n->GetSelector() << ", "
3023 << n->GetCutValue() << ", "
3024 << n->GetCutType() << ", "
3025 << n->GetNodeType() << ", "
3026 << n->GetPurity() << ","
3027 << n->GetResponse() << ") ";
3028}
3029
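For orientation, the text emitted by this recursion for a single decision stump (one split, two leaves, no Fisher cuts) looks roughly as follows; all numbers are invented, and NN is the macro defined in MakeClassSpecificHeader(). The argument order is left, right, selector, cutValue, cutType, nodeType, purity, response.

    // roughly the Initialize() fragment produced for a single stump (numbers invented):
    fBoostWeights.push_back(0.73);
    fForest.push_back(
    NN(
    NN(0, 0, 0, 0, 1, -1, 0.12, -0.45),   // left daughter:  background leaf (nodeType -1)
    NN(0, 0, 0, 0, 1,  1, 0.87,  0.45),   // right daughter: signal leaf     (nodeType +1)
    2, 1.25, 1, 0, 0.5, 0)                // root: split on variable 2 at cut value 1.25
    );
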
3030////////////////////////////////////////////////////////////////////////////////
3031/// Find useful preselection cuts that will be applied before
3032/// the Decision Tree training (and of course also applied
3033/// in GetMvaValue: -1 for background, +1 for signal).
3034
3035void TMVA::MethodBDT::DeterminePreselectionCuts(const std::vector<const TMVA::Event*>& eventSample)
3036{
3037 Double_t nTotS = 0.0, nTotB = 0.0;
3038
3039 std::vector<TMVA::BDTEventWrapper> bdtEventSample;
3040
3041 fIsLowSigCut.assign(GetNvar(),kFALSE);
3042 fIsLowBkgCut.assign(GetNvar(),kFALSE);
3043 fIsHighSigCut.assign(GetNvar(),kFALSE);
3044 fIsHighBkgCut.assign(GetNvar(),kFALSE);
3045
3046 fLowSigCut.assign(GetNvar(),0.); // ---------------| --> in var is signal (accept all above lower cut)
3047 fLowBkgCut.assign(GetNvar(),0.); // ---------------| --> in var is bkg (accept all above lower cut)
3048 fHighSigCut.assign(GetNvar(),0.); // <-- | -------------- in var is signal (accept all below upper cut)
3049 fHighBkgCut.assign(GetNvar(),0.); // <-- | -------------- in var is bkg (accept all below upper cut)
3050
3051
3052 // Initialize (un)weighted counters for signal & background
3053 // Construct a list of event wrappers that point to the original data
3054 for( std::vector<const TMVA::Event*>::const_iterator it = eventSample.begin(); it != eventSample.end(); ++it ) {
3055 if (DataInfo().IsSignal(*it)){
3056 nTotS += (*it)->GetWeight();
3057 }
3058 else {
3059 nTotB += (*it)->GetWeight();
3060 }
3061 bdtEventSample.push_back(TMVA::BDTEventWrapper(*it));
3062 }
3063
3064 for( UInt_t ivar = 0; ivar < GetNvar(); ivar++ ) { // loop over all discriminating variables
3065 TMVA::BDTEventWrapper::SetVarIndex(ivar); // select the variable to sort by
3066 std::sort( bdtEventSample.begin(),bdtEventSample.end() ); // sort the event data
3067
3068 Double_t bkgWeightCtr = 0.0, sigWeightCtr = 0.0;
3069 std::vector<TMVA::BDTEventWrapper>::iterator it = bdtEventSample.begin(), it_end = bdtEventSample.end();
3070 for( ; it != it_end; ++it ) {
3071 if (DataInfo().IsSignal(**it))
3072 sigWeightCtr += (**it)->GetWeight();
3073 else
3074 bkgWeightCtr += (**it)->GetWeight();
3075 // Store the accumulated signal (background) weights
3076 it->SetCumulativeWeight(false,bkgWeightCtr);
3077 it->SetCumulativeWeight(true,sigWeightCtr);
3078 }
3079
3080 // margin that determines how "exactly" the preselection cut found in the training data is placed;
3081 // here 1% of the variable range is used.
3082 Double_t dVal = (DataInfo().GetVariableInfo(ivar).GetMax() - DataInfo().GetVariableInfo(ivar).GetMin())/100. ;
3083 Double_t nSelS, nSelB, effS=0.05, effB=0.05, rejS=0.05, rejB=0.05;
3084 Double_t tmpEffS, tmpEffB, tmpRejS, tmpRejB;
3085 // Locate the optimal cut for this (ivar-th) variable
3086
3087
3088
3089 for(UInt_t iev = 1; iev < bdtEventSample.size(); iev++) {
3090 //dVal = bdtEventSample[iev].GetVal() - bdtEventSample[iev-1].GetVal();
3091
3092 nSelS = bdtEventSample[iev].GetCumulativeWeight(true);
3093 nSelB = bdtEventSample[iev].GetCumulativeWeight(false);
3094 // look for a 100% efficient pre-selection cut that removes background, i.e. nSelS==0 && nSelB > 5% of nTotB (or nSelB==0 && nSelS > 5% of nTotS)
3095 tmpEffS=nSelS/nTotS;
3096 tmpEffB=nSelB/nTotB;
3097 tmpRejS=1-tmpEffS;
3098 tmpRejB=1-tmpEffB;
3099 if (nSelS==0 && tmpEffB>effB) {effB=tmpEffB; fLowBkgCut[ivar] = bdtEventSample[iev].GetVal() - dVal; fIsLowBkgCut[ivar]=kTRUE;}
3100 else if (nSelB==0 && tmpEffS>effS) {effS=tmpEffS; fLowSigCut[ivar] = bdtEventSample[iev].GetVal() - dVal; fIsLowSigCut[ivar]=kTRUE;}
3101 else if (nSelB==nTotB && tmpRejS>rejS) {rejS=tmpRejS; fHighSigCut[ivar] = bdtEventSample[iev].GetVal() + dVal; fIsHighSigCut[ivar]=kTRUE;}
3102 else if (nSelS==nTotS && tmpRejB>rejB) {rejB=tmpRejB; fHighBkgCut[ivar] = bdtEventSample[iev].GetVal() + dVal; fIsHighBkgCut[ivar]=kTRUE;}
3103
3104 }
3105 }
3106
3107 Log() << kDEBUG << " \tfound and suggest the following possible pre-selection cuts " << Endl;
3108 if (fDoPreselection) Log() << kDEBUG << "\tthe training will be done after these cuts... and the GetMVA value returns +1 (-1) for a signal (bkg) event that passes these cuts" << Endl;
3109 else Log() << kDEBUG << "\tas option DoPreselection was not used, these cuts will however not be applied, but the training will see the full sample"<<Endl;
3110 for (UInt_t ivar=0; ivar < GetNvar(); ivar++ ) { // loop over all discriminating variables
3111 if (fIsLowBkgCut[ivar]){
3112 Log() << kDEBUG << " \tfound cut: Bkg if var " << ivar << " < " << fLowBkgCut[ivar] << Endl;
3113 }
3114 if (fIsLowSigCut[ivar]){
3115 Log() << kDEBUG << " \tfound cut: Sig if var " << ivar << " < " << fLowSigCut[ivar] << Endl;
3116 }
3117 if (fIsHighBkgCut[ivar]){
3118 Log() << kDEBUG << " \tfound cut: Bkg if var " << ivar << " > " << fHighBkgCut[ivar] << Endl;
3119 }
3120 if (fIsHighSigCut[ivar]){
3121 Log() << kDEBUG << " \tfound cut: Sig if var " << ivar << " > " << fHighSigCut[ivar] << Endl;
3122 }
3123 }
3124
3125 return;
3126}
3127
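A simplified, self-contained restatement of the low-background-cut criterion applied in the loop above, for a single variable. The struct, names, and the fixed 5% threshold are illustrative only and do not reproduce the exact bookkeeping of the TMVA implementation.

    #include <cstddef>
    #include <vector>

    // One event projected onto the variable under study.
    struct PreselEvent { double value; double weight; bool isSignal; };

    // Find the largest cut below which no signal weight lies but more than 5% of the
    // total background weight does (events must be sorted in ascending value).
    // Returns true if such a cut exists; `cut` then plays the role of fLowBkgCut[ivar].
    bool FindLowBkgCut(const std::vector<PreselEvent>& sortedEvents,
                       double totalBkgWeight, double margin, double& cut)
    {
       double sigBelow = 0., bkgBelow = 0.;
       bool   found    = false;
       for (std::size_t i = 1; i < sortedEvents.size(); ++i) {
          const PreselEvent& prev = sortedEvents[i - 1];
          (prev.isSignal ? sigBelow : bkgBelow) += prev.weight;
          if (sigBelow == 0. && bkgBelow / totalBkgWeight > 0.05) {
             cut   = sortedEvents[i].value - margin; // analogous to "GetVal() - dVal" above
             found = true;                           // later (larger) qualifying cuts overwrite earlier ones
          }
       }
       return found;
    }
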
3128////////////////////////////////////////////////////////////////////////////////
3129/// Apply the preselection cuts before even bothering about any
3130/// Decision Trees in GetMvaValue: -1 for background, +1 for signal.
3131
3133{
3134 Double_t result=0;
3135
3136 for (UInt_t ivar=0; ivar < GetNvar(); ivar++ ) { // loop over all discriminating variables
3137 if (fIsLowBkgCut[ivar]){
3138 if (ev->GetValue(ivar) < fLowBkgCut[ivar]) result = -1; // is background
3139 }
3140 if (fIsLowSigCut[ivar]){
3141 if (ev->GetValue(ivar) < fLowSigCut[ivar]) result = 1; // is signal
3142 }
3143 if (fIsHighBkgCut[ivar]){
3144 if (ev->GetValue(ivar) > fHighBkgCut[ivar]) result = -1; // is background
3145 }
3146 if (fIsHighSigCut[ivar]){
3147 if (ev->GetValue(ivar) > fHighSigCut[ivar]) result = 1; // is signal
3148 }
3149 }
3150
3151 return result;
3152}
3153