TMVA_Higgs_Classification.C File Reference

Detailed Description

Classification example of TMVA based on the public Higgs UCI dataset.

The UCI data set is the public HIGGS data set (see http://archive.ics.uci.edu/ml/datasets/HIGGS) used in this paper: Baldi, P., P. Sadowski, and D. Whiteson. “Searching for Exotic Particles in High-energy Physics with Deep Learning.” Nature Communications 5 (July 2, 2014).

******************************************************************************
*Tree :sig_tree : tree *
*Entries : 10000 : Total = 1177229 bytes File Size = 785298 *
* : : Tree compression factor = 1.48 *
******************************************************************************
*Br 0 :Type : Type/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 307 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 130.54 *
*............................................................................*
*Br 1 :lepton_pT : lepton_pT/F *
*Entries : 10000 : Total Size= 40581 bytes File Size = 30464 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 2 :lepton_eta : lepton_eta/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 28650 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 3 :lepton_phi : lepton_phi/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 30508 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 4 :missing_energy_magnitude : missing_energy_magnitude/F *
*Entries : 10000 : Total Size= 40656 bytes File Size = 35749 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.12 *
*............................................................................*
*Br 5 :missing_energy_phi : missing_energy_phi/F *
*Entries : 10000 : Total Size= 40626 bytes File Size = 36766 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.09 *
*............................................................................*
*Br 6 :jet1_pt : jet1_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 32298 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 7 :jet1_eta : jet1_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28467 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.41 *
*............................................................................*
*Br 8 :jet1_phi : jet1_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30399 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 9 :jet1_b-tag : jet1_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5087 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.88 *
*............................................................................*
*Br 10 :jet2_pt : jet2_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 31561 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.27 *
*............................................................................*
*Br 11 :jet2_eta : jet2_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28616 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 12 :jet2_phi : jet2_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30547 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 13 :jet2_b-tag : jet2_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5031 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.97 *
*............................................................................*
*Br 14 :jet3_pt : jet3_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 30642 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 15 :jet3_eta : jet3_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28955 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.38 *
*............................................................................*
*Br 16 :jet3_phi : jet3_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30433 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 17 :jet3_b-tag : jet3_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4879 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.22 *
*............................................................................*
*Br 18 :jet4_pt : jet4_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 29189 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 19 :jet4_eta : jet4_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 29311 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 20 :jet4_phi : jet4_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30525 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 21 :jet4_b-tag : jet4_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4725 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.48 *
*............................................................................*
*Br 22 :m_jj : m_jj/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 34991 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.15 *
*............................................................................*
*Br 23 :m_jjj : m_jjj/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34460 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 24 :m_lv : m_lv/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 32232 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 25 :m_jlv : m_jlv/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34598 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 26 :m_bb : m_bb/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 35012 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.14 *
*............................................................................*
*Br 27 :m_wbb : m_wbb/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34493 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 28 :m_wwbb : m_wwbb/F *
*Entries : 10000 : Total Size= 40566 bytes File Size = 34410 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 10000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 10000 events
Factory : Booking method: Likelihood
:
Factory : Booking method: Fisher
:
Factory : Booking method: BDT
:
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 7000
: Signal -- testing events : 3000
: Signal -- training and testing events: 10000
: Background -- training events : 7000
: Background -- testing events : 3000
: Background -- training and testing events: 10000
:
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.774 -0.004 +0.096 +0.024 +0.512 +0.533
: m_jjj: +0.774 +1.000 -0.010 +0.073 +0.152 +0.674 +0.668
: m_lv: -0.004 -0.010 +1.000 +0.121 -0.027 +0.009 +0.021
: m_jlv: +0.096 +0.073 +0.121 +1.000 +0.313 +0.544 +0.552
: m_bb: +0.024 +0.152 -0.027 +0.313 +1.000 +0.445 +0.333
: m_wbb: +0.512 +0.674 +0.009 +0.544 +0.445 +1.000 +0.915
: m_wwbb: +0.533 +0.668 +0.021 +0.552 +0.333 +0.915 +1.000
: ----------------------------------------------------------------
DataSetInfo : Correlation matrix (Background):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.808 +0.022 +0.150 +0.028 +0.407 +0.415
: m_jjj: +0.808 +1.000 +0.041 +0.206 +0.177 +0.569 +0.547
: m_lv: +0.022 +0.041 +1.000 +0.139 +0.037 +0.081 +0.085
: m_jlv: +0.150 +0.206 +0.139 +1.000 +0.309 +0.607 +0.557
: m_bb: +0.028 +0.177 +0.037 +0.309 +1.000 +0.625 +0.447
: m_wbb: +0.407 +0.569 +0.081 +0.607 +0.625 +1.000 +0.884
: m_wwbb: +0.415 +0.547 +0.085 +0.557 +0.447 +0.884 +1.000
: ----------------------------------------------------------------
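The Factory also stores these correlation matrices as TH2 histograms in the output file. As a minimal sketch (assuming the default TMVA output layout, where the histograms are named CorrelationMatrixS and CorrelationMatrixB inside the dataset directory), they can be inspected after the training has completed:

auto f = TFile::Open("Higgs_ClassificationOutput.root");
auto hS = f->Get<TH2>("dataset/CorrelationMatrixS"); // signal correlations
auto hB = f->Get<TH2>("dataset/CorrelationMatrixB"); // background correlations
if (hS) hS->Draw("colz text");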
DataSetFactory : [dataset] :
:
Factory : Booking method: DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "G" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|1|7" [The Layout of the input]
: BatchLayout: "1|128|7" [The Layout of the batch]
: Layout: "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
DNN_CPU : [dataset] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
: Will now use the CPU architecture with BLAS and IMT support !
Factory : Train all methods
Factory : [dataset] : Create Transformation "I" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0318 0.65629 [ 0.15106 16.132 ]
: m_jjj: 1.0217 0.37420 [ 0.34247 8.9401 ]
: m_lv: 1.0507 0.16678 [ 0.26679 3.6823 ]
: m_jlv: 1.0161 0.40288 [ 0.38441 6.5831 ]
: m_bb: 0.97707 0.53961 [ 0.080986 8.2551 ]
: m_wbb: 1.0358 0.36856 [ 0.38503 6.4013 ]
: m_wwbb: 0.96265 0.31608 [ 0.43228 4.5350 ]
: -----------------------------------------------------------
: Ranking input variables (method unspecific)...
IdTransformation : Ranking result (top variable is best ranked)
: -------------------------------
: Rank : Variable : Separation
: -------------------------------
: 1 : m_bb : 9.511e-02
: 2 : m_wbb : 4.268e-02
: 3 : m_wwbb : 4.178e-02
: 4 : m_jjj : 2.825e-02
: 5 : m_jlv : 1.999e-02
: 6 : m_jj : 3.834e-03
: 7 : m_lv : 3.699e-03
: -------------------------------
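The separation quoted above follows the usual TMVA definition, <S^2> = 1/2 * Int (yS(y) - yB(y))^2 / (yS(y) + yB(y)) dy, evaluated on the unit-normalised signal and background distributions of each input variable. A minimal sketch of this computation on binned distributions (the function name and the identical-binning assumption are ours):

#include "TH1.h"
double Separation(const TH1 &sig, const TH1 &bkg) {
   // assumes identical binning; normalise both histograms to unit integral
   double sep = 0., nS = sig.Integral(), nB = bkg.Integral();
   for (int i = 1; i <= sig.GetNbinsX(); ++i) {
      double s = sig.GetBinContent(i) / nS;
      double b = bkg.GetBinContent(i) / nB;
      if (s + b > 0) sep += 0.5 * (s - b) * (s - b) / (s + b);
   }
   return sep;
}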
Factory : Train method: Likelihood for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Likelihood ] :
:
: --- Short description:
:
: The maximum-likelihood classifier models the data with probability
: density functions (PDF) reproducing the signal and background
: distributions of the input variables. Correlations among the
: variables are ignored.
:
: --- Performance optimisation:
:
: Required for good performance are decorrelated input variables
: (PCA transformation via the option "VarTransform=Decorrelate"
: may be tried). Irreducible non-linear correlations may be reduced
: by precombining strongly correlated input variables, or by simply
: removing one of the variables.
:
: --- Performance tuning via configuration options:
:
: High fidelity PDF estimates are mandatory, i.e., sufficient training
: statistics is required to populate the tails of the distributions.
: It would be a surprise if the default Spline or KDE kernel parameters
: provide a satisfying fit to the data. The user is advised to properly
: tune the events per bin and smooth options in the spline cases
: individually per variable. If the KDE kernel is used, the adaptive
: Gaussian kernel may lead to artefacts, so please always also try
: the non-adaptive one.
:
: All tuning parameters must be adjusted individually for each input
: variable!
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
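Following the suggestion in this help message, a decorrelated Likelihood variant can be booked in addition; a sketch (the method name "LikelihoodD" is our choice, the other options mirror the Likelihood booking shown in the code below):

factory.BookMethod(loader, TMVA::Types::kLikelihood, "LikelihoodD",
   "!H:!V:TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=20:"
   "NSmoothBkg[0]=20:NSmooth=5:NAvEvtPerBin=50:VarTransform=Decorrelate");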
:
: Filling reference histograms
: Building PDF out of reference histograms
: Elapsed time for training with 14000 events: 0.11 sec
Likelihood : [dataset] : Evaluation of Likelihood on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.0211 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Likelihood.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_Likelihood/Likelihood
Factory : Training finished
:
Factory : Train method: Fisher for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Fisher ] :
:
: --- Short description:
:
: Fisher discriminants select events by distinguishing the mean
: values of the signal and background distributions in a trans-
: formed variable space where linear correlations are removed.
:
: (More precisely: the "linear discriminator" determines
: an axis in the (correlated) hyperspace of the input
: variables such that, when projecting the output classes
: (signal and background) upon this axis, they are pushed
: as far as possible away from each other, while events
: of a same class are confined in a close vicinity. The
: linearity property of this classifier is reflected in the
: metric with which "far apart" and "close vicinity" are
: determined: the covariance matrix of the discriminating
: variable space.)
:
: --- Performance optimisation:
:
: Optimal performance for Fisher discriminants is obtained for
: linearly correlated Gaussian-distributed variables. Any deviation
: from this ideal reduces the achievable separation power. In
: particular, no discrimination at all is achieved for a variable
: that has the same sample mean for signal and background, even if
: the shapes of the distributions are very different. Thus, Fisher
: discriminants often benefit from suitable transformations of the
: input variables. For example, if a variable x in [-1,1] has a
: parabolic signal distribution, and a uniform background
: distribution, their mean value is zero in both cases, leading
: to no separation. The simple transformation x -> |x| renders this
: variable powerful for the use in a Fisher discriminant.
:
: --- Performance tuning via configuration options:
:
: <None>
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
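The x -> |x| example from this help message can be realised at the DataLoader level, since AddVariable accepts expressions parsed by TTree::Draw; a hypothetical sketch (the variable name is illustrative only, not part of this tutorial's input list):

// feed |lepton_eta| instead of lepton_eta to the linear discriminant
loader->AddVariable("abs_lepton_eta := abs(lepton_eta)", 'F');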
:
Fisher : Results for Fisher coefficients:
: -----------------------
: Variable: Coefficient:
: -----------------------
: m_jj: -0.051
: m_jjj: +0.192
: m_lv: +0.045
: m_jlv: +0.059
: m_bb: -0.211
: m_wbb: +0.549
: m_wwbb: -0.778
: (offset): +0.136
: -----------------------
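The Fisher response is simply the linear combination y = offset + sum_i c_i * x_i of the (transformed) inputs, with the coefficients printed above; a sketch reproducing it by hand (assuming the seven inputs in the order of the table):

double FisherResponse(const double *x) {
   // coefficients and offset copied from the training printout above
   const double c[7] = {-0.051, +0.192, +0.045, +0.059, -0.211, +0.549, -0.778};
   double y = +0.136; // offset
   for (int i = 0; i < 7; ++i) y += c[i] * x[i];
   return y;
}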
: Elapsed time for training with 14000 events: 0.0109 sec
Fisher : [dataset] : Evaluation of Fisher on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.00361 sec
: <CreateMVAPdfs> Separation from histogram (PDF): 0.090 (0.000)
: Dataset[dataset] : Evaluation of Fisher on training sample
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Fisher.class.C
Factory : Training finished
:
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 7000 bkg: 7000
: #events: (unweighted) sig: 7000 bkg: 7000
: Training 200 Decision Trees ... patience please
: Elapsed time for training with 14000 events: 0.641 sec
BDT : [dataset] : Evaluation of BDT on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.109 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_BDT.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: DNN_CPU for Classification
:
: Preparing the Gaussian transformation...
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: Start of deep neural network training on CPU using MT, nthreads = 1
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 5 Input = ( 1, 1, 7 ) Batch size = 128 Loss function = C
Layer 0 DENSE Layer: ( Input = 7 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 4 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 128 , 1 ) Activation Function = Identity
: Using 11200 events for training and 2800 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.898212
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.648308 0.615409 0.588319 0.0469683 20570.8 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.596115 0.607234 0.5885 0.0469197 20562 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.582551 0.587928 0.587636 0.0469352 20595.5 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.574918 0.58734 0.588219 0.0469768 20574.9 0
: 5 | 0.569111 0.592709 0.588281 0.0468355 20567.2 1
: 6 | 0.566063 0.590565 0.588652 0.0468429 20553.4 2
: 7 | 0.561753 0.588462 0.589292 0.0469229 20532.1 3
: 8 | 0.561043 0.590466 0.591335 0.0469885 20457.6 4
: 9 Minimum Test error found - save the configuration
: 9 | 0.557767 0.584082 0.590118 0.0471498 20509.5 0
: 10 | 0.554705 0.587407 0.591344 0.0470164 20458.3 1
: 11 | 0.553061 0.585519 0.591356 0.047055 20459.3 2
: 12 | 0.549956 0.588495 0.590951 0.0470676 20475 3
: 13 Minimum Test error found - save the configuration
: 13 | 0.548546 0.58181 0.591274 0.0473053 20471.8 0
: 14 | 0.548366 0.592678 0.595044 0.047786 20348.7 1
: 15 | 0.545645 0.593081 0.597874 0.0490096 20289.2 2
: 16 | 0.546139 0.583952 0.599606 0.0479908 20188 3
: 17 | 0.543401 0.583931 0.598368 0.0474886 20214.9 4
: 18 | 0.540713 0.586488 0.597481 0.047404 20244.5 5
: 19 | 0.541934 0.58532 0.597572 0.0474971 20244.5 6
: 20 | 0.539343 0.592528 0.597906 0.0474594 20230.8 7
: 21 | 0.536809 0.588048 0.598255 0.0482922 20248.7 8
: 22 | 0.533684 0.584249 0.597612 0.0474416 20241 9
: 23 | 0.53343 0.594454 0.599516 0.0475513 20175.2 10
: 24 | 0.532347 0.588459 0.600384 0.0475858 20144.8 11
:
: Elapsed time for training with 14000 events: 14.4 sec
: Evaluate deep neural network on CPU using batches with size = 128
:
DNN_CPU : [dataset] : Evaluation of DNN_CPU on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.247 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
Likelihood : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Delta Separation
: -------------------------------------
: 1 : m_bb : 4.061e-02
: 2 : m_wbb : 3.765e-02
: 3 : m_wwbb : 3.119e-02
: 4 : m_jj : -1.589e-03
: 5 : m_jjj : -2.901e-03
: 6 : m_lv : -7.919e-03
: 7 : m_jlv : -8.293e-03
: -------------------------------------
Fisher : Ranking result (top variable is best ranked)
: ---------------------------------
: Rank : Variable : Discr. power
: ---------------------------------
: 1 : m_bb : 1.279e-02
: 2 : m_wwbb : 9.131e-03
: 3 : m_wbb : 2.668e-03
: 4 : m_jlv : 9.145e-04
: 5 : m_jjj : 1.769e-04
: 6 : m_lv : 6.617e-05
: 7 : m_jj : 6.707e-06
: ---------------------------------
BDT : Ranking result (top variable is best ranked)
: ----------------------------------------
: Rank : Variable : Variable Importance
: ----------------------------------------
: 1 : m_bb : 2.089e-01
: 2 : m_wwbb : 1.673e-01
: 3 : m_wbb : 1.568e-01
: 4 : m_jlv : 1.560e-01
: 5 : m_jjj : 1.421e-01
: 6 : m_jj : 1.052e-01
: 7 : m_lv : 6.369e-02
: ----------------------------------------
: No variable ranking supplied by classifier: DNN_CPU
TH1.Print Name = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 13.3657
TH1.Print Name = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 14.1606
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
Factory : Test all methods
Factory : Test method: Likelihood for Classification performance
:
Likelihood : [dataset] : Evaluation of Likelihood on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0113 sec
Factory : Test method: Fisher for Classification performance
:
Fisher : [dataset] : Evaluation of Fisher on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.00323 sec
: Dataset[dataset] : Evaluation of Fisher on testing sample
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0477 sec
Factory : Test method: DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016289 0.99263 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
DNN_CPU : [dataset] : Evaluation of DNN_CPU on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0992 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: Likelihood
:
Likelihood : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_Likelihood : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: Fisher
:
Fisher : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Also filling probability and rarity histograms (on request)...
TFHandler_Fisher : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_BDT : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: DNN_CPU
:
DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016289 0.99263 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset DNN_CPU : 0.760
: dataset BDT : 0.754
: dataset Likelihood : 0.699
: dataset Fisher : 0.642
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset DNN_CPU : 0.139 (0.133) 0.402 (0.439) 0.667 (0.699)
: dataset BDT : 0.098 (0.099) 0.393 (0.402) 0.657 (0.681)
: dataset Likelihood : 0.070 (0.075) 0.356 (0.363) 0.581 (0.597)
: dataset Fisher : 0.015 (0.015) 0.121 (0.131) 0.487 (0.506)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 6000 events
:
Dataset:dataset : Created tree 'TrainTree' with 14000 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
/***
## Declare Factory
Create the Factory class. Later you can choose the methods
whose performance you'd like to investigate.
The factory is the main TMVA object you have to interact with. Here is the list of parameters you need to pass:
- The first argument is the base of the name of all the output
weightfiles in the directory weight/ that will be created with the
method parameters
- The second argument is the output file for the training results
- The third argument is a string option defining some general configuration for the TMVA session. For example, all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in the option string.
**/
void TMVA_Higgs_Classification() {
// options to control used methods
bool useLikelihood = true; // likelihood based discriminant
bool useLikelihoodKDE = false; // likelihood based discriminant with KDE
bool useFischer = true; // Fisher discriminant
bool useMLP = false; // Multi-Layer Perceptron (old TMVA NN implementation)
bool useBDT = true; // Boosted Decision Tree
bool useDL = true; // TMVA Deep Learning (CPU or GPU)
auto outputFile = TFile::Open("Higgs_ClassificationOutput.root", "RECREATE");
TMVA::Factory factory("TMVA_Higgs_Classification", outputFile,
"!V:ROC:!Silent:Color:AnalysisType=Classification" );
/**
## Setup Dataset(s)
Define now input data file and signal and background trees
**/
TString inputFileName = "Higgs_data.root";
TString inputFileLink = "http://root.cern.ch/files/" + inputFileName;
TFile *inputFile = nullptr;
if (!gSystem->AccessPathName(inputFileName)) {
// file exists
inputFile = TFile::Open( inputFileName );
}
if (!inputFile) {
// download file from Cernbox location
Info("TMVA_Higgs_Classification","Download Higgs_data.root file");
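// Sketch: give the CACHEREAD open below a writable cache location first
// (TFile::SetCacheFileDir is standard ROOT; caching in the working directory is our choice)
TFile::SetCacheFileDir(".");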
inputFile = TFile::Open(inputFileLink, "CACHEREAD");
if (!inputFile) {
Error("TMVA_Higgs_Classification","Input file cannot be downloaded - exit");
return;
}
}
// --- Register the training and test trees
TTree *signalTree = (TTree*)inputFile->Get("sig_tree");
TTree *backgroundTree = (TTree*)inputFile->Get("bkg_tree");
signalTree->Print();
/***
## Declare DataLoader(s)
The next step is to declare the DataLoader class that deals with input variables
Define the input variables that shall be used for the MVA training
note that you may also use variable expressions, which can be parsed by TTree::Draw("expression")
***/
TMVA::DataLoader * loader = new TMVA::DataLoader("dataset");
loader->AddVariable("m_jj");
loader->AddVariable("m_jjj");
loader->AddVariable("m_lv");
loader->AddVariable("m_jlv");
loader->AddVariable("m_bb");
loader->AddVariable("m_wbb");
loader->AddVariable("m_wwbb");
/// We set now the input data trees in the TMVA DataLoader class
// global event weights per tree (see below for setting event-wise weights)
Double_t signalWeight = 1.0;
Double_t backgroundWeight = 1.0;
// You can add an arbitrary number of signal or background trees
loader->AddSignalTree ( signalTree, signalWeight );
loader->AddBackgroundTree( backgroundTree, backgroundWeight );
// Set individual event weights (the variables must exist in the original TTree)
// for signal : loader->SetSignalWeightExpression ("weight1*weight2");
// for background: loader->SetBackgroundWeightExpression("weight1*weight2");
//loader->SetBackgroundWeightExpression( "weight" );
// Apply additional cuts on the signal and background samples (can be different)
TCut mycuts = ""; // for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
TCut mycutb = ""; // for example: TCut mycutb = "abs(var1)<0.5";
// Tell the factory how to use the training and testing events
//
// If no numbers of events are given, half of the events in the tree are used
// for training, and the other half for testing:
// loader->PrepareTrainingAndTestTree( mycut, "SplitMode=random:!V" );
// To also specify the number of testing events, use:
loader->PrepareTrainingAndTestTree( mycuts, mycutb,
"nTrain_Signal=7000:nTrain_Background=7000:SplitMode=Random:NormMode=NumEvents:!V" );
/***
## Booking Methods
Here we book the TMVA methods. We book a Likelihood discriminant (optionally a variant based on KDE, kernel density estimation), a Fisher discriminant, a BDT,
and optionally a shallow neural network (MLP)
*/
// Likelihood ("naive Bayes estimator")
if (useLikelihood) {
factory.BookMethod(loader, TMVA::Types::kLikelihood, "Likelihood",
"H:!V:TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmoothBkg[1]=10:NSmooth=1:NAvEvtPerBin=50" );
}
// Use a kernel density estimator to approximate the PDFs
if (useLikelihoodKDE) {
factory.BookMethod(loader, TMVA::Types::kLikelihood, "LikelihoodKDE",
"!H:!V:!TransformOutput:PDFInterpol=KDE:KDEtype=Gauss:KDEiter=Adaptive:KDEFineFactor=0.3:KDEborder=None:NAvEvtPerBin=50" );
}
// Fisher discriminant (same as LD)
if (useFischer) {
factory.BookMethod(loader, TMVA::Types::kFisher, "Fisher", "H:!V:Fisher:VarTransform=None:CreateMVAPdfs:PDFInterpolMVAPdf=Spline2:NbinsMVAPdf=50:NsmoothMVAPdf=10" );
}
//Boosted Decision Trees
if (useBDT) {
factory.BookMethod(loader,TMVA::Types::kBDT, "BDT",
"!V:NTrees=200:MinNodeSize=2.5%:MaxDepth=2:BoostType=AdaBoost:AdaBoostBeta=0.5:UseBaggedBoost:BaggedSampleFraction=0.5:SeparationType=GiniIndex:nCuts=20" );
}
//Multi-Layer Perceptron (Neural Network)
if (useMLP) {
factory.BookMethod(loader, TMVA::Types::kMLP, "MLP",
"!H:!V:NeuronType=tanh:VarTransform=N:NCycles=100:HiddenLayers=N+5:TestRate=5:!UseRegulator" );
}
/// Here we book the new DNN of TMVA if we have support in ROOT. We will use the GPU version if ROOT is enabled with GPU support
/***
## Booking Deep Neural Network
Here we define the option string for building the Deep Neural network model.
#### 1. Define DNN layout
The DNN configuration is defined using a string. Note that whitespaces between characters are not allowed.
We define first the DNN layout:
- **input layout** : this defines the input data format for the DNN as ``input depth | height | width``.
In case of a dense layer as first layer the input layout should be ``1 | 1 | number of input variables`` (features)
- **batch layout** : this defines how the input batch is laid out. It is related to the input layout but not the same.
If the first layer is dense it should be ``1 | batch size | number of variables`` (features)
*(note the use of the character `|` as separator of input parameters for DNN layout)*
note that in case of only dense layers the input layout could be omitted, but it is required when defining more
complex architectures
- **layer layout** string defining the layer architecture. The syntax is
- layer type (e.g. DENSE, CONV, RNN)
- layer parameters (e.g. number of units)
- activation function (e.g. TANH, RELU, ...)
*(the different layers are separated by the ``","`` separator)*
#### 2. Define Training Strategy
We define here the training strategy parameters for the DNN. The parameters are separated by the ``","`` separator.
One can then concatenate different training strategies with different parameters. The training strategies are separated by
the ``"|"`` separator.
- Optimizer
- Learning rate
- Momentum (valid for SGD and RMSPROP)
- Regularization and Weight Decay
- Dropout
- Max number of epochs
- Convergence steps: if the test error does not decrease for this number of test evaluations, the training stops
- Batch size (this value must be the same as the one specified in the batch layout)
- Test Repetitions (the interval when the test error will be computed)
#### 3. Define general DNN options
We define the general DNN options by concatenating the previously defined layout and training strategy into the final string.
Note we use the ``":"`` separator to separate the different higher level options, as in the other TMVA methods.
In addition to input layout, batch layout and training strategy we add now:
- Type of Loss function (e.g. CROSSENTROPY)
- Weight Initialization (e.g. XAVIER, XAVIERUNIFORM, NORMAL)
- Variable Transformation
- Type of Architecture (e.g. CPU, GPU, Standard)
We can then book the DL method using the built option string
***/
if (useDL) {
bool useDLGPU = false;
#ifdef R__HAS_TMVAGPU
useDLGPU = true;
#endif
// Define DNN layout
TString inputLayoutString = "InputLayout=1|1|7";
TString batchLayoutString= "BatchLayout=1|128|7";
TString layoutString ("Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR");
// Define Training strategies
// one can concatenate several training strategies
TString training1("LearningRate=1e-3,Momentum=0.9,"
"ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,"
"MaxEpochs=30,WeightDecay=1e-4,Regularization=None,"
"Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7," // ADAM default parameters
"DropConfig=0.0+0.0+0.0+0.");
// TString training2("LearningRate=1e-3,Momentum=0.9"
// "ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,"
// "MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
// "Optimizer=SGD,DropConfig=0.0+0.0+0.0+0.");
TString trainingStrategyString ("TrainingStrategy=");
trainingStrategyString += training1; // + "|" + training2;
// General Options.
TString dnnOptions ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:"
"WeightInitialization=XAVIER");
dnnOptions.Append (":"); dnnOptions.Append (inputLayoutString);
dnnOptions.Append (":"); dnnOptions.Append (batchLayoutString);
dnnOptions.Append (":"); dnnOptions.Append (layoutString);
dnnOptions.Append (":"); dnnOptions.Append (trainingStrategyString);
TString dnnMethodName = "DNN_CPU";
if (useDLGPU) {
dnnOptions += ":Architecture=GPU";
dnnMethodName = "DNN_GPU";
} else {
dnnOptions += ":Architecture=CPU";
}
factory.BookMethod(loader, TMVA::Types::kDL, dnnMethodName, dnnOptions);
}
/**
## Train Methods
Here we train all the previously booked methods.
*/
factory.TrainAllMethods();
/**
## Test all methods
Now we test and evaluate all methods using the test data set
*/
factory.TestAllMethods();
factory.EvaluateAllMethods();
/// finally we retrieve the ROC curve and display it
auto c1 = factory.GetROCCurve(loader);
c1->Draw();
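/// As an optional cross-check (a sketch, not part of the original flow), the ROC
/// integral of a single method can also be queried directly from the factory:
std::cout << "BDT ROC integral: " << factory.GetROCIntegral(loader, "BDT") << std::endl;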
/// at the end we close the output file, which contains the evaluation results of all methods and can be used by TMVAGui
/// to display additional plots
outputFile->Close();
}
Author
Lorenzo Moneta

Definition in file TMVA_Higgs_Classification.C.