TMVAClassification.C File Reference

Detailed Description

This macro provides examples for the training and testing of the TMVA classifiers.

The input data is a toy Monte Carlo sample consisting of four Gaussian-distributed and linearly correlated input variables. The methods to be used can be switched on and off by means of booleans in the macro, or via the prompt command, for example:

root -l ./TMVAClassification.C\(\"Fisher,Likelihood\"\)

(note that the backslashes are mandatory). If no method is given, a default set of classifiers is used. The output file "TMVAC.root" can be analysed with dedicated macros (simply run: root -l <macro.C>), which can be conveniently invoked through a GUI that appears at the end of the run of this macro. Launch the GUI via the command:

root -l ./TMVAGui.C
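
In recent ROOT versions the same GUI can also be opened directly from the shell, assuming the default output file name (a convenience alternative to the macro call above):

root -l -e 'TMVA::TMVAGui("TMVAC.root")'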

You can also compile and run the example with the following commands:

make
./TMVAClassification <Methods>

where <Methods> = "method1 method2" is a space-separated list of TMVA classifier names, for example:

./TMVAClassification Fisher LikelihoodPCA BDT

If no method is given, a default set of classifiers is used.
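
Inside the macro, the method selection is steered by a map of booleans, and each enabled method is registered with a TMVA::Factory. A minimal sketch of that pattern (abridged; the option strings here are illustrative, the macro's own strings are longer):

#include <map>
#include <string>
#include "TFile.h"
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"

std::map<std::string, int> Use;
Use["Fisher"] = 1;   // switch methods on/off here, or via the prompt argument
Use["BDT"]    = 1;

TFile *outputFile = TFile::Open("TMVAC.root", "RECREATE");
TMVA::Factory factory("TMVAClassification", outputFile,
                      "!V:!Silent:Color:DrawProgressBar:AnalysisType=Classification");
TMVA::DataLoader loader("dataset");   // weight files are written under dataset/weights/

if (Use["Fisher"])
   factory.BookMethod(&loader, TMVA::Types::kFisher, "Fisher", "H:!V:Fisher");

The booking sketches shown further down this page reuse the factory and loader objects defined here.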

  • Project : TMVA - a ROOT-integrated toolkit for multivariate data analysis
  • Package : TMVA
  • Root Macro: TMVAClassification
==> Start TMVAClassification
--- TMVAClassification : Using input file: ./files/tmva_class_example.root
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree TreeS of type Signal with 6000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree TreeB of type Background with 6000 events
Factory : Booking method: Cuts
:
: Use optimization method: "Monte Carlo"
: Use efficiency computation method: "Event Selection"
: Use "FSmart" cuts for variable: 'myvar1'
: Use "FSmart" cuts for variable: 'myvar2'
: Use "FSmart" cuts for variable: 'var3'
: Use "FSmart" cuts for variable: 'var4'
Factory : Booking method: CutsD
:
CutsD : [dataset] : Create Transformation "Decorrelate" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
: Use optimization method: "Monte Carlo"
: Use efficiency computation method: "Event Selection"
: Use "FSmart" cuts for variable: 'myvar1'
: Use "FSmart" cuts for variable: 'myvar2'
: Use "FSmart" cuts for variable: 'var3'
: Use "FSmart" cuts for variable: 'var4'
Factory : Booking method: Likelihood
:
Factory : Booking method: LikelihoodPCA
:
LikelihoodPCA : [dataset] : Create Transformation "PCA" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : Booking method: PDERS
:
Factory : Booking method: PDEFoam
:
Factory : Booking method: KNN
:
Factory : Booking method: LD
:
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree TreeS
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree TreeB
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 1000
: Signal -- testing events : 5000
: Signal -- training and testing events: 6000
: Background -- training events : 1000
: Background -- testing events : 5000
: Background -- training and testing events: 6000
:
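: The 1000 training / 5000 testing split shown above is fixed when the input trees are handed to the dataloader. In the macro this happens in a call of roughly the following form (mycuts and mycutb are optional preselection cuts, left empty here):

TCut mycuts = "";   // optional signal preselection
TCut mycutb = "";   // optional background preselection
loader.PrepareTrainingAndTestTree(mycuts, mycutb,
    "nTrain_Signal=1000:nTrain_Background=1000:SplitMode=Random:NormMode=NumEvents:!V");

With 6000 events per class, the 5000 events not used for training are assigned to the test sample.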
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------
: myvar1 myvar2 var3 var4
: myvar1: +1.000 +0.038 +0.748 +0.922
: myvar2: +0.038 +1.000 -0.058 +0.128
: var3: +0.748 -0.058 +1.000 +0.831
: var4: +0.922 +0.128 +0.831 +1.000
: ----------------------------------------
DataSetInfo : Correlation matrix (Background):
: ----------------------------------------
: myvar1 myvar2 var3 var4
: myvar1: +1.000 -0.021 +0.783 +0.931
: myvar2: -0.021 +1.000 -0.162 +0.057
: var3: +0.783 -0.162 +1.000 +0.841
: var4: +0.931 +0.057 +0.841 +1.000
: ----------------------------------------
DataSetFactory : [dataset] :
:
Factory : Booking method: FDA_GA
:
: Create parameter interval for parameter 0 : [-1,1]
: Create parameter interval for parameter 1 : [-10,10]
: Create parameter interval for parameter 2 : [-10,10]
: Create parameter interval for parameter 3 : [-10,10]
: Create parameter interval for parameter 4 : [-10,10]
: User-defined formula string : "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
: TFormula-compatible formula string: "[0]+[1]*[5]+[2]*[6]+[3]*[7]+[4]*[8]"
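
The parameter intervals and formula echoed above are supplied through the booking string of the method; the pattern is as follows (the GA fit options shown are illustrative):

factory.BookMethod(&loader, TMVA::Types::kFDA, "FDA_GA",
    "H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:"
    "ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):"
    "FitMethod=GA:PopSize=100:Cycles=2:Steps=5:Trim=True:SaveBestGen=1");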
Factory : Booking method: MLPBNN
:
MLPBNN : [dataset] : Create Transformation "N" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
MLPBNN : Building Network.
: Initializing weights
Factory : Booking method: DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:WeightInitialization=XAVIERUNIFORM:Layout=TANH|128,TANH|128,TANH|128,LINEAR:TrainingStrategy=LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,DropConfig=0.0+0.5+0.5+0.5:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:WeightInitialization=XAVIERUNIFORM:Layout=TANH|128,TANH|128,TANH|128,LINEAR:TrainingStrategy=LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,DropConfig=0.0+0.5+0.5+0.5:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "N" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: Layout: "TANH|128,TANH|128,TANH|128,LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIERUNIFORM" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,DropConfig=0.0+0.5+0.5+0.5" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: InputLayout: "0|0|0" [The Layout of the input]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
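
The option string parsed above maps one-to-one onto the booking call for the deep network; reconstructed from the log, it reads:

factory.BookMethod(&loader, TMVA::Types::kDL, "DNN_CPU",
    "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:"
    "WeightInitialization=XAVIERUNIFORM:"
    "Layout=TANH|128,TANH|128,TANH|128,LINEAR:"
    "TrainingStrategy=LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,"
    "BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,"
    "DropConfig=0.0+0.5+0.5+0.5:Architecture=CPU");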
DNN_CPU : [dataset] : Create Transformation "N" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
: Will now use the CPU architecture with BLAS and IMT support !
Factory : Booking method: SVM
:
SVM : [dataset] : Create Transformation "Norm" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : Booking method: BDT
:
Factory : Booking method: RuleFit
:
Factory : Train all methods
Factory : [dataset] : Create Transformation "I" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "D" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "P" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "D" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'myvar1' <---> Output : variable 'myvar1'
: Input : variable 'myvar2' <---> Output : variable 'myvar2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.062775 1.7187 [ -9.3380 7.6931 ]
: myvar2: 0.056495 1.0784 [ -3.2551 4.0291 ]
: var3: -0.020366 1.0633 [ -5.2777 4.6430 ]
: var4: 0.13214 1.2464 [ -5.6007 4.6744 ]
: -----------------------------------------------------------
: Preparing the Decorrelation transformation...
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.17586 1.0000 [ -5.6401 4.8529 ]
: myvar2: 0.026952 1.0000 [ -2.9292 3.7065 ]
: var3: -0.11549 1.0000 [ -4.1792 3.5180 ]
: var4: 0.34819 1.0000 [ -3.3363 3.3963 ]
: -----------------------------------------------------------
: Preparing the Principal Component (PCA) transformation...
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.11433 2.2714 [ -11.272 9.0916 ]
: myvar2: -0.0070834 1.0934 [ -3.9875 3.3836 ]
: var3: 0.011107 0.57824 [ -2.0171 2.1958 ]
: var4: -0.0094450 0.33437 [ -1.0176 1.0617 ]
: -----------------------------------------------------------
: Preparing the Gaussian transformation...
: Preparing the Decorrelation transformation...
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.049483 1.0000 [ -3.0916 8.1528 ]
: myvar2: -0.0017889 1.0000 [ -4.5911 5.6465 ]
: var3: -0.0056513 1.0000 [ -3.1504 4.5978 ]
: var4: 0.070934 1.0000 [ -3.4539 5.9256 ]
: -----------------------------------------------------------
: Ranking input variables (method unspecific)...
IdTransformation : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Separation
: -------------------------------------
: 1 : Variable 4 : 2.843e-01
: 2 : Variable 3 : 1.756e-01
: 3 : myvar1 : 1.018e-01
: 4 : Expression 2 : 3.860e-02
: -------------------------------------
Factory : Train method: Cuts for Classification
:
FitterBase : <MCFitter> Sampling, please be patient ...
: Elapsed time: 3.47 sec
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.1
: Corresponding background efficiency : 0.00621902
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -1.19223 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 2.126
: Cut[ 2]: -2.90978 < var3 <= 1e+30
: Cut[ 3]: 2.16207 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.2
: Corresponding background efficiency : 0.0171253
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -5.85714 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 2.21109
: Cut[ 2]: -0.759439 < var3 <= 1e+30
: Cut[ 3]: 1.66846 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.3
: Corresponding background efficiency : 0.0401486
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -6.09813 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 2.81831
: Cut[ 2]: -2.09336 < var3 <= 1e+30
: Cut[ 3]: 1.34308 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.4
: Corresponding background efficiency : 0.062887
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -4.55141 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 2.94573
: Cut[ 2]: -4.68697 < var3 <= 1e+30
: Cut[ 3]: 1.07157 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.5
: Corresponding background efficiency : 0.104486
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -5.86032 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 2.89615
: Cut[ 2]: -0.966191 < var3 <= 1e+30
: Cut[ 3]: 0.773848 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.6
: Corresponding background efficiency : 0.172806
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -5.52552 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 4.08498
: Cut[ 2]: -2.61706 < var3 <= 1e+30
: Cut[ 3]: 0.469684 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.7
: Corresponding background efficiency : 0.258379
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -5.69875 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 1.73784
: Cut[ 2]: -1.21467 < var3 <= 1e+30
: Cut[ 3]: 0.109026 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.8
: Corresponding background efficiency : 0.362964
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -1.99372 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 3.93767
: Cut[ 2]: -1.56317 < var3 <= 1e+30
: Cut[ 3]: -0.124013 < var4 <= 1e+30
: ------------------------------------------
: ------------------------------------------
Cuts : Cut values for requested signal efficiency: 0.9
: Corresponding background efficiency : 0.503885
: Transformation applied to input variables : None
: ------------------------------------------
: Cut[ 0]: -3.97304 < myvar1 <= 1e+30
: Cut[ 1]: -1e+30 < myvar2 <= 3.31284
: Cut[ 2]: -2.82879 < var3 <= 1e+30
: Cut[ 3]: -0.577302 < var4 <= 1e+30
: ------------------------------------------
: Elapsed time for training with 2000 events: 3.47 sec
Cuts : [dataset] : Evaluation of Cuts on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00186 sec
: Creating xml weight file: dataset/weights/TMVAClassification_Cuts.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_Cuts.class.C
: TMVAC.root:/dataset/Method_Cuts/Cuts
Factory : Training finished
:
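: A weight file like the one just written is all that is needed to apply the trained selection to new data. A minimal application sketch with TMVA::Reader (the float buffers are hypothetical; the variable expressions follow this example's myvar1 := var1+var2, myvar2 := var1-var2 definitions):

#include "TMVA/Reader.h"

float myvar1, myvar2, var3, var4;   // per-event buffers, filled from your tree
TMVA::Reader reader("!Color:!Silent");
reader.AddVariable("myvar1 := var1+var2", &myvar1);
reader.AddVariable("myvar2 := var1-var2", &myvar2);
reader.AddVariable("var3", &var3);
reader.AddVariable("var4", &var4);
reader.BookMVA("Cuts", "dataset/weights/TMVAClassification_Cuts.weights.xml");
// for the Cuts method the second argument is the requested signal efficiency:
bool passed = reader.EvaluateMVA("Cuts", 0.7) > 0.5;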
Factory : Train method: CutsD for Classification
:
: Preparing the Decorrelation transformation...
TFHandler_CutsD : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.17586 1.0000 [ -5.6401 4.8529 ]
: myvar2: 0.026952 1.0000 [ -2.9292 3.7065 ]
: var3: -0.11549 1.0000 [ -4.1792 3.5180 ]
: var4: 0.34819 1.0000 [ -3.3363 3.3963 ]
: -----------------------------------------------------------
TFHandler_CutsD : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.17586 1.0000 [ -5.6401 4.8529 ]
: myvar2: 0.026952 1.0000 [ -2.9292 3.7065 ]
: var3: -0.11549 1.0000 [ -4.1792 3.5180 ]
: var4: 0.34819 1.0000 [ -3.3363 3.3963 ]
: -----------------------------------------------------------
FitterBase : <MCFitter> Sampling, please be patient ...
: Elapsed time: 2.75 sec
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.1
: Corresponding background efficiency : 0
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 0.513038
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= -0.733858
: Cut[ 2]: -0.87113 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 0.687739 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.2
: Corresponding background efficiency : 0.000493656
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 1.60056
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 1.26936
: Cut[ 2]: -1.50073 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 1.54845 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.3
: Corresponding background efficiency : 0.00334252
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 2.16898
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 3.25932
: Cut[ 2]: -2.08503 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 1.43959 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.4
: Corresponding background efficiency : 0.00821453
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 1.9086
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 1.94778
: Cut[ 2]: -2.11471 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 1.1885 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.5
: Corresponding background efficiency : 0.0209024
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 3.97301
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 2.87835
: Cut[ 2]: -1.68889 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 0.969507 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.6
: Corresponding background efficiency : 0.055037
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 2.57624
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 2.20263
: Cut[ 2]: -3.86902 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 0.802122 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.7
: Corresponding background efficiency : 0.0975699
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 3.65719
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 3.19411
: Cut[ 2]: -2.87372 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 0.583961 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.8
: Corresponding background efficiency : 0.170999
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 4.74857
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 2.75269
: Cut[ 2]: -3.22043 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 0.327788 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: ------------------------------------------------------------------------------------------------------------------------
CutsD : Cut values for requested signal efficiency: 0.9
: Corresponding background efficiency : 0.326977
: Transformation applied to input variables : "Deco"
: ------------------------------------------------------------------------------------------------------------------------
: Cut[ 0]: -1e+30 < + 1.1476*[myvar1] + 0.027923*[myvar2] - 0.19981*[var3] - 0.82843*[var4] <= 3.56614
: Cut[ 1]: -1e+30 < + 0.027923*[myvar1] + 0.95469*[myvar2] + 0.18581*[var3] - 0.1623*[var4] <= 3.09071
: Cut[ 2]: -3.9944 < - 0.19981*[myvar1] + 0.18581*[myvar2] + 1.7913*[var3] - 0.77231*[var4] <= 1e+30
: Cut[ 3]: 0.0311777 < - 0.82843*[myvar1] - 0.1623*[myvar2] - 0.77231*[var3] + 2.1918*[var4] <= 1e+30
: ------------------------------------------------------------------------------------------------------------------------
: Elapsed time for training with 2000 events: 2.75 sec
CutsD : [dataset] : Evaluation of CutsD on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00312 sec
: Creating xml weight file: dataset/weights/TMVAClassification_CutsD.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_CutsD.class.C
: TMVAC.root:/dataset/Method_Cuts/CutsD
Factory : Training finished
:
Factory : Train method: Likelihood for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Likelihood ] :
:
: --- Short description:
:
: The maximum-likelihood classifier models the data with probability
: density functions (PDF) reproducing the signal and background
: distributions of the input variables. Correlations among the
: variables are ignored.
:
: --- Performance optimisation:
:
: Required for good performance are decorrelated input variables
: (PCA transformation via the option "VarTransform=Decorrelate"
: may be tried). Irreducible non-linear correlations may be reduced
: by precombining strongly correlated input variables, or by simply
: removing one of the variables.
:
: --- Performance tuning via configuration options:
:
: High fidelity PDF estimates are mandatory, i.e., sufficient training
: statistics is required to populate the tails of the distributions
: It would be a surprise if the default Spline or KDE kernel parameters
: provide a satisfying fit to the data. The user is advised to properly
: tune the events per bin and smooth options in the spline cases
: individually per variable. If the KDE kernel is used, the adaptive
: Gaussian kernel may lead to artefacts, so please always also try
: the non-adaptive one.
:
: All tuning parameters must be adjusted individually for each input
: variable!
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
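: As the help text suggests, a decorrelated likelihood variant can be booked simply by adding VarTransform=Decorrelate to the option string, e.g. (abridged from the macro's LikelihoodD booking):

factory.BookMethod(&loader, TMVA::Types::kLikelihood, "LikelihoodD",
    "!H:!V:TransformOutput:PDFInterpol=Spline2:NSmooth=5:"
    "NAvEvtPerBin=50:VarTransform=Decorrelate");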
: Filling reference histograms
: Building PDF out of reference histograms
: Elapsed time for training with 2000 events: 0.0142 sec
Likelihood : [dataset] : Evaluation of Likelihood on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00347 sec
: Creating xml weight file: dataset/weights/TMVAClassification_Likelihood.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_Likelihood.class.C
: TMVAC.root:/dataset/Method_Likelihood/Likelihood
Factory : Training finished
:
Factory : Train method: LikelihoodPCA for Classification
:
: Preparing the Principal Component (PCA) transformation...
TFHandler_LikelihoodPCA : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.11433 2.2714 [ -11.272 9.0916 ]
: myvar2: -0.0070834 1.0934 [ -3.9875 3.3836 ]
: var3: 0.011107 0.57824 [ -2.0171 2.1958 ]
: var4: -0.0094450 0.33437 [ -1.0176 1.0617 ]
: -----------------------------------------------------------
: Filling reference histograms
: Building PDF out of reference histograms
: Elapsed time for training with 2000 events: 0.0168 sec
LikelihoodPCA : [dataset] : Evaluation of LikelihoodPCA on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00532 sec
: Creating xml weight file: dataset/weights/TMVAClassification_LikelihoodPCA.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_LikelihoodPCA.class.C
: TMVAC.root:/dataset/Method_Likelihood/LikelihoodPCA
Factory : Training finished
:
Factory : Train method: PDERS for Classification
:
: Elapsed time for training with 2000 events: 0.00325 sec
PDERS : [dataset] : Evaluation of PDERS on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.257 sec
: Creating xml weight file: dataset/weights/TMVAClassification_PDERS.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_PDERS.class.C
Factory : Training finished
:
Factory : Train method: PDEFoam for Classification
:
PDEFoam : NormMode=NUMEVENTS chosen. Note that only NormMode=EqualNumEvents ensures that Discriminant values correspond to signal probabilities.
: Build up discriminator foam
: Elapsed time: 0.333 sec
: Elapsed time for training with 2000 events: 0.362 sec
PDEFoam : [dataset] : Evaluation of PDEFoam on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0163 sec
: Creating xml weight file: dataset/weights/TMVAClassification_PDEFoam.weights.xml
: writing foam DiscrFoam to file
: Foams written to file: dataset/weights/TMVAClassification_PDEFoam.weights_foams.root
: Creating standalone class: dataset/weights/TMVAClassification_PDEFoam.class.C
Factory : Training finished
:
Factory : Train method: KNN for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ KNN ] :
:
: --- Short description:
:
: The k-nearest neighbor (k-NN) algorithm is a multi-dimensional classification
: and regression algorithm. Similarly to other TMVA algorithms, k-NN uses a set of
: training events for which a classification category/regression target is known.
: The k-NN method compares a test event to all training events using a distance
: function, which is an Euclidean distance in a space defined by the input variables.
: The k-NN method, as implemented in TMVA, uses a kd-tree algorithm to perform a
: quick search for the k events with shortest distance to the test event. The method
: returns a fraction of signal events among the k neighbors. It is recommended
: that a histogram which stores the k-NN decision variable is binned with k+1 bins
: between 0 and 1.
:
: --- Performance tuning via configuration options:
:
: The k-NN method estimates a density of signal and background events in a
: neighborhood around the test event. The method assumes that the density of the
: signal and background events is uniform and constant within the neighborhood.
: k is an adjustable parameter and it determines an average size of the
: neighborhood. Small k values (less than 10) are sensitive to statistical
: fluctuations and large (greater than 100) values might not sufficiently capture
: local differences between events in the training set. The speed of the k-NN
: method also increases with larger values of k.
:
: The k-NN method assigns equal weight to all input variables. Different scales
: among the input variables is compensated using ScaleFrac parameter: the input
: variables are scaled so that the widths for central ScaleFrac*100% events are
: equal among all the input variables.
:
: --- Additional configuration options:
:
: The method includes an option to use a Gaussian kernel to smooth out the k-NN
: response. The kernel re-weights events using a distance to the test event.
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
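: The ScaleFrac mechanism described above is set in the booking string; in the macro the KNN booking reads roughly as follows (ScaleFrac=0.8 matches the 80% scale-factor line in the training output below):

factory.BookMethod(&loader, TMVA::Types::kKNN, "KNN",
    "H:nkNN=20:ScaleFrac=0.8:SigmaFact=1.0:Kernel=Gaus:UseKernel=F:UseWeight=T:!Trim");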
KNN : <Train> start...
: Reading 2000 events
: Number of signal events 1000
: Number of background events 1000
: Creating kd-tree with 2000 events
: Computing scale factor for 1d distributions: (ifrac, bottom, top) = (80%, 10%, 90%)
ModulekNN : Optimizing tree for 4 variables with 2000 values
: <Fill> Class 1 has 1000 events
: <Fill> Class 2 has 1000 events
: Elapsed time for training with 2000 events: 0.00282 sec
KNN : [dataset] : Evaluation of KNN on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0411 sec
: Creating xml weight file: dataset/weights/TMVAClassification_KNN.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_KNN.class.C
Factory : Training finished
:
Factory : Train method: LD for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ LD ] :
:
: --- Short description:
:
: Linear discriminants select events by distinguishing the mean
: values of the signal and background distributions in a trans-
: formed variable space where linear correlations are removed.
: The LD implementation here is equivalent to the "Fisher" discriminant
: for classification, but also provides linear regression.
:
: (More precisely: the "linear discriminator" determines
: an axis in the (correlated) hyperspace of the input
: variables such that, when projecting the output classes
: (signal and background) upon this axis, they are pushed
: as far as possible away from each other, while events
: of a same class are confined in a close vicinity. The
: linearity property of this classifier is reflected in the
: metric with which "far apart" and "close vicinity" are
: determined: the covariance matrix of the discriminating
: variable space.)
:
: --- Performance optimisation:
:
: Optimal performance for the linear discriminant is obtained for
: linearly correlated Gaussian-distributed variables. Any deviation
: from this ideal reduces the achievable separation power. In
: particular, no discrimination at all is achieved for a variable
: that has the same sample mean for signal and background, even if
: the shapes of the distributions are very different. Thus, the linear
: discriminant often benefits from a suitable transformation of the
: input variables. For example, if a variable x in [-1,1] has
: a parabolic signal distribution, and a uniform background
: distribution, their mean value is zero in both cases, leading
: to no separation. The simple transformation x -> |x| renders this
: variable powerful for the use in a linear discriminant.
:
: --- Performance tuning via configuration options:
:
: <None>
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
LD : Results for LD coefficients:
: -----------------------
: Variable: Coefficient:
: -----------------------
: myvar1: -0.309
: myvar2: -0.102
: var3: -0.142
: var4: +0.705
: (offset): -0.055
: -----------------------
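: These coefficients define the LD response directly: the offset plus the weighted sum of the inputs. For any event it can be evaluated by hand:

// LD response reconstructed from the coefficient table above
double LD(double myvar1, double myvar2, double var3, double var4)
{
   return -0.055 - 0.309*myvar1 - 0.102*myvar2 - 0.142*var3 + 0.705*var4;
}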
: Elapsed time for training with 2000 events: 0.00109 sec
LD : [dataset] : Evaluation of LD on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00192 sec
: <CreateMVAPdfs> Separation from histogram (PDF): 0.540 (0.000)
: Dataset[dataset] : Evaluation of LD on training sample
: Creating xml weight file: dataset/weights/TMVAClassification_LD.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_LD.class.C
Factory : Training finished
:
Factory : Train method: FDA_GA for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ FDA_GA ] :
:
: --- Short description:
:
: The function discriminant analysis (FDA) is a classifier suitable
: to solve linear or simple nonlinear discrimination problems.
:
: The user provides the desired function with adjustable parameters
: via the configuration option string, and FDA fits the parameters to
: it, requiring the signal (background) function value to be as close
: as possible to 1 (0). Its advantage over the more involved and
: automatic nonlinear discriminators is the simplicity and transparency
: of the discrimination expression. A shortcoming is that FDA will
: underperform for involved problems with complicated, phase space
: dependent nonlinear correlations.
:
: Please consult the Users Guide for the format of the formula string
: and the allowed parameter ranges:
: documentation/tmva/UsersGuide/TMVAUsersGuide.pdf
:
: --- Performance optimisation:
:
: The FDA performance depends on the complexity and fidelity of the
: user-defined discriminator function. As a general rule, it should
: be able to reproduce the discrimination power of any linear
: discriminant analysis. To reach into the nonlinear domain, it is
: useful to inspect the correlation profiles of the input variables,
: and add quadratic and higher polynomial terms between variables as
: necessary. Comparison with more involved nonlinear classifiers can
: be used as a guide.
:
: --- Performance tuning via configuration options:
:
: Depending on the function used, the choice of "FitMethod" is
: crucial for getting valuable solutions with FDA. As a guideline it
: is recommended to start with "FitMethod=MINUIT". When more complex
: functions are used where MINUIT does not converge to reasonable
: results, the user should switch to non-gradient FitMethods such
: as GeneticAlgorithm (GA) or Monte Carlo (MC). It might prove to be
: useful to combine GA (or MC) with MINUIT by setting the option
: "Converger=MINUIT". GA (MC) will then set the starting parameters
: for MINUIT such that the basic quality of GA (MC) of finding global
: minima is combined with the efficacy of MINUIT of finding local
: minima.
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
FitterBase : <GeneticFitter> Optimisation, please be patient ... (inaccurate progress timing for GA)
: Elapsed time: 0.906 sec
FDA_GA : Results for parameter fit using "GA" fitter:
: -----------------------
: Parameter: Fit result:
: -----------------------
: Par(0): 0.403652
: Par(1): -0.321045
: Par(2): -0.133219
: Par(3): 0
: Par(4): 0.60348
: -----------------------
: Discriminator expression: "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
: Value of estimator at minimum: 0.271076
: Elapsed time for training with 2000 events: 0.941 sec
FDA_GA : [dataset] : Evaluation of FDA_GA on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00212 sec
: Creating xml weight file: dataset/weights/TMVAClassification_FDA_GA.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_FDA_GA.class.C
Factory : Training finished
:
Factory : Train method: MLPBNN for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ MLPBNN ] :
:
: --- Short description:
:
: The MLP artificial neural network (ANN) is a traditional feed-
: forward multilayer perceptron implementation. The MLP has a user-
: defined hidden layer architecture, while the number of input (output)
: nodes is determined by the input variables (output classes, i.e.,
: signal and one background).
:
: --- Performance optimisation:
:
: Neural networks are stable and performing for a large variety of
: linear and non-linear classification problems. However, in contrast
: to (e.g.) boosted decision trees, the user is advised to reduce the
: number of input variables that have only little discrimination power.
:
: In the tests we have carried out so far, the MLP and ROOT networks
: (TMlpANN, interfaced via TMVA) performed equally well, with however
: a clear speed advantage for the MLP. The Clermont-Ferrand neural
: net (CFMlpANN) exhibited worse classification performance in these
: tests, which is partly due to the slow convergence of its training
: (at least 10k training cycles are required to achieve approximately
: competitive results).
:
: Overtraining: only the TMlpANN performs an explicit separation of the
: full training sample into independent training and validation samples.
: We have found that in most high-energy physics applications the
: available degrees of freedom (training events) are sufficient to
: constrain the weights of the relatively simple architectures required
: to achieve good performance. Hence no overtraining should occur, and
: the use of validation samples would only reduce the available training
: information. However, if the performance on the training sample is
: found to be significantly better than the one found with the inde-
: pendent test sample, caution is needed. The results for these samples
: are printed to standard output at the end of each training job.
:
: --- Performance tuning via configuration options:
:
: The hidden layer architecture for all ANNs is defined by the option
: "HiddenLayers=N+1,N,...", where here the first hidden layer has N+1
: neurons and the second N neurons (and so on), and where N is the number
: of input variables. Excessive numbers of hidden layers should be avoided,
: in favour of more neurons in the first hidden layer.
:
: The number of cycles should be above 500. As said, if the number of
: adjustable weights is small compared to the training sample size,
: using a large number of training samples should not lead to overtraining.
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
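: For reference, the HiddenLayers syntax described above enters through the booking string. The macro's MLPBNN booking is along these lines (UseRegulator enables the Bayesian regulator terms that appear in the training output below):

factory.BookMethod(&loader, TMVA::Types::kMLP, "MLPBNN",
    "!H:!V:NeuronType=tanh:VarTransform=N:NCycles=60:HiddenLayers=N+5:"
    "TestRate=5:TrainingMethod=BFGS:UseRegulator");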
TFHandler_MLPBNN : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.089214 0.20183 [ -1.0000 1.0000 ]
: myvar2: -0.090751 0.29609 [ -1.0000 1.0000 ]
: var3: 0.059878 0.21436 [ -1.0000 1.0000 ]
: var4: 0.11587 0.24261 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
: Training Network
:
: Finalizing handling of Regulator terms, trainE=0.713219 testE=0.724617
: Done with handling of Regulator terms
: Elapsed time for training with 2000 events: 2.55 sec
MLPBNN : [dataset] : Evaluation of MLPBNN on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0054 sec
: Creating xml weight file: dataset/weights/TMVAClassification_MLPBNN.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_MLPBNN.class.C
: Write special histos to file: TMVAC.root:/dataset/Method_MLP/MLPBNN
Factory : Training finished
:
Factory : Train method: DNN_CPU for Classification
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.089214 0.20183 [ -1.0000 1.0000 ]
: myvar2: -0.090751 0.29609 [ -1.0000 1.0000 ]
: var3: 0.059878 0.21436 [ -1.0000 1.0000 ]
: var4: 0.11587 0.24261 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
: Start of deep neural network training on CPU using MT, nthreads = 1
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.089214 0.20183 [ -1.0000 1.0000 ]
: myvar2: -0.090751 0.29609 [ -1.0000 1.0000 ]
: var3: 0.059878 0.21436 [ -1.0000 1.0000 ]
: var4: 0.11587 0.24261 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 4 Input = ( 1, 1, 4 ) Batch size = 100 Loss function = C
Layer 0 DENSE Layer: ( Input = 4 , Width = 128 ) Output = ( 1 , 100 , 128 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 128 , Width = 128 ) Output = ( 1 , 100 , 128 ) Activation Function = Tanh Dropout prob. = 0.5
Layer 2 DENSE Layer: ( Input = 128 , Width = 128 ) Output = ( 1 , 100 , 128 ) Activation Function = Tanh Dropout prob. = 0.5
Layer 3 DENSE Layer: ( Input = 128 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity Dropout prob. = 0.5
: Using 1600 events for training and 400 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.01 regularization 0 minimum error = 0.694429
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.507814 0.439761 0.191121 0.0147082 9069.63 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.431238 0.428851 0.191527 0.0146252 9044.57 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.428235 0.374323 0.19205 0.0145966 9016.45 0
: 4 | 0.409327 0.397365 0.191735 0.0142196 9013.29 1
: 5 | 0.390523 0.387784 0.192032 0.0142372 8999.15 2
: 6 | 0.383023 0.403835 0.192724 0.0142789 8966.35 3
: 7 | 0.401774 0.390823 0.192696 0.0141965 8963.62 4
: 8 | 0.408494 0.387882 0.192356 0.014245 8983.14 5
: 9 | 0.39869 0.397474 0.192443 0.0142658 8979.83 6
: 10 | 0.399138 0.396164 0.192754 0.014264 8964.08 7
: 11 | 0.40333 0.377691 0.192891 0.0142706 8957.55 8
: 12 | 0.402693 0.433262 0.192968 0.0143046 8955.38 9
: 13 | 0.428961 0.399735 0.19307 0.0142835 8949.23 10
: 14 | 0.407607 0.383499 0.193268 0.0143192 8941.13 11
: 15 | 0.393824 0.380839 0.193683 0.0143459 8921.72 12
: 16 | 0.392961 0.393642 0.193547 0.0143569 8929.08 13
: 17 | 0.387666 0.377482 0.193517 0.0143644 8930.92 14
: 18 Minimum Test error found - save the configuration
: 18 | 0.39795 0.372202 0.193867 0.0148563 8938 0
: 19 | 0.40774 0.382507 0.193278 0.0143694 8943.11 1
: 20 | 0.404206 0.393198 0.193088 0.0143687 8952.6 2
: 21 | 0.412764 0.39693 0.193327 0.0143809 8941.22 3
: 22 | 0.395123 0.389321 0.193654 0.0143764 8924.69 4
: 23 | 0.379121 0.39462 0.193305 0.0143782 8942.22 5
: 24 | 0.388797 0.409096 0.193942 0.0143374 8908.46 6
: 25 | 0.403088 0.396268 0.19332 0.0143686 8940.98 7
: 26 | 0.39255 0.374311 0.193315 0.0143789 8941.74 8
: 27 | 0.38364 0.374422 0.193238 0.0143576 8944.55 9
: 28 | 0.397417 0.391051 0.193478 0.0143558 8932.46 10
: 29 | 0.397022 0.388799 0.193582 0.0143939 8929.16 11
: 30 | 0.385597 0.38193 0.193514 0.0143853 8932.1 12
: 31 | 0.394113 0.387556 0.19357 0.0143783 8929.01 13
: 32 | 0.393905 0.37835 0.193551 0.0143873 8930.39 14
: 33 | 0.40246 0.39334 0.1939 0.014399 8913.62 15
: 34 | 0.385783 0.389971 0.193329 0.0143821 8941.2 16
: 35 | 0.383975 0.378362 0.19353 0.0143771 8930.93 17
: 36 | 0.390537 0.394416 0.19317 0.0143752 8948.8 18
: 37 | 0.389648 0.422574 0.193379 0.0143286 8936.04 19
: 38 | 0.390505 0.381448 0.193388 0.014394 8938.86 20
: 39 | 0.379669 0.422493 0.193722 0.0143954 8922.25 21
:
: Elapsed time for training with 2000 events: 7.56 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
DNN_CPU : [dataset] : Evaluation of DNN_CPU on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0718 sec
: Creating xml weight file: dataset/weights/TMVAClassification_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_DNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: SVM for Classification
:
TFHandler_SVM : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.089214 0.20183 [ -1.0000 1.0000 ]
: myvar2: -0.090751 0.29609 [ -1.0000 1.0000 ]
: var3: 0.059878 0.21436 [ -1.0000 1.0000 ]
: var4: 0.11587 0.24261 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
: Building SVM Working Set...with 2000 event instances
: Elapsed time for Working Set build: 0.0718 sec
: Sorry, no computing time forecast available for SVM, please wait ...
: Elapsed time: 0.356 sec
: Elapsed time for training with 2000 events: 0.432 sec
SVM : [dataset] : Evaluation of SVM on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0686 sec
: Creating xml weight file: dataset/weights/TMVAClassification_SVM.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_SVM.class.C
Factory : Training finished
:
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 1000 bkg: 1000
: #events: (unweighted) sig: 1000 bkg: 1000
: Training 850 Decision Trees ... patience please
: Elapsed time for training with 2000 events: 0.595 sec
BDT : [dataset] : Evaluation of BDT on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.166 sec
: Creating xml weight file: dataset/weights/TMVAClassification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_BDT.class.C
: TMVAC.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
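: The 850 trees reported above are fixed at booking time via the NTrees option; the macro's BDT booking is approximately:

factory.BookMethod(&loader, TMVA::Types::kBDT, "BDT",
    "!H:!V:NTrees=850:MinNodeSize=2.5%:MaxDepth=3:BoostType=AdaBoost:AdaBoostBeta=0.5:"
    "UseBaggedBoost:BaggedSampleFraction=0.5:SeparationType=GiniIndex:nCuts=20");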
Factory : Train method: RuleFit for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ RuleFit ] :
:
: --- Short description:
:
: This method uses a collection of so called rules to create a
: discriminating scoring function. Each rule consists of a series
: of cuts in parameter space. The ensemble of rules are created
: from a forest of decision trees, trained using the training data.
: Each node (apart from the root) corresponds to one rule.
: The scoring function is then obtained by linearly combining
: the rules. A fitting procedure is applied to find the optimum
: set of coefficients. The goal is to find a model with few rules
: but with a strong discriminating power.
:
: --- Performance optimisation:
:
: There are two important considerations to make when optimising:
:
: 1. Topology of the decision tree forest
: 2. Fitting of the coefficients
:
: The maximum complexity of the rules is defined by the size of
: the trees. Large trees will yield many complex rules and capture
: higher order correlations. On the other hand, small trees will
: lead to a smaller ensemble with simple rules, only capable of
: modeling simple structures.
: Several parameters exists for controlling the complexity of the
: rule ensemble.
:
: The fitting procedure searches for a minimum using a gradient
: directed path. Apart from step size and number of steps, the
: evolution of the path is defined by a cut-off parameter, tau.
: This parameter is unknown and depends on the training data.
: A large value will tend to give large weights to a few rules.
: Similarly, a small value will lead to a large set of rules
: with similar weights.
:
: A final point is the model used; rules and/or linear terms.
: For a given training sample, the result may improve by adding
: linear terms. If best performance is obtained using only linear
: terms, it is very likely that the Fisher discriminant would be
: a better choice. Ideally the fitting procedure should be able to
: make this choice by giving appropriate weights for either terms.
:
: --- Performance tuning via configuration options:
:
: I. TUNING OF RULE ENSEMBLE:
:
: ForestType : Recommended is to use the default "AdaBoost".
: nTrees : More trees leads to more rules but also slow
: performance. With too few trees the risk is
: that the rule ensemble becomes too simple.
: fEventsMin
: fEventsMax : With a lower min, more large trees will be generated
: leading to more complex rules.
: With a higher max, more small trees will be
: generated leading to more simple rules.
: By changing this range, the average complexity
: of the rule ensemble can be controlled.
: RuleMinDist : By increasing the minimum distance between
: rules, fewer and more diverse rules will remain.
: Initially it is a good idea to keep this small
: or zero and let the fitting do the selection of
: rules. In order to reduce the ensemble size,
: the value can then be increased.
:
: II. TUNING OF THE FITTING:
:
: GDPathEveFrac : fraction of events in path evaluation
: Increasing this fraction will improve the path
: finding. However, a too high value will give few
: unique events available for error estimation.
: It is recommended to use the default = 0.5.
: GDTau : cutoff parameter tau
: By default this value is set to -1.0.
: This means that the cut off parameter is
: automatically estimated. In most cases
: this should be fine. However, you may want
: to fix this value if you already know it
: and want to reduce on training time.
: GDTauPrec : precision of estimated tau
: Increase this precision to find a more
: optimum cut-off parameter.
: GDNStep : number of steps in path search
: If the number of steps is too small, then
: the program will give a warning message.
:
: III. WARNING MESSAGES
:
: Risk(i+1)>=Risk(i) in path
: Chaotic behaviour of risk evolution.
: The error rate was still decreasing at the end
: By construction the Risk should always decrease.
: However, if the training sample is too small or
: the model is overtrained, such warnings can
: occur.
: The warnings can safely be ignored if only a
: few (<3) occur. If more warnings are generated,
: the fitting fails.
: A remedy may be to increase the value
: GDValidEveFrac to 1.0 (or a larger value).
: In addition, if GDPathEveFrac is too high
: the same warnings may occur since the events
: used for error estimation are also used for
: path estimation.
: Another possibility is to modify the model -
: See above on tuning the rule ensemble.
:
: The error rate was still decreasing at the end of the path
: Too few steps in path! Increase GDNSteps.
:
: Reached minimum early in the search
: Minimum was found early in the fitting. This
: may indicate that the used step size GDStep
: was too large. Reduce it and rerun.
: If the results still are not OK, modify the
: model either by modifying the rule ensemble
: or add/remove linear terms
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
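: Several of the tuning options discussed in the help text reappear in the summary below (NTrees=20, GDNSteps=10000, GDTau=-1 for automatic estimation, MinImp=0.001). A booking string consistent with that output would be, approximately:

factory.BookMethod(&loader, TMVA::Types::kRuleFit, "RuleFit",
    "H:!V:RuleFitModule=RFTMVA:Model=ModRuleLinear:MinImp=0.001:RuleMinDist=0.001:"
    "NTrees=20:fEventsMin=0.01:fEventsMax=0.5:GDTau=-1.0:GDTauPrec=0.01:"
    "GDStep=0.01:GDNSteps=10000:GDErrScale=1.02");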
RuleFit : -------------------RULE ENSEMBLE SUMMARY------------------------
: Tree training method : AdaBoost
: Number of events per tree : 2000
: Number of trees : 20
: Number of generated rules : 196
: Idem, after cleanup : 80
: Average number of cuts per rule : 3.01
: Spread in number of cuts per rules : 1.23
: ----------------------------------------------------------------
:
: GD path scan - the scan stops when the max num. of steps is reached or a min is found
: Estimating the cutoff parameter tau. The estimated time is a pessimistic maximum.
: Best path found with tau = 0.0000 after 2.23 sec
: Fitting model...
<WARNING> :
: Minimisation elapsed time : 1.26 sec
: ----------------------------------------------------------------
: Found minimum at step 10000 with error = 0.552378
: Reason for ending loop: end of loop reached
: ----------------------------------------------------------------
: The error rate was still decreasing at the end of the path
: Increase number of steps (GDNSteps).
: Removed 28 out of a total of 80 rules with importance < 0.001
:
: ================================================================
: M o d e l
: ================================================================
RuleFit : Offset (a0) = 9.46803
: ------------------------------------
: Linear model (weights unnormalised)
: ------------------------------------
: Variable : Weights : Importance
: ------------------------------------
: myvar1 : -6.338e-01 : 0.472
: myvar2 : -4.488e-01 : 0.209
: var3 : -2.810e-01 : 0.129
: var4 : 1.850e+00 : 1.000
: ------------------------------------
: Number of rules = 52
: Printing the first 10 rules, ordered by importance.
: Rule 1 : Importance = 0.4294
: Cut 1 : -0.708 < var4
: Rule 2 : Importance = 0.3676
: Cut 1 : var3 < -0.0812
: Rule 3 : Importance = 0.3363
: Cut 1 : -0.0812 < var3
: Rule 4 : Importance = 0.2934
: Cut 1 : -0.877 < var3
: Cut 2 : 0.271 < var4
: Rule 5 : Importance = 0.2706
: Cut 1 : myvar1 < 2.83
: Cut 2 : -1.67 < var3
: Rule 6 : Importance = 0.2387
: Cut 1 : myvar1 < 1.46
: Cut 2 : var4 < 0.271
: Rule 7 : Importance = 0.1904
: Cut 1 : var4 < -0.708
: Rule 8 : Importance = 0.1897
: Cut 1 : var3 < 0.256
: Cut 2 : var4 < -0.708
: Rule 9 : Importance = 0.1689
: Cut 1 : myvar1 < -2.85
: Rule 10 : Importance = 0.1611
: Cut 1 : -2.85 < myvar1 < 2.68
: Skipping the next 42 rules
: ================================================================
:
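Schematically, the model printed above scores an event x as F(x) = a0 + Σ_i w_i·x_i + Σ_k c_k·r_k(x), where a0 is the offset, the w_i are the linear weights listed per variable, each rule r_k(x) is 1 if the event passes all of that rule's cuts and 0 otherwise, and the c_k are the fitted rule coefficients. (This is a schematic paraphrase of the RuleFit model, not part of the macro output.)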
<WARNING> : No input variable directory found - BUG?
: Elapsed time for training with 2000 events: 3.53 sec
RuleFit : [dataset] : Evaluation of RuleFit on training sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.00393 sec
: Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_RuleFit.weights.xml␛[0m
: Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_RuleFit.class.C␛[0m
: TMVAC.root:/dataset/Method_RuleFit/RuleFit
Factory : Training finished
:
: Ranking input variables (method specific)...
: No variable ranking supplied by classifier: Cuts
: No variable ranking supplied by classifier: CutsD
Likelihood : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Delta Separation
: -------------------------------------
: 1 : var4 : 5.959e-02
: 2 : myvar1 : 3.033e-04
: 3 : myvar2 : -2.045e-02
: 4 : var3 : -2.655e-02
: -------------------------------------
LikelihoodPCA : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Delta Separation
: -------------------------------------
: 1 : var4 : 2.888e-01
: 2 : myvar1 : 6.310e-02
: 3 : var3 : 1.768e-02
: 4 : myvar2 : 1.165e-02
: -------------------------------------
: No variable ranking supplied by classifier: PDERS
PDEFoam : Ranking result (top variable is best ranked)
: ----------------------------------------
: Rank : Variable : Variable Importance
: ----------------------------------------
: 1 : var4 : 3.830e-01
: 2 : myvar1 : 2.979e-01
: 3 : var3 : 1.915e-01
: 4 : myvar2 : 1.277e-01
: ----------------------------------------
: No variable ranking supplied by classifier: KNN
LD : Ranking result (top variable is best ranked)
: ---------------------------------
: Rank : Variable : Discr. power
: ---------------------------------
: 1 : var4 : 7.053e-01
: 2 : myvar1 : 3.094e-01
: 3 : var3 : 1.423e-01
: 4 : myvar2 : 1.019e-01
: ---------------------------------
: No variable ranking supplied by classifier: FDA_GA
MLPBNN : Ranking result (top variable is best ranked)
: -------------------------------
: Rank : Variable : Importance
: -------------------------------
: 1 : var4 : 1.360e+00
: 2 : myvar2 : 1.009e+00
: 3 : myvar1 : 8.834e-01
: 4 : var3 : 3.562e-01
: -------------------------------
: No variable ranking supplied by classifier: DNN_CPU
: No variable ranking supplied by classifier: SVM
BDT : Ranking result (top variable is best ranked)
: ----------------------------------------
: Rank : Variable : Variable Importance
: ----------------------------------------
: 1 : var4 : 2.697e-01
: 2 : myvar1 : 2.467e-01
: 3 : myvar2 : 2.460e-01
: 4 : var3 : 2.377e-01
: ----------------------------------------
RuleFit : Ranking result (top variable is best ranked)
: -------------------------------
: Rank : Variable : Importance
: -------------------------------
: 1 : var4 : 1.000e+00
: 2 : myvar1 : 6.981e-01
: 3 : var3 : 5.947e-01
: 4 : myvar2 : 4.105e-01
: -------------------------------
TH1.Print Name = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 15.6309
TH1.Print Name = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 15.3436
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_Cuts.weights.xml␛[0m
: Read cuts optimised using sample of MC events
: Reading 100 signal efficiency bins for 4 variables
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_CutsD.weights.xml␛[0m
: Read cuts optimised using sample of MC events
: Reading 100 signal efficiency bins for 4 variables
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_Likelihood.weights.xml␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_LikelihoodPCA.weights.xml␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_PDERS.weights.xml␛[0m
: signal and background scales: 0.001 0.001
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_PDEFoam.weights.xml␛[0m
: Read foams from file: ␛[0;36mdataset/weights/TMVAClassification_PDEFoam.weights_foams.root␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_KNN.weights.xml␛[0m
: Creating kd-tree with 2000 events
: Computing scale factor for 1d distributions: (ifrac, bottom, top) = (80%, 10%, 90%)
ModulekNN : Optimizing tree for 4 variables with 2000 values
: <Fill> Class 1 has 1000 events
: <Fill> Class 2 has 1000 events
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_LD.weights.xml␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_FDA_GA.weights.xml␛[0m
: User-defined formula string : "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
: TFormula-compatible formula string: "[0]+[1]*[5]+[2]*[6]+[3]*[7]+[4]*[8]"
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_MLPBNN.weights.xml␛[0m
MLPBNN : Building Network.
: Initializing weights
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_DNN_CPU.weights.xml␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_SVM.weights.xml␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_BDT.weights.xml␛[0m
: Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_RuleFit.weights.xml␛[0m
Factory : ␛[1mTest all methods␛[0m
Factory : Test method: Cuts for Classification performance
:
Cuts : [dataset] : Evaluation of Cuts on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.00223 sec
Factory : Test method: CutsD for Classification performance
:
CutsD : [dataset] : Evaluation of CutsD on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.00734 sec
Factory : Test method: Likelihood for Classification performance
:
Likelihood : [dataset] : Evaluation of Likelihood on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.0101 sec
Factory : Test method: LikelihoodPCA for Classification performance
:
LikelihoodPCA : [dataset] : Evaluation of LikelihoodPCA on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.0196 sec
Factory : Test method: PDERS for Classification performance
:
PDERS : [dataset] : Evaluation of PDERS on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.921 sec
Factory : Test method: PDEFoam for Classification performance
:
PDEFoam : [dataset] : Evaluation of PDEFoam on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.0745 sec
Factory : Test method: KNN for Classification performance
:
KNN : [dataset] : Evaluation of KNN on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.189 sec
Factory : Test method: LD for Classification performance
:
LD : [dataset] : Evaluation of LD on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.00307 sec
: Dataset[dataset] : Evaluation of LD on testing sample
Factory : Test method: FDA_GA for Classification performance
:
FDA_GA : [dataset] : Evaluation of FDA_GA on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.00319 sec
Factory : Test method: MLPBNN for Classification performance
:
MLPBNN : [dataset] : Evaluation of MLPBNN on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.0173 sec
Factory : Test method: DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.12216 0.20255 [ -1.0614 1.0246 ]
: myvar2: -0.12333 0.30492 [ -1.2280 0.99911 ]
: var3: 0.097148 0.21347 [ -1.0158 0.99984 ]
: var4: 0.17495 0.23851 [ -1.2661 1.0694 ]
: -----------------------------------------------------------
DNN_CPU : [dataset] : Evaluation of DNN_CPU on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.329 sec
Factory : Test method: SVM for Classification performance
:
SVM : [dataset] : Evaluation of SVM on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.283 sec
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.56 sec
Factory : Test method: RuleFit for Classification performance
:
RuleFit : [dataset] : Evaluation of RuleFit on testing sample (10000 events)
: Elapsed time for evaluation of 10000 events: 0.0129 sec
Factory : ␛[1mEvaluate all methods␛[0m
Factory : Evaluate classifier: Cuts
:
<WARNING> : You have asked for histogram MVA_EFF_BvsS which does not seem to exist in *Results* .. better don't use it
<WARNING> : You have asked for histogram EFF_BVSS_TR which does not seem to exist in *Results* .. better don't use it
TFHandler_Cuts : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: CutsD
:
<WARNING> : You have asked for histogram MVA_EFF_BvsS which does not seem to exist in *Results* .. better don't use it
TFHandler_CutsD : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.14555 1.0166 [ -5.5736 5.0206 ]
: myvar2: -0.093417 1.0353 [ -3.8442 3.7856 ]
: var3: -0.096857 1.0078 [ -4.5469 4.5058 ]
: var4: 0.65748 0.95864 [ -4.0893 3.7760 ]
: -----------------------------------------------------------
<WARNING> : You have asked for histogram EFF_BVSS_TR which does not seem to exist in *Results* .. better don't use it
TFHandler_CutsD : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.17586 1.0000 [ -5.6401 4.8529 ]
: myvar2: 0.026952 1.0000 [ -2.9292 3.7065 ]
: var3: -0.11549 1.0000 [ -4.1792 3.5180 ]
: var4: 0.34819 1.0000 [ -3.3363 3.3963 ]
: -----------------------------------------------------------
TFHandler_CutsD : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: -0.14555 1.0166 [ -5.5736 5.0206 ]
: myvar2: -0.093417 1.0353 [ -3.8442 3.7856 ]
: var3: -0.096857 1.0078 [ -4.5469 4.5058 ]
: var4: 0.65748 0.95864 [ -4.0893 3.7760 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: Likelihood
:
Likelihood : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_Likelihood : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: LikelihoodPCA
:
TFHandler_LikelihoodPCA : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 1.1147 2.2628 [ -12.508 10.719 ]
: myvar2: -0.25554 1.1225 [ -4.1578 3.8995 ]
: var3: -0.19401 0.58225 [ -2.2950 1.8880 ]
: var4: -0.32038 0.33412 [ -1.3929 0.88819 ]
: -----------------------------------------------------------
LikelihoodPCA : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_LikelihoodPCA : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 1.1147 2.2628 [ -12.508 10.719 ]
: myvar2: -0.25554 1.1225 [ -4.1578 3.8995 ]
: var3: -0.19401 0.58225 [ -2.2950 1.8880 ]
: var4: -0.32038 0.33412 [ -1.3929 0.88819 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: PDERS
:
PDERS : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_PDERS : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: PDEFoam
:
PDEFoam : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_PDEFoam : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: KNN
:
KNN : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_KNN : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: LD
:
LD : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Also filling probability and rarity histograms (on request)...
TFHandler_LD : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: FDA_GA
:
FDA_GA : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_FDA_GA : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: MLPBNN
:
TFHandler_MLPBNN : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.12216 0.20255 [ -1.0614 1.0246 ]
: myvar2: -0.12333 0.30492 [ -1.2280 0.99911 ]
: var3: 0.097148 0.21347 [ -1.0158 0.99984 ]
: var4: 0.17495 0.23851 [ -1.2661 1.0694 ]
: -----------------------------------------------------------
MLPBNN : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_MLPBNN : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.12216 0.20255 [ -1.0614 1.0246 ]
: myvar2: -0.12333 0.30492 [ -1.2280 0.99911 ]
: var3: 0.097148 0.21347 [ -1.0158 0.99984 ]
: var4: 0.17495 0.23851 [ -1.2661 1.0694 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: DNN_CPU
:
DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.089214 0.20183 [ -1.0000 1.0000 ]
: myvar2: -0.090751 0.29609 [ -1.0000 1.0000 ]
: var3: 0.059878 0.21436 [ -1.0000 1.0000 ]
: var4: 0.11587 0.24261 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.12216 0.20255 [ -1.0614 1.0246 ]
: myvar2: -0.12333 0.30492 [ -1.2280 0.99911 ]
: var3: 0.097148 0.21347 [ -1.0158 0.99984 ]
: var4: 0.17495 0.23851 [ -1.2661 1.0694 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: SVM
:
TFHandler_SVM : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.12216 0.20255 [ -1.0614 1.0246 ]
: myvar2: -0.12333 0.30492 [ -1.2280 0.99911 ]
: var3: 0.097148 0.21347 [ -1.0158 0.99984 ]
: var4: 0.17495 0.23851 [ -1.2661 1.0694 ]
: -----------------------------------------------------------
SVM : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_SVM : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.12216 0.20255 [ -1.0614 1.0246 ]
: myvar2: -0.12333 0.30492 [ -1.2280 0.99911 ]
: var3: 0.097148 0.21347 [ -1.0158 0.99984 ]
: var4: 0.17495 0.23851 [ -1.2661 1.0694 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_BDT : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: RuleFit
:
RuleFit : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_RuleFit : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: myvar1: 0.21781 1.7248 [ -9.8605 7.9024 ]
: myvar2: -0.062175 1.1106 [ -4.0854 4.0259 ]
: var3: 0.16451 1.0589 [ -5.3563 4.6422 ]
: var4: 0.43566 1.2253 [ -6.9675 5.0307 ]
: -----------------------------------------------------------
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset LD : 0.921
: dataset DNN_CPU : 0.921
: dataset MLPBNN : 0.919
: dataset LikelihoodPCA : 0.913
: dataset CutsD : 0.908
: dataset FDA_GA : 0.901
: dataset SVM : 0.898
: dataset RuleFit : 0.881
: dataset BDT : 0.881
: dataset KNN : 0.838
: dataset PDEFoam : 0.822
: dataset PDERS : 0.797
: dataset Cuts : 0.792
: dataset Likelihood : 0.760
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset LD : 0.364 (0.438) 0.781 (0.758) 0.929 (0.920)
: dataset DNN_CPU : 0.353 (0.439) 0.777 (0.757) 0.929 (0.919)
: dataset MLPBNN : 0.343 (0.432) 0.777 (0.768) 0.926 (0.920)
: dataset LikelihoodPCA : 0.288 (0.316) 0.756 (0.729) 0.920 (0.913)
: dataset CutsD : 0.262 (0.449) 0.735 (0.709) 0.914 (0.890)
: dataset FDA_GA : 0.298 (0.342) 0.713 (0.674) 0.899 (0.897)
: dataset SVM : 0.321 (0.332) 0.711 (0.725) 0.894 (0.898)
: dataset RuleFit : 0.075 (0.077) 0.667 (0.718) 0.893 (0.896)
: dataset BDT : 0.275 (0.402) 0.661 (0.731) 0.870 (0.899)
: dataset KNN : 0.195 (0.252) 0.561 (0.642) 0.810 (0.843)
: dataset PDEFoam : 0.173 (0.219) 0.499 (0.541) 0.761 (0.773)
: dataset PDERS : 0.158 (0.171) 0.465 (0.492) 0.750 (0.756)
: dataset Cuts : 0.112 (0.133) 0.444 (0.496) 0.741 (0.758)
: dataset Likelihood : 0.082 (0.096) 0.388 (0.415) 0.690 (0.695)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 10000 events
:
Dataset:dataset : Created tree 'TrainTree' with 2000 events
:
Factory : ␛[1mThank you for using TMVA!␛[0m
: ␛[1mFor citation information, please visit: http://tmva.sf.net/citeTMVA.html␛[0m
==> Wrote root file: TMVAC.root
==> TMVAClassification is done!
(int) 0
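Beyond the TMVAGui macros, the output file can also be inspected directly from a short macro. Below is a minimal sketch, assuming the names produced by this run ("TMVAC.root", the tree "dataset/TestTree", and its "classID" branch, with 0 marking signal and 1 background events):

#include <iostream>
#include <memory>
#include "TFile.h"
#include "TTree.h"
void inspectTMVAC()
{
   // Open the TMVA output file written by TMVAClassification
   std::unique_ptr<TFile> f{TFile::Open("TMVAC.root")};
   if (!f || f->IsZombie()) return;
   // The test tree holds the input variables, spectators, and every booked MVA response
   auto *testTree = f->Get<TTree>("dataset/TestTree");
   if (!testTree) return;
   std::cout << "Signal test events:     " << testTree->GetEntries("classID==0") << std::endl;
   std::cout << "Background test events: " << testTree->GetEntries("classID==1") << std::endl;
}
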
#include <cstdlib>
#include <iostream>
#include <map>
#include <string>
#include "TChain.h"
#include "TFile.h"
#include "TTree.h"
#include "TString.h"
#include "TObjString.h"
#include "TSystem.h"
#include "TROOT.h"
#include "TMVA/Factory.h"
#include "TMVA/Tools.h"
#include "TMVA/TMVAGui.h"
int TMVAClassification( TString myMethodList = "" )
{
// The explicit loading of the shared libTMVA is done in TMVAlogon.C, defined in .rootrc.
// If you use a private .rootrc, or run from a different directory, please copy the
// corresponding lines from .rootrc.
// Methods to be processed can be given as an argument; use format:
//
// mylinux~> root -l TMVAClassification.C\‍(\"myMethod1,myMethod2,myMethod3\"\‍)
//---------------------------------------------------------------
// This loads the library
TMVA::Tools::Instance();
// Default MVA methods to be trained + tested
std::map<std::string,int> Use;
// Cut optimisation
Use["Cuts"] = 1;
Use["CutsD"] = 1;
Use["CutsPCA"] = 0;
Use["CutsGA"] = 0;
Use["CutsSA"] = 0;
//
// 1-dimensional likelihood ("naive Bayes estimator")
Use["Likelihood"] = 1;
Use["LikelihoodD"] = 0; // the "D" extension indicates decorrelated input variables (see option strings)
Use["LikelihoodPCA"] = 1; // the "PCA" extension indicates PCA-transformed input variables (see option strings)
Use["LikelihoodKDE"] = 0;
Use["LikelihoodMIX"] = 0;
//
// Multidimensional likelihood and Nearest-Neighbour methods
Use["PDERS"] = 1;
Use["PDERSD"] = 0;
Use["PDERSPCA"] = 0;
Use["PDEFoam"] = 1;
Use["PDEFoamBoost"] = 0; // uses generalised MVA method boosting
Use["KNN"] = 1; // k-nearest neighbour method
//
// Linear Discriminant Analysis
Use["LD"] = 1; // Linear Discriminant identical to Fisher
Use["Fisher"] = 0;
Use["FisherG"] = 0;
Use["BoostedFisher"] = 0; // uses generalised MVA method boosting
Use["HMatrix"] = 0;
//
// Function Discriminant analysis
Use["FDA_GA"] = 1; // minimisation of user-defined function using Genetics Algorithm
Use["FDA_SA"] = 0;
Use["FDA_MC"] = 0;
Use["FDA_MT"] = 0;
Use["FDA_GAMT"] = 0;
Use["FDA_MCMT"] = 0;
//
// Neural Networks (all are feed-forward Multilayer Perceptrons)
Use["MLP"] = 0; // Recommended ANN
Use["MLPBFGS"] = 0; // Recommended ANN with optional training method
Use["MLPBNN"] = 1; // Recommended ANN with BFGS training method and bayesian regulator
Use["CFMlpANN"] = 0; // Depreciated ANN from ALEPH
Use["TMlpANN"] = 0; // ROOT's own ANN
#ifdef R__HAS_TMVAGPU
Use["DNN_GPU"] = 1; // CUDA-accelerated DNN training.
#else
Use["DNN_GPU"] = 0;
#endif
#ifdef R__HAS_TMVACPU
Use["DNN_CPU"] = 1; // Multi-core accelerated DNN.
#else
Use["DNN_CPU"] = 0;
#endif
//
// Support Vector Machine
Use["SVM"] = 1;
//
// Boosted Decision Trees
Use["BDT"] = 1; // uses Adaptive Boost
Use["BDTG"] = 0; // uses Gradient Boost
Use["BDTB"] = 0; // uses Bagging
Use["BDTD"] = 0; // decorrelation + Adaptive Boost
Use["BDTF"] = 0; // allow usage of fisher discriminant for node splitting
//
// Friedman's RuleFit method, i.e., an optimised series of cuts ("rules")
Use["RuleFit"] = 1;
// ---------------------------------------------------------------
std::cout << std::endl;
std::cout << "==> Start TMVAClassification" << std::endl;
// Select methods (don't look at this code - not of interest)
if (myMethodList != "") {
for (std::map<std::string,int>::iterator it = Use.begin(); it != Use.end(); it++) it->second = 0;
std::vector<TString> mlist = TMVA::gTools().SplitString( myMethodList, ',' );
for (UInt_t i=0; i<mlist.size(); i++) {
std::string regMethod(mlist[i]);
if (Use.find(regMethod) == Use.end()) {
std::cout << "Method \"" << regMethod << "\" not known in TMVA under this name. Choose among the following:" << std::endl;
for (std::map<std::string,int>::iterator it = Use.begin(); it != Use.end(); it++) std::cout << it->first << " ";
std::cout << std::endl;
return 1;
}
Use[regMethod] = 1;
}
}
// --------------------------------------------------------------------------------------------------
// Here the preparation phase begins
// Read training and test data
// (it is also possible to use ASCII format as input -> see TMVA Users Guide)
// Set the cache directory for the TFile to the current directory. The input
// data file will be downloaded here if not yet present, then read directly
// from the cache path.
TFile::SetCacheFileDir(".");
std::unique_ptr<TFile> input{TFile::Open("http://root.cern/files/tmva_class_example.root", "CACHEREAD")};
if (!input || input->IsZombie()) {
throw std::runtime_error("ERROR: could not open data file");
}
std::cout << "--- TMVAClassification : Using input file: " << input->GetName() << std::endl;
// Register the training and test trees
TTree *signalTree = (TTree*)input->Get("TreeS");
TTree *background = (TTree*)input->Get("TreeB");
// Create a ROOT output file where TMVA will store ntuples, histograms, etc.
TString outfileName("TMVAC.root");
std::unique_ptr<TFile> outputFile{TFile::Open(outfileName, "RECREATE")};
if (!outputFile || outputFile->IsZombie()) {
throw std::runtime_error("ERROR: could not open output file");
}
// Create the factory object. Later you can choose the methods
// whose performance you'd like to investigate. The factory is
// the only TMVA object you have to interact with
//
// The first argument is the base of the name of all the
// weight files in the directory weights/
//
// The second argument is the output file for the training results
// All TMVA output can be suppressed by removing the "!" (not) in
// front of the "Silent" argument in the option string
auto factory = std::make_unique<TMVA::Factory>(
"TMVAClassification", outputFile.get(),
"!V:!Silent:Color:DrawProgressBar:Transformations=I;D;P;G,D:AnalysisType=Classification");
auto dataloader_raii = std::make_unique<TMVA::DataLoader>("dataset");
auto *dataloader = dataloader_raii.get();
// If you wish to modify default settings
// (please check "src/Config.h" to see all available global options)
//
// (TMVA::gConfig().GetVariablePlotting()).fTimesRMS = 8.0;
// (TMVA::gConfig().GetIONames()).fWeightFileDir = "myWeightDirectory";
// Define the input variables that shall be used for the MVA training
// note that you may also use variable expressions, such as: "3*var1/var2*abs(var3)"
// [all types of expressions that can also be parsed by TTree::Draw( "expression" )]
dataloader->AddVariable( "myvar1 := var1+var2", 'F' );
dataloader->AddVariable( "myvar2 := var1-var2", "Expression 2", "", 'F' );
dataloader->AddVariable( "var3", "Variable 3", "units", 'F' );
dataloader->AddVariable( "var4", "Variable 4", "units", 'F' );
// You can add so-called "Spectator variables", which are not used in the MVA training,
// but will appear in the final "TestTree" produced by TMVA. This TestTree will contain the
// input variables, the response values of all trained MVAs, and the spectator variables
dataloader->AddSpectator( "spec1 := var1*2", "Spectator 1", "units", 'F' );
dataloader->AddSpectator( "spec2 := var1*3", "Spectator 2", "units", 'F' );
// global event weights per tree (see below for setting event-wise weights)
Double_t signalWeight = 1.0;
Double_t backgroundWeight = 1.0;
// You can add an arbitrary number of signal or background trees
dataloader->AddSignalTree ( signalTree, signalWeight );
dataloader->AddBackgroundTree( background, backgroundWeight );
// To give different trees for training and testing, do as follows:
//
// dataloader->AddSignalTree( signalTrainingTree, signalTrainWeight, "Training" );
// dataloader->AddSignalTree( signalTestTree, signalTestWeight, "Test" );
// Use the following code instead of the above two or four lines to add signal and background
// training and test events "by hand"
// NOTE that in this case one should not give expressions (such as "var1+var2") in the input
// variable definition, but simply compute the expression before adding the event
// ```cpp
// // --- begin ----------------------------------------------------------
// std::vector<Double_t> vars( 4 ); // vector has size of number of input variables
// Float_t treevars[4], weight;
//
// // Signal
// for (UInt_t ivar=0; ivar<4; ivar++) signalTree->SetBranchAddress( Form( "var%i", ivar+1 ), &(treevars[ivar]) );
// for (UInt_t i=0; i<signalTree->GetEntries(); i++) {
// signalTree->GetEntry(i);
// for (UInt_t ivar=0; ivar<4; ivar++) vars[ivar] = treevars[ivar];
// // add training and test events; here: first half is training, second is testing
// // note that the weight can also be event-wise
// if (i < signalTree->GetEntries()/2.0) dataloader->AddSignalTrainingEvent( vars, signalWeight );
// else dataloader->AddSignalTestEvent ( vars, signalWeight );
// }
//
// // Background (has event weights)
// background->SetBranchAddress( "weight", &weight );
// for (UInt_t ivar=0; ivar<4; ivar++) background->SetBranchAddress( Form( "var%i", ivar+1 ), &(treevars[ivar]) );
// for (UInt_t i=0; i<background->GetEntries(); i++) {
// background->GetEntry(i);
// for (UInt_t ivar=0; ivar<4; ivar++) vars[ivar] = treevars[ivar];
// // add training and test events; here: first half is training, second is testing
// // note that the weight can also be event-wise
// if (i < background->GetEntries()/2) dataloader->AddBackgroundTrainingEvent( vars, backgroundWeight*weight );
// else dataloader->AddBackgroundTestEvent ( vars, backgroundWeight*weight );
// }
// // --- end ------------------------------------------------------------
// ```
// End of tree registration
// Set individual event weights (the variables must exist in the original TTree)
// - for signal : `dataloader->SetSignalWeightExpression ("weight1*weight2");`
// - for background: `dataloader->SetBackgroundWeightExpression("weight1*weight2");`
dataloader->SetBackgroundWeightExpression( "weight" );
// Apply additional cuts on the signal and background samples (can be different)
TCut mycuts = ""; // for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
TCut mycutb = ""; // for example: TCut mycutb = "abs(var1)<0.5";
// Tell the dataloader how to use the training and testing events
//
// If no numbers of events are given, half of the events in the tree are used
// for training, and the other half for testing:
//
// dataloader->PrepareTrainingAndTestTree( mycut, "SplitMode=random:!V" );
//
// To also specify the number of testing events, use:
//
// dataloader->PrepareTrainingAndTestTree( mycut,
// "NSigTrain=3000:NBkgTrain=3000:NSigTest=3000:NBkgTest=3000:SplitMode=Random:!V" );
dataloader->PrepareTrainingAndTestTree( mycuts, mycutb,
"nTrain_Signal=1000:nTrain_Background=1000:SplitMode=Random:NormMode=NumEvents:!V" );
// ### Book MVA methods
//
// Please look up the various method configuration options in the corresponding cxx files, eg:
// src/MethodCuts.cxx, etc., or here: http://tmva.sourceforge.net/old_site/optionRef.html
// It is possible to preset ranges in the option string in which the cut optimisation should be done:
// "...:CutRangeMin[2]=-1:CutRangeMax[2]=1:...", where [2] is the third input variable
// Cut optimisation
if (Use["Cuts"])
factory->BookMethod( dataloader, TMVA::Types::kCuts, "Cuts",
"!H:!V:FitMethod=MC:EffSel:SampleSize=200000:VarProp=FSmart" );
if (Use["CutsD"])
factory->BookMethod( dataloader, TMVA::Types::kCuts, "CutsD",
"!H:!V:FitMethod=MC:EffSel:SampleSize=200000:VarProp=FSmart:VarTransform=Decorrelate" );
if (Use["CutsPCA"])
factory->BookMethod( dataloader, TMVA::Types::kCuts, "CutsPCA",
"!H:!V:FitMethod=MC:EffSel:SampleSize=200000:VarProp=FSmart:VarTransform=PCA" );
if (Use["CutsGA"])
factory->BookMethod( dataloader, TMVA::Types::kCuts, "CutsGA",
"H:!V:FitMethod=GA:CutRangeMin[0]=-10:CutRangeMax[0]=10:VarProp[1]=FMax:EffSel:Steps=30:Cycles=3:PopSize=400:SC_steps=10:SC_rate=5:SC_factor=0.95" );
if (Use["CutsSA"])
factory->BookMethod( dataloader, TMVA::Types::kCuts, "CutsSA",
"!H:!V:FitMethod=SA:EffSel:MaxCalls=150000:KernelTemp=IncAdaptive:InitialTemp=1e+6:MinTemp=1e-6:Eps=1e-10:UseDefaultScale" );
// Likelihood ("naive Bayes estimator")
if (Use["Likelihood"])
factory->BookMethod( dataloader, TMVA::Types::kLikelihood, "Likelihood",
"H:!V:TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmoothBkg[1]=10:NSmooth=1:NAvEvtPerBin=50" );
// Decorrelated likelihood
if (Use["LikelihoodD"])
factory->BookMethod( dataloader, TMVA::Types::kLikelihood, "LikelihoodD",
"!H:!V:TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmooth=5:NAvEvtPerBin=50:VarTransform=Decorrelate" );
// PCA-transformed likelihood
if (Use["LikelihoodPCA"])
factory->BookMethod( dataloader, TMVA::Types::kLikelihood, "LikelihoodPCA",
"!H:!V:!TransformOutput:PDFInterpol=Spline2:NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmooth=5:NAvEvtPerBin=50:VarTransform=PCA" );
// Use a kernel density estimator to approximate the PDFs
if (Use["LikelihoodKDE"])
factory->BookMethod( dataloader, TMVA::Types::kLikelihood, "LikelihoodKDE",
"!H:!V:!TransformOutput:PDFInterpol=KDE:KDEtype=Gauss:KDEiter=Adaptive:KDEFineFactor=0.3:KDEborder=None:NAvEvtPerBin=50" );
// Use a variable-dependent mix of splines and kernel density estimator
if (Use["LikelihoodMIX"])
factory->BookMethod( dataloader, TMVA::Types::kLikelihood, "LikelihoodMIX",
"!H:!V:!TransformOutput:PDFInterpolSig[0]=KDE:PDFInterpolBkg[0]=KDE:PDFInterpolSig[1]=KDE:PDFInterpolBkg[1]=KDE:PDFInterpolSig[2]=Spline2:PDFInterpolBkg[2]=Spline2:PDFInterpolSig[3]=Spline2:PDFInterpolBkg[3]=Spline2:KDEtype=Gauss:KDEiter=Nonadaptive:KDEborder=None:NAvEvtPerBin=50" );
// Test the multi-dimensional probability density estimator
// here are the options strings for the MinMax and RMS methods, respectively:
//
// "!H:!V:VolumeRangeMode=MinMax:DeltaFrac=0.2:KernelEstimator=Gauss:GaussSigma=0.3" );
// "!H:!V:VolumeRangeMode=RMS:DeltaFrac=3:KernelEstimator=Gauss:GaussSigma=0.3" );
if (Use["PDERS"])
factory->BookMethod( dataloader, TMVA::Types::kPDERS, "PDERS",
"!H:!V:NormTree=T:VolumeRangeMode=Adaptive:KernelEstimator=Gauss:GaussSigma=0.3:NEventsMin=400:NEventsMax=600" );
if (Use["PDERSD"])
factory->BookMethod( dataloader, TMVA::Types::kPDERS, "PDERSD",
"!H:!V:VolumeRangeMode=Adaptive:KernelEstimator=Gauss:GaussSigma=0.3:NEventsMin=400:NEventsMax=600:VarTransform=Decorrelate" );
if (Use["PDERSPCA"])
factory->BookMethod( dataloader, TMVA::Types::kPDERS, "PDERSPCA",
"!H:!V:VolumeRangeMode=Adaptive:KernelEstimator=Gauss:GaussSigma=0.3:NEventsMin=400:NEventsMax=600:VarTransform=PCA" );
// Multi-dimensional likelihood estimator using self-adapting phase-space binning
if (Use["PDEFoam"])
factory->BookMethod( dataloader, TMVA::Types::kPDEFoam, "PDEFoam",
"!H:!V:SigBgSeparate=F:TailCut=0.001:VolFrac=0.0666:nActiveCells=500:nSampl=2000:nBin=5:Nmin=100:Kernel=None:Compress=T" );
if (Use["PDEFoamBoost"])
factory->BookMethod( dataloader, TMVA::Types::kPDEFoam, "PDEFoamBoost",
"!H:!V:Boost_Num=30:Boost_Transform=linear:SigBgSeparate=F:MaxDepth=4:UseYesNoCell=T:DTLogic=MisClassificationError:FillFoamWithOrigWeights=F:TailCut=0:nActiveCells=500:nBin=20:Nmin=400:Kernel=None:Compress=T" );
// K-Nearest Neighbour classifier (KNN)
if (Use["KNN"])
factory->BookMethod( dataloader, TMVA::Types::kKNN, "KNN",
"H:nkNN=20:ScaleFrac=0.8:SigmaFact=1.0:Kernel=Gaus:UseKernel=F:UseWeight=T:!Trim" );
// H-Matrix (chi-squared) method
if (Use["HMatrix"])
factory->BookMethod( dataloader, TMVA::Types::kHMatrix, "HMatrix", "!H:!V:VarTransform=None" );
// Linear discriminant (same as Fisher discriminant)
if (Use["LD"])
factory->BookMethod( dataloader, TMVA::Types::kLD, "LD", "H:!V:VarTransform=None:CreateMVAPdfs:PDFInterpolMVAPdf=Spline2:NbinsMVAPdf=50:NsmoothMVAPdf=10" );
// Fisher discriminant (same as LD)
if (Use["Fisher"])
factory->BookMethod( dataloader, TMVA::Types::kFisher, "Fisher", "H:!V:Fisher:VarTransform=None:CreateMVAPdfs:PDFInterpolMVAPdf=Spline2:NbinsMVAPdf=50:NsmoothMVAPdf=10" );
// Fisher with Gauss-transformed input variables
if (Use["FisherG"])
factory->BookMethod( dataloader, TMVA::Types::kFisher, "FisherG", "H:!V:VarTransform=Gauss" );
// Composite classifier: ensemble (tree) of boosted Fisher classifiers
if (Use["BoostedFisher"])
factory->BookMethod( dataloader, TMVA::Types::kFisher, "BoostedFisher",
"H:!V:Boost_Num=20:Boost_Transform=log:Boost_Type=AdaBoost:Boost_AdaBoostBeta=0.2:!Boost_DetailedMonitoring" );
// Function discriminant analysis (FDA) -- test of various fitters; the recommended one is Minuit (or GA or SA)
if (Use["FDA_MC"])
factory->BookMethod( dataloader, TMVA::Types::kFDA, "FDA_MC",
"H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=MC:SampleSize=100000:Sigma=0.1" );
if (Use["FDA_GA"]) // can also use Simulated Annealing (SA) algorithm (see Cuts_SA options])
factory->BookMethod( dataloader, TMVA::Types::kFDA, "FDA_GA",
"H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=GA:PopSize=100:Cycles=2:Steps=5:Trim=True:SaveBestGen=1" );
if (Use["FDA_SA"]) // can also use Simulated Annealing (SA) algorithm (see Cuts_SA options])
factory->BookMethod( dataloader, TMVA::Types::kFDA, "FDA_SA",
"H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=SA:MaxCalls=15000:KernelTemp=IncAdaptive:InitialTemp=1e+6:MinTemp=1e-6:Eps=1e-10:UseDefaultScale" );
if (Use["FDA_MT"])
factory->BookMethod( dataloader, TMVA::Types::kFDA, "FDA_MT",
"H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=MINUIT:ErrorLevel=1:PrintLevel=-1:FitStrategy=2:UseImprove:UseMinos:SetBatch" );
if (Use["FDA_GAMT"])
factory->BookMethod( dataloader, TMVA::Types::kFDA, "FDA_GAMT",
"H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=GA:Converger=MINUIT:ErrorLevel=1:PrintLevel=-1:FitStrategy=0:!UseImprove:!UseMinos:SetBatch:Cycles=1:PopSize=5:Steps=5:Trim" );
if (Use["FDA_MCMT"])
factory->BookMethod( dataloader, TMVA::Types::kFDA, "FDA_MCMT",
"H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):FitMethod=MC:Converger=MINUIT:ErrorLevel=1:PrintLevel=-1:FitStrategy=0:!UseImprove:!UseMinos:SetBatch:SampleSize=20" );
// TMVA ANN: MLP (recommended ANN) -- all ANNs in TMVA are Multilayer Perceptrons
if (Use["MLP"])
factory->BookMethod( dataloader, TMVA::Types::kMLP, "MLP", "H:!V:NeuronType=tanh:VarTransform=N:NCycles=600:HiddenLayers=N+5:TestRate=5:!UseRegulator" );
if (Use["MLPBFGS"])
factory->BookMethod( dataloader, TMVA::Types::kMLP, "MLPBFGS", "H:!V:NeuronType=tanh:VarTransform=N:NCycles=600:HiddenLayers=N+5:TestRate=5:TrainingMethod=BFGS:!UseRegulator" );
if (Use["MLPBNN"])
factory->BookMethod( dataloader, TMVA::Types::kMLP, "MLPBNN", "H:!V:NeuronType=tanh:VarTransform=N:NCycles=60:HiddenLayers=N+5:TestRate=5:TrainingMethod=BFGS:UseRegulator" ); // BFGS training with bayesian regulators
// Multi-architecture DNN implementation.
if (Use["DNN_CPU"] or Use["DNN_GPU"]) {
// General layout.
TString layoutString ("Layout=TANH|128,TANH|128,TANH|128,LINEAR");
// Define the training strategy. One could define multiple strategy strings separated by the "|" delimiter.
TString trainingStrategyString = ("TrainingStrategy=LearningRate=1e-2,Momentum=0.9,"
"ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,"
"WeightDecay=1e-4,Regularization=None,"
"DropConfig=0.0+0.5+0.5+0.5");
// General Options.
TString dnnOptions ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:"
"WeightInitialization=XAVIERUNIFORM");
dnnOptions.Append (":"); dnnOptions.Append (layoutString);
dnnOptions.Append (":"); dnnOptions.Append (trainingStrategyString);
// Cuda implementation.
if (Use["DNN_GPU"]) {
TString gpuOptions = dnnOptions + ":Architecture=GPU";
factory->BookMethod(dataloader, TMVA::Types::kDL, "DNN_GPU", gpuOptions);
}
// Multi-core CPU implementation.
if (Use["DNN_CPU"]) {
TString cpuOptions = dnnOptions + ":Architecture=CPU";
factory->BookMethod(dataloader, TMVA::Types::kDL, "DNN_CPU", cpuOptions);
}
}
// CF(Clermont-Ferrand)ANN
if (Use["CFMlpANN"])
factory->BookMethod( dataloader, TMVA::Types::kCFMlpANN, "CFMlpANN", "!H:!V:NCycles=200:HiddenLayers=N+1,N" ); // n_cycles:#nodes:#nodes:...
// Tmlp(Root)ANN
if (Use["TMlpANN"])
factory->BookMethod( dataloader, TMVA::Types::kTMlpANN, "TMlpANN", "!H:!V:NCycles=200:HiddenLayers=N+1,N:LearningMethod=BFGS:ValidationFraction=0.3" ); // n_cycles:#nodes:#nodes:...
// Support Vector Machine
if (Use["SVM"])
factory->BookMethod( dataloader, TMVA::Types::kSVM, "SVM", "Gamma=0.25:Tol=0.001:VarTransform=Norm" );
// Boosted Decision Trees
if (Use["BDTG"]) // Gradient Boost
factory->BookMethod( dataloader, TMVA::Types::kBDT, "BDTG",
"!H:!V:NTrees=1000:MinNodeSize=2.5%:BoostType=Grad:Shrinkage=0.10:UseBaggedBoost:BaggedSampleFraction=0.5:nCuts=20:MaxDepth=2" );
if (Use["BDT"]) // Adaptive Boost
factory->BookMethod( dataloader, TMVA::Types::kBDT, "BDT",
"!H:!V:NTrees=850:MinNodeSize=2.5%:MaxDepth=3:BoostType=AdaBoost:AdaBoostBeta=0.5:UseBaggedBoost:BaggedSampleFraction=0.5:SeparationType=GiniIndex:nCuts=20" );
if (Use["BDTB"]) // Bagging
factory->BookMethod( dataloader, TMVA::Types::kBDT, "BDTB",
"!H:!V:NTrees=400:BoostType=Bagging:SeparationType=GiniIndex:nCuts=20" );
if (Use["BDTD"]) // Decorrelation + Adaptive Boost
factory->BookMethod( dataloader, TMVA::Types::kBDT, "BDTD",
"!H:!V:NTrees=400:MinNodeSize=5%:MaxDepth=3:BoostType=AdaBoost:SeparationType=GiniIndex:nCuts=20:VarTransform=Decorrelate" );
if (Use["BDTF"]) // Allow Using Fisher discriminant in node splitting for (strong) linearly correlated variables
factory->BookMethod( dataloader, TMVA::Types::kBDT, "BDTF",
"!H:!V:NTrees=50:MinNodeSize=2.5%:UseFisherCuts:MaxDepth=3:BoostType=AdaBoost:AdaBoostBeta=0.5:SeparationType=GiniIndex:nCuts=20" );
// RuleFit -- TMVA implementation of Friedman's method
if (Use["RuleFit"])
factory->BookMethod( dataloader, TMVA::Types::kRuleFit, "RuleFit",
"H:!V:RuleFitModule=RFTMVA:Model=ModRuleLinear:MinImp=0.001:RuleMinDist=0.001:NTrees=20:fEventsMin=0.01:fEventsMax=0.5:GDTau=-1.0:GDTauPrec=0.01:GDStep=0.01:GDNSteps=10000:GDErrScale=1.02" );
// For an example of the category classifier usage, see: TMVAClassificationCategory
//
// --------------------------------------------------------------------------------------------------
// Now you can optimize the setting (configuration) of the MVAs using the set of training events
// STILL EXPERIMENTAL and only implemented for BDTs!
//
// factory->OptimizeAllMethods("SigEffAtBkg0.01","Scan");
// factory->OptimizeAllMethods("ROCIntegral","FitGA");
//
// --------------------------------------------------------------------------------------------------
// Now you can tell the factory to train, test, and evaluate the MVAs
//
// Train MVAs using the set of training events
factory->TrainAllMethods();
// Evaluate all MVAs using the set of test events
factory->TestAllMethods();
// Evaluate and compare performance of all configured MVAs
factory->EvaluateAllMethods();
// --------------------------------------------------------------
// Save the output
outputFile->Write();
std::cout << "==> Wrote root file: " << outputFile->GetName() << std::endl;
std::cout << "==> TMVAClassification is done!" << std::endl;
// Launch the GUI for the root macros
if (!gROOT->IsBatch()) TMVA::TMVAGui( outfileName );
return 0;
}
int main( int argc, char** argv )
{
// Select methods (don't look at this code - not of interest)
TString methodList;
for (int i=1; i<argc; i++) {
TString regMethod(argv[i]);
if(regMethod=="-b" || regMethod=="--batch") continue;
if (!methodList.IsNull()) methodList += TString(",");
methodList += regMethod;
}
return TMVAClassification(methodList);
}
Author
Andreas Hoecker

Definition in file TMVAClassification.C.