The input data is a toy Monte Carlo sample consisting of four Gaussian-distributed and linearly correlated input variables. The methods to be used can be switched on and off by means of booleans, or selected via the prompt command, for example:
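For instance, to run only the Fisher and Likelihood classifiers (an illustrative selection of methods):

    root -l ./TMVAClassification.C\(\"Fisher,Likelihood\"\)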
(note that the backslashes are mandatory). If no method is given, a default set of classifiers is used. The output file "TMVAC.root" can be analysed with dedicated macros (simply run: root -l <macro.C>), which can be conveniently invoked through a GUI that appears at the end of the run of this macro. You can also launch the GUI in another ROOT session via the command:
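For example, assuming the output file name used in this run:

    root -l -e 'TMVA::TMVAGui("TMVAC.root")'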
 
 
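For orientation, the following minimal sketch illustrates the booking pattern such a macro follows (boolean switches plus Factory::BookMethod calls). The option strings and the BookingSketch name are illustrative placeholders, not necessarily those used to produce the log below.

    // Minimal sketch (illustrative option strings) of how methods are toggled
    // and booked; the variable/tree setup for the actual input file is omitted.
    #include <map>
    #include <string>
    #include "TFile.h"
    #include "TMVA/Factory.h"
    #include "TMVA/DataLoader.h"
    #include "TMVA/Types.h"

    void BookingSketch()
    {
       std::map<std::string, int> Use;   // booleans switching classifiers on/off
       Use["Likelihood"] = 1;
       Use["BDT"]        = 1;

       TFile *outputFile = TFile::Open("TMVAC.root", "RECREATE");
       auto *factory    = new TMVA::Factory("TMVAClassification", outputFile,
          "!V:!Silent:Color:DrawProgressBar:AnalysisType=Classification");
       auto *dataloader = new TMVA::DataLoader("dataset");

       // ... declare input variables and add signal/background trees here ...

       if (Use["Likelihood"])
          factory->BookMethod(dataloader, TMVA::Types::kLikelihood, "Likelihood",
                              "H:!V:TransformOutput:PDFInterpol=Spline2");
       if (Use["BDT"])
          factory->BookMethod(dataloader, TMVA::Types::kBDT, "BDT",
                              "!H:!V:NTrees=850:MinNodeSize=2.5%:MaxDepth=3:BoostType=AdaBoost");

       factory->TrainAllMethods();
       factory->TestAllMethods();
       factory->EvaluateAllMethods();

       outputFile->Close();
    }

The console output below was produced by the full tutorial macro, which books many more methods than this sketch.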
==> Start TMVAClassification
--- TMVAClassification       : Using input file: /github/home/ROOT-CI/build/tutorials/machine_learning/data/tmva_class_example.root
DataSetInfo              : [dataset] : Added class "Signal"
                         : Add Tree TreeS of type Signal with 6000 events
DataSetInfo              : [dataset] : Added class "Background"
                         : Add Tree TreeB of type Background with 6000 events
Factory                  : Booking method: Cuts
                         : 
                         : Use optimization method: "Monte Carlo"
                         : Use efficiency computation method: "Event Selection"
                         : Use "FSmart" cuts for variable: 'myvar1'
                         : Use "FSmart" cuts for variable: 'myvar2'
                         : Use "FSmart" cuts for variable: 'var3'
                         : Use "FSmart" cuts for variable: 'var4'
Factory                  : Booking method: CutsD
                         : 
CutsD                    : [dataset] : Create Transformation "Decorrelate" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
                         : Use optimization method: "Monte Carlo"
                         : Use efficiency computation method: "Event Selection"
                         : Use "FSmart" cuts for variable: 'myvar1'
                         : Use "FSmart" cuts for variable: 'myvar2'
                         : Use "FSmart" cuts for variable: 'var3'
                         : Use "FSmart" cuts for variable: 'var4'
Factory                  : Booking method: Likelihood
                         : 
Factory                  : Booking method: LikelihoodPCA
                         : 
LikelihoodPCA            : [dataset] : Create Transformation "PCA" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
Factory                  : Booking method: PDERS
                         : 
Factory                  : Booking method: PDEFoam
                         : 
Factory                  : Booking method: KNN
                         : 
Factory                  : Booking method: LD
                         : 
                         : Rebuilding Dataset dataset
                         : Building event vectors for type 2 Signal
                         : Dataset[dataset] :  create input formulas for tree TreeS
                         : Building event vectors for type 2 Background
                         : Dataset[dataset] :  create input formulas for tree TreeB
DataSetFactory           : [dataset] : Number of events in input trees
                         : 
                         : 
                         : Number of training and testing events
                         : ---------------------------------------------------------------------------
                         : Signal     -- training events            : 1000
                         : Signal     -- testing events             : 5000
                         : Signal     -- training and testing events: 6000
                         : Background -- training events            : 1000
                         : Background -- testing events             : 5000
                         : Background -- training and testing events: 6000
                         : 
DataSetInfo              : Correlation matrix (Signal):
                         : ----------------------------------------
                         :           myvar1  myvar2    var3    var4
                         :  myvar1:  +1.000  -0.007  +0.754  +0.922
                         :  myvar2:  -0.007  +1.000  -0.065  +0.083
                         :    var3:  +0.754  -0.065  +1.000  +0.836
                         :    var4:  +0.922  +0.083  +0.836  +1.000
                         : ----------------------------------------
DataSetInfo              : Correlation matrix (Background):
                         : ----------------------------------------
                         :           myvar1  myvar2    var3    var4
                         :  myvar1:  +1.000  -0.073  +0.784  +0.925
                         :  myvar2:  -0.073  +1.000  -0.142  +0.019
                         :    var3:  +0.784  -0.142  +1.000  +0.844
                         :    var4:  +0.925  +0.019  +0.844  +1.000
                         : ----------------------------------------
DataSetFactory           : [dataset] :  
                         : 
Factory                  : Booking method: FDA_GA
                         : 
                         : Create parameter interval for parameter 0 : [-1,1]
                         : Create parameter interval for parameter 1 : [-10,10]
                         : Create parameter interval for parameter 2 : [-10,10]
                         : Create parameter interval for parameter 3 : [-10,10]
                         : Create parameter interval for parameter 4 : [-10,10]
                         : User-defined formula string       : "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
                         : TFormula-compatible formula string: "[0]+[1]*[5]+[2]*[6]+[3]*[7]+[4]*[8]"
Factory                  : Booking method: MLPBNN
                         : 
MLPBNN                   : [dataset] : Create Transformation "N" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
MLPBNN                   : Building Network. 
                         : Initializing weights
Factory                  : Booking method: DNN_CPU
                         : 
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:WeightInitialization=XAVIERUNIFORM:Layout=TANH|128,TANH|128,TANH|128,LINEAR:TrainingStrategy=LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,DropConfig=0.0+0.5+0.5+0.5:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     <none>
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:WeightInitialization=XAVIERUNIFORM:Layout=TANH|128,TANH|128,TANH|128,LINEAR:TrainingStrategy=LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,DropConfig=0.0+0.5+0.5+0.5:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "N" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     Layout: "TANH|128,TANH|128,TANH|128,LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIERUNIFORM" [Weight initialization strategy]
                         :     Architecture: "CPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,DropConfig=0.0+0.5+0.5+0.5" [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     InputLayout: "0|0|0" [The Layout of the input]
                         :     BatchLayout: "0|0|0" [The Layout of the batch]
                         :     RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
                         :     ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
DNN_CPU                  : [dataset] : Create Transformation "N" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
                         : Will now use the CPU architecture with BLAS and IMT support !
Factory                  : Booking method: SVM
                         : 
SVM                      : [dataset] : Create Transformation "Norm" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
Factory                  : Booking method: BDT
                         : 
Factory                  : Booking method: RuleFit
                         : 
Factory                  : Train all methods
Factory                  : [dataset] : Create Transformation "I" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
Factory                  : [dataset] : Create Transformation "D" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
Factory                  : [dataset] : Create Transformation "P" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
Factory                  : [dataset] : Create Transformation "G" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
Factory                  : [dataset] : Create Transformation "D" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'myvar1' <---> Output : variable 'myvar1'
                         : Input : variable 'myvar2' <---> Output : variable 'myvar2'
                         : Input : variable 'var3' <---> Output : variable 'var3'
                         : Input : variable 'var4' <---> Output : variable 'var4'
TFHandler_Factory        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:  -0.082743     1.7823   [    -9.2312     7.0719 ]
                         :   myvar2:  -0.056644     1.0667   [    -3.7067     4.0291 ]
                         :     var3:  -0.045349     1.0930   [    -5.1570     4.1507 ]
                         :     var4:    0.10542     1.2849   [    -6.3160     4.5211 ]
                         : -----------------------------------------------------------
                         : Preparing the Decorrelation transformation...
TFHandler_Factory        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:   -0.16999     1.0000   [    -4.4163     3.9983 ]
                         :   myvar2:  -0.080682     1.0000   [    -3.4441     3.7507 ]
                         :     var3:   -0.14363     1.0000   [    -3.7799     3.6146 ]
                         :     var4:    0.32786     1.0000   [    -3.3861     3.3152 ]
                         : -----------------------------------------------------------
                         : Preparing the Principle Component (PCA) transformation...
TFHandler_Factory        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:  -0.018073     2.3492   [    -12.291     8.9889 ]
                         :   myvar2:   0.035051     1.0778   [    -4.0607     3.7534 ]
                         :     var3:  0.0032257    0.59616   [    -2.0543     1.9480 ]
                         :     var4: -0.0086473    0.35251   [    -1.1198     1.0790 ]
                         : -----------------------------------------------------------
                         : Preparing the Gaussian transformation...
                         : Preparing the Decorrelation transformation...
TFHandler_Factory        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:  0.0014078     1.0000   [    -3.3797     8.1193 ]
                         :   myvar2:   0.037752     1.0000   [    -3.1738     5.6933 ]
                         :     var3:   0.031566     1.0000   [    -3.2994     7.5070 ]
                         :     var4:  -0.034671     1.0000   [    -3.2568     8.8288 ]
                         : -----------------------------------------------------------
                         : Ranking input variables (method unspecific)...
IdTransformation         : Ranking result (top variable is best ranked)
                         : -------------------------------------
                         : Rank : Variable     : Separation
                         : -------------------------------------
                         :    1 : Variable 4   : 2.867e-01
                         :    2 : Variable 3   : 1.746e-01
                         :    3 : myvar1       : 1.144e-01
                         :    4 : Expression 2 : 3.020e-02
                         : -------------------------------------
Factory                  : Train method: Cuts for Classification
                         : 
FitterBase               : <MCFitter> Sampling, please be patient ...
                         : Elapsed time: 1.34 sec                           
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.1
                         : Corresponding background efficiency       : 0.00720161
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -4.57627 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    1.15847
                         : Cut[ 2]:   -3.33777 <   var3 <=      1e+30
                         : Cut[ 3]:    2.07512 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.2
                         : Corresponding background efficiency       : 0.0223329
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -4.62984 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    1.09417
                         : Cut[ 2]:   -3.55402 <   var3 <=      1e+30
                         : Cut[ 3]:    1.56727 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.3
                         : Corresponding background efficiency       : 0.0430248
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -8.53038 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    2.76914
                         : Cut[ 2]:   -2.59305 <   var3 <=      1e+30
                         : Cut[ 3]:    1.38904 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.4
                         : Corresponding background efficiency       : 0.0734191
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -1.15162 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    3.31021
                         : Cut[ 2]:   -2.40236 <   var3 <=      1e+30
                         : Cut[ 3]:    1.06042 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.5
                         : Corresponding background efficiency       : 0.116038
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -4.30702 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    4.05808
                         : Cut[ 2]:   -1.86509 <   var3 <=      1e+30
                         : Cut[ 3]:   0.791828 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.6
                         : Corresponding background efficiency       : 0.178863
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -9.15649 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    2.72375
                         : Cut[ 2]:   -4.02004 <   var3 <=      1e+30
                         : Cut[ 3]:   0.491303 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.7
                         : Corresponding background efficiency       : 0.232452
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -7.50811 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    2.79724
                         : Cut[ 2]:   -4.57539 <   var3 <=      1e+30
                         : Cut[ 3]:   0.204151 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.8
                         : Corresponding background efficiency       : 0.314679
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:   -4.69509 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    4.04459
                         : Cut[ 2]:   -2.16556 <   var3 <=      1e+30
                         : Cut[ 3]: -0.0798275 <   var4 <=      1e+30
                         : ------------------------------------------
                         : ------------------------------------------
Cuts                     : Cut values for requested signal efficiency: 0.9
                         : Corresponding background efficiency       : 0.507037
                         : Transformation applied to input variables : None
                         : ------------------------------------------
                         : Cut[ 0]:    -4.1181 < myvar1 <=      1e+30
                         : Cut[ 1]:     -1e+30 < myvar2 <=    3.15831
                         : Cut[ 2]:   -2.09059 <   var3 <=      1e+30
                         : Cut[ 3]:   -0.61029 <   var4 <=      1e+30
                         : ------------------------------------------
                         : Elapsed time for training with 2000 events: 1.34 sec         
Cuts                     : [dataset] : Evaluation of Cuts on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.000229 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_Cuts.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_Cuts.class.C
                         : TMVAC.root:/dataset/Method_Cuts/Cuts
Factory                  : Training finished
                         : 
Factory                  : Train method: CutsD for Classification
                         : 
                         : Preparing the Decorrelation transformation...
TFHandler_CutsD          : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:   -0.16999     1.0000   [    -4.4163     3.9983 ]
                         :   myvar2:  -0.080682     1.0000   [    -3.4441     3.7507 ]
                         :     var3:   -0.14363     1.0000   [    -3.7799     3.6146 ]
                         :     var4:    0.32786     1.0000   [    -3.3861     3.3152 ]
                         : -----------------------------------------------------------
TFHandler_CutsD          : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:   -0.16999     1.0000   [    -4.4163     3.9983 ]
                         :   myvar2:  -0.080682     1.0000   [    -3.4441     3.7507 ]
                         :     var3:   -0.14363     1.0000   [    -3.7799     3.6146 ]
                         :     var4:    0.32786     1.0000   [    -3.3861     3.3152 ]
                         : -----------------------------------------------------------
FitterBase               : <MCFitter> Sampling, please be patient ...
                         : Elapsed time: 1.16 sec                           
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.1
                         : Corresponding background efficiency       : 0
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    1.67705
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=  0.0771511
                         : Cut[ 2]:   -2.49817 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:    1.59542 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.2
                         : Corresponding background efficiency       : 0.00318151
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    2.25342
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=   0.817228
                         : Cut[ 2]:   -3.70325 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:    1.48444 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.3
                         : Corresponding background efficiency       : 0.00804102
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    1.96798
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=    3.75835
                         : Cut[ 2]:  -0.545008 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:     1.1435 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.4
                         : Corresponding background efficiency       : 0.017039
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    1.88438
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=    1.64561
                         : Cut[ 2]:   -1.42115 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:    1.04216 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.5
                         : Corresponding background efficiency       : 0.029664
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    3.98405
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=    2.83582
                         : Cut[ 2]:   -1.48704 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:   0.922253 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.6
                         : Corresponding background efficiency       : 0.0566836
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    2.63541
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=     2.4807
                         : Cut[ 2]:   -1.54361 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:   0.717859 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.7
                         : Corresponding background efficiency       : 0.0968001
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    3.56177
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=    2.64808
                         : Cut[ 2]:   -2.79353 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:   0.619348 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.8
                         : Corresponding background efficiency       : 0.155161
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    3.97175
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=    3.36992
                         : Cut[ 2]:   -3.81224 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:   0.403725 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : ------------------------------------------------------------------------------------------------------------------------
CutsD                    : Cut values for requested signal efficiency: 0.9
                         : Corresponding background efficiency       : 0.31419
                         : Transformation applied to input variables : "Deco"
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Cut[ 0]:     -1e+30 <  +     1.1113*[myvar1] +   0.054934*[myvar2] -    0.20558*[var3] -    0.79915*[var4] <=    3.35782
                         : Cut[ 1]:     -1e+30 <  +   0.054934*[myvar1] +    0.95764*[myvar2] +    0.13632*[var3] -    0.14903*[var4] <=    2.86136
                         : Cut[ 2]:   -3.62987 <  -    0.20558*[myvar1] +    0.13632*[myvar2] +      1.715*[var3] -    0.71283*[var4] <=      1e+30
                         : Cut[ 3]:   0.023156 <  -    0.79915*[myvar1] -    0.14903*[myvar2] -    0.71283*[var3] +     2.0962*[var4] <=      1e+30
                         : ------------------------------------------------------------------------------------------------------------------------
                         : Elapsed time for training with 2000 events: 1.16 sec         
CutsD                    : [dataset] : Evaluation of CutsD on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.000816 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_CutsD.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_CutsD.class.C
                         : TMVAC.root:/dataset/Method_Cuts/CutsD
Factory                  : Training finished
                         : 
Factory                  : Train method: Likelihood for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ Likelihood ] :
                         : 
                         : --- Short description:
                         : 
                         : The maximum-likelihood classifier models the data with probability 
                         : density functions (PDF) reproducing the signal and background
                         : distributions of the input variables. Correlations among the 
                         : variables are ignored.
                         : 
                         : --- Performance optimisation:
                         : 
                         : Required for good performance are decorrelated input variables
                         : (PCA transformation via the option "VarTransform=Decorrelate"
                         : may be tried). Irreducible non-linear correlations may be reduced
                         : by precombining strongly correlated input variables, or by simply
                         : removing one of the variables.
                         : 
                         : --- Performance tuning via configuration options:
                         : 
                         : High fidelity PDF estimates are mandatory, i.e., sufficient training 
                         : statistics is required to populate the tails of the distributions
                         : It would be a surprise if the default Spline or KDE kernel parameters
                         : provide a satisfying fit to the data. The user is advised to properly
                         : tune the events per bin and smooth options in the spline cases
                         : individually per variable. If the KDE kernel is used, the adaptive
                         : Gaussian kernel may lead to artefacts, so please always also try
                         : the non-adaptive one.
                         : 
                         : All tuning parameters must be adjusted individually for each input
                         : variable!
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
                         : Filling reference histograms
                         : Building PDF out of reference histograms
                         : Elapsed time for training with 2000 events: 0.00875 sec         
Likelihood               : [dataset] : Evaluation of Likelihood on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.00111 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_Likelihood.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_Likelihood.class.C
                         : TMVAC.root:/dataset/Method_Likelihood/Likelihood
Factory                  : Training finished
                         : 
Factory                  : Train method: LikelihoodPCA for Classification
                         : 
                         : Preparing the Principle Component (PCA) transformation...
TFHandler_LikelihoodPCA  : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:  -0.018073     2.3492   [    -12.291     8.9889 ]
                         :   myvar2:   0.035051     1.0778   [    -4.0607     3.7534 ]
                         :     var3:  0.0032257    0.59616   [    -2.0543     1.9480 ]
                         :     var4: -0.0086473    0.35251   [    -1.1198     1.0790 ]
                         : -----------------------------------------------------------
                         : Filling reference histograms
                         : Building PDF out of reference histograms
                         : Elapsed time for training with 2000 events: 0.00972 sec         
LikelihoodPCA            : [dataset] : Evaluation of LikelihoodPCA on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.00177 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_LikelihoodPCA.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_LikelihoodPCA.class.C
                         : TMVAC.root:/dataset/Method_Likelihood/LikelihoodPCA
Factory                  : Training finished
                         : 
Factory                  : Train method: PDERS for Classification
                         : 
                         : Elapsed time for training with 2000 events: 0.00202 sec         
PDERS                    : [dataset] : Evaluation of PDERS on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.109 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_PDERS.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_PDERS.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: PDEFoam for Classification
                         : 
PDEFoam                  : NormMode=NUMEVENTS chosen. Note that only NormMode=EqualNumEvents ensures that Discriminant values correspond to signal probabilities.
                         : Build up discriminator foam
                         : Elapsed time: 0.132 sec                                 
                         : Elapsed time for training with 2000 events: 0.148 sec         
PDEFoam                  : [dataset] : Evaluation of PDEFoam on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.00776 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_PDEFoam.weights.xml
                         : writing foam DiscrFoam to file
                         : Foams written to file: dataset/weights/TMVAClassification_PDEFoam.weights_foams.root
                         : Creating standalone class: dataset/weights/TMVAClassification_PDEFoam.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: KNN for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ KNN ] :
                         : 
                         : --- Short description:
                         : 
                         : The k-nearest neighbor (k-NN) algorithm is a multi-dimensional classification
                         : and regression algorithm. Similarly to other TMVA algorithms, k-NN uses a set of
                         : training events for which a classification category/regression target is known. 
                         : The k-NN method compares a test event to all training events using a distance 
                         : function, which is an Euclidean distance in a space defined by the input variables. 
                         : The k-NN method, as implemented in TMVA, uses a kd-tree algorithm to perform a
                         : quick search for the k events with shortest distance to the test event. The method
                         : returns a fraction of signal events among the k neighbors. It is recommended
                         : that a histogram which stores the k-NN decision variable is binned with k+1 bins
                         : between 0 and 1.
                         : 
                         : --- Performance tuning via configuration options: 
                         : 
                         : The k-NN method estimates a density of signal and background events in a 
                         : neighborhood around the test event. The method assumes that the density of the 
                         : signal and background events is uniform and constant within the neighborhood. 
                         : k is an adjustable parameter and it determines an average size of the 
                         : neighborhood. Small k values (less than 10) are sensitive to statistical 
                         : fluctuations and large (greater than 100) values might not sufficiently capture  
                         : local differences between events in the training set. The speed of the k-NN
                         : method also increases with larger values of k. 
                         : 
                         : The k-NN method assigns equal weight to all input variables. Different scales 
                         : among the input variables is compensated using ScaleFrac parameter: the input 
                         : variables are scaled so that the widths for central ScaleFrac*100% events are 
                         : equal among all the input variables.
                         : 
                         : --- Additional configuration options: 
                         : 
                         : The method inclues an option to use a Gaussian kernel to smooth out the k-NN
                         : response. The kernel re-weights events using a distance to the test event.
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
KNN                      : <Train> start...
                         : Reading 2000 events
                         : Number of signal events 1000
                         : Number of background events 1000
                         : Creating kd-tree with 2000 events
                         : Computing scale factor for 1d distributions: (ifrac, bottom, top) = (80%, 10%, 90%)
ModulekNN                : Optimizing tree for 4 variables with 2000 values
                         : <Fill> Class 1 has     1000 events
                         : <Fill> Class 2 has     1000 events
                         : Elapsed time for training with 2000 events: 0.00182 sec         
KNN                      : [dataset] : Evaluation of KNN on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0211 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_KNN.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_KNN.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: LD for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ LD ] :
                         : 
                         : --- Short description:
                         : 
                         : Linear discriminants select events by distinguishing the mean 
                         : values of the signal and background distributions in a trans- 
                         : formed variable space where linear correlations are removed.
                         : The LD implementation here is equivalent to the "Fisher" discriminant
                         : for classification, but also provides linear regression.
                         : 
                         :    (More precisely: the "linear discriminator" determines
                         :     an axis in the (correlated) hyperspace of the input 
                         :     variables such that, when projecting the output classes 
                         :     (signal and background) upon this axis, they are pushed 
                         :     as far as possible away from each other, while events
                         :     of a same class are confined in a close vicinity. The  
                         :     linearity property of this classifier is reflected in the 
                         :     metric with which "far apart" and "close vicinity" are 
                         :     determined: the covariance matrix of the discriminating
                         :     variable space.)
                         : 
                         : --- Performance optimisation:
                         : 
                         : Optimal performance for the linear discriminant is obtained for 
                         : linearly correlated Gaussian-distributed variables. Any deviation
                         : from this ideal reduces the achievable separation power. In 
                         : particular, no discrimination at all is achieved for a variable
                         : that has the same sample mean for signal and background, even if 
                         : the shapes of the distributions are very different. Thus, the linear 
                         : discriminant often benefits from a suitable transformation of the 
                         : input variables. For example, if a variable x in [-1,1] has a 
                         : a parabolic signal distributions, and a uniform background
                         : distributions, their mean value is zero in both cases, leading 
                         : to no separation. The simple transformation x -> |x| renders this 
                         : variable powerful for the use in a linear discriminant.
                         : 
                         : --- Performance tuning via configuration options:
                         : 
                         : <None>
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
LD                       : Results for LD coefficients:
                         : -----------------------
                         : Variable:  Coefficient:
                         : -----------------------
                         :   myvar1:       -0.284
                         :   myvar2:       -0.087
                         :     var3:       -0.139
                         :     var4:       +0.665
                         : (offset):       -0.052
                         : -----------------------
                         : Elapsed time for training with 2000 events: 0.000572 sec         
LD                       : [dataset] : Evaluation of LD on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.000229 sec       
                         : <CreateMVAPdfs> Separation from histogram (PDF): 0.517 (0.000)
                         : Dataset[dataset] : Evaluation of LD on training sample
                         : Creating xml weight file: dataset/weights/TMVAClassification_LD.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_LD.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: FDA_GA for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ FDA_GA ] :
                         : 
                         : --- Short description:
                         : 
                         : The function discriminant analysis (FDA) is a classifier suitable 
                         : to solve linear or simple nonlinear discrimination problems.
                         : 
                         : The user provides the desired function with adjustable parameters
                         : via the configuration option string, and FDA fits the parameters to
                         : it, requiring the signal (background) function value to be as close
                         : as possible to 1 (0). Its advantage over the more involved and
                         : automatic nonlinear discriminators is the simplicity and transparency 
                         : of the discrimination expression. A shortcoming is that FDA will
                         : underperform for involved problems with complicated, phase space
                         : dependent nonlinear correlations.
                         : 
                         : Please consult the Users Guide for the format of the formula string
                         : and the allowed parameter ranges:
                         : documentation/tmva/UsersGuide/TMVAUsersGuide.pdf
                         : 
                         : --- Performance optimisation:
                         : 
                         : The FDA performance depends on the complexity and fidelity of the
                         : user-defined discriminator function. As a general rule, it should
                         : be able to reproduce the discrimination power of any linear
                         : discriminant analysis. To reach into the nonlinear domain, it is
                         : useful to inspect the correlation profiles of the input variables,
                         : and add quadratic and higher polynomial terms between variables as
                         : necessary. Comparison with more involved nonlinear classifiers can
                         : be used as a guide.
                         : 
                         : ␛[1m--- Performance tuning via configuration options:␛[0m
                         : 
                         : Depending on the function used, the choice of "FitMethod" is
                         : crucial for getting valuable solutions with FDA. As a guideline it
                         : is recommended to start with "FitMethod=MINUIT". When more complex
                         : functions are used where MINUIT does not converge to reasonable
                         : results, the user should switch to non-gradient FitMethods such
                         : as GeneticAlgorithm (GA) or Monte Carlo (MC). It might prove to be
                         : useful to combine GA (or MC) with MINUIT by setting the option
                         : "Converger=MINUIT". GA (MC) will then set the starting parameters
                         : for MINUIT, so that the strength of GA (MC) in finding global
                         : minima is combined with the efficiency of MINUIT in finding local
                         : minima.
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ␛[1m================================================================␛[0m
                         : 
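The formula fitted below, "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3", is passed to FDA through the booking option string. A minimal sketch of such a booking is shown here; the parameter ranges and GA fit options are illustrative assumptions rather than the macro's exact string, and factory/dataloader stand for the TMVA::Factory and TMVA::DataLoader objects created earlier in the macro:

    // Sketch only: book FDA with a genetic-algorithm fitter
    // (parameter ranges and GA settings are assumptions)
    factory->BookMethod(dataloader, TMVA::Types::kFDA, "FDA_GA",
       "H:!V:Formula=(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3:"
       "ParRanges=(-1,1);(-10,10);(-10,10);(-10,10);(-10,10):"
       "FitMethod=GA:PopSize=100:Cycles=2:Steps=5:Trim=True:SaveBestGen=1");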
FitterBase               : <GeneticFitter> Optimisation, please be patient ... (inaccurate progress timing for GA)
                         : Elapsed time: 0.189 sec                            
FDA_GA                   : Results for parameter fit using "GA" fitter:
                         : -----------------------
                         : Parameter:  Fit result:
                         : -----------------------
                         :    Par(0):    0.360294
                         :    Par(1):           0
                         :    Par(2):           0
                         :    Par(3):           0
                         :    Par(4):    0.390594
                         : -----------------------
                         : Discriminator expression: "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
                         : Value of estimator at minimum: 0.49768
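Substituting the fitted parameters into the discriminator expression, with Par(1)-Par(3) at zero, the discriminant reduces to approximately 0.360 + 0.391*x3, i.e. the GA fit effectively retains only the fourth input (var4).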
                         : Elapsed time for training with 2000 events: 0.201 sec         
FDA_GA                   : [dataset] : Evaluation of FDA_GA on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.000281 sec       
                         : Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_FDA_GA.weights.xml␛[0m
                         : Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_FDA_GA.class.C␛[0m
Factory                  : Training finished
                         : 
Factory                  : Train method: MLPBNN for Classification
                         : 
                         : 
                         : ␛[1m================================================================␛[0m
                         : ␛[1mH e l p   f o r   M V A   m e t h o d   [ MLPBNN ] :␛[0m
                         : 
                         : ␛[1m--- Short description:␛[0m
                         : 
                         : The MLP artificial neural network (ANN) is a traditional feed-
                         : forward multilayer perceptron implementation. The MLP has a user-
                         : defined hidden layer architecture, while the number of input (output)
                         : nodes is determined by the input variables (output classes, i.e., 
                         : signal and one background). 
                         : 
                         : ␛[1m--- Performance optimisation:␛[0m
                         : 
                         : Neural networks are stable and perform well for a large variety of
                         : linear and non-linear classification problems. However, in contrast
                         : to (e.g.) boosted decision trees, the user is advised to remove
                         : input variables that have only little discrimination power.
                         : 
                         : In the tests we have carried out so far, the MLP and ROOT networks
                         : (TMlpANN, interfaced via TMVA) performed equally well, though with
                         : a clear speed advantage for the MLP. The Clermont-Ferrand neural
                         : net (CFMlpANN) exhibited worse classification performance in these
                         : tests, partly due to the slow convergence of its training
                         : (at least 10k training cycles are required to achieve approximately
                         : competitive results).
                         : 
                         : ␛[1mOvertraining: ␛[0monly the TMlpANN performs an explicit separation of the
                         : full training sample into independent training and validation samples.
                         : We have found that in most high-energy physics applications the 
                         : available degrees of freedom (training events) are sufficient to 
                         : constrain the weights of the relatively simple architectures required
                         : to achieve good performance. Hence no overtraining should occur, and 
                         : the use of validation samples would only reduce the available training
                         : information. However, if the performance on the training sample is
                         : found to be significantly better than that obtained with the
                         : independent test sample, caution is needed. The results for these
                         : samples are printed to standard output at the end of each training job.
                         : 
                         : ␛[1m--- Performance tuning via configuration options:␛[0m
                         : 
                         : The hidden layer architecture for all ANNs is defined by the option
                         : "HiddenLayers=N+1,N,...", where the first hidden layer has N+1
                         : neurons, the second has N neurons (and so on), and N is the number
                         : of input variables. Excessive numbers of hidden layers should be avoided,
                         : in favour of more neurons in the first hidden layer.
                         : 
                         : The number of cycles should be above 500. As noted above, if the number
                         : of adjustable weights is small compared to the training sample size,
                         : using a large number of training cycles should not lead to overtraining.
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ␛[1m================================================================␛[0m
                         : 
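The regulator messages and the normalised variable ranges printed below come from the MLPBNN booking. A sketch of such a booking is given here; the option values are assumptions for illustration, not necessarily the exact string used by this macro:

    // Sketch only: Bayesian-regularised MLP (option values are assumptions)
    factory->BookMethod(dataloader, TMVA::Types::kMLP, "MLPBNN",
       "H:!V:NeuronType=tanh:VarTransform=N:NCycles=60:HiddenLayers=N+5:"
       "TestRate=5:TrainingMethod=BFGS:UseRegulator");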
TFHandler_MLPBNN         : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.12229    0.21864   [    -1.0000     1.0000 ]
                         :   myvar2:  -0.056321    0.27578   [    -1.0000     1.0000 ]
                         :     var3:   0.098365    0.23486   [    -1.0000     1.0000 ]
                         :     var4:    0.18509    0.23712   [    -1.0000     1.0000 ]
                         : -----------------------------------------------------------
                         : Training Network
                         : 
                         : Finalizing handling of Regulator terms, trainE=0.757543 testE=0.739106
                         : Done with handling of Regulator terms
                         : Elapsed time for training with 2000 events: 1.2 sec         
MLPBNN                   : [dataset] : Evaluation of MLPBNN on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.00167 sec       
                         : Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_MLPBNN.weights.xml␛[0m
                         : Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_MLPBNN.class.C␛[0m
                         : Write special histos to file: TMVAC.root:/dataset/Method_MLP/MLPBNN
Factory                  : Training finished
                         : 
Factory                  : Train method: DNN_CPU for Classification
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.12229    0.21864   [    -1.0000     1.0000 ]
                         :   myvar2:  -0.056321    0.27578   [    -1.0000     1.0000 ]
                         :     var3:   0.098365    0.23486   [    -1.0000     1.0000 ]
                         :     var4:    0.18509    0.23712   [    -1.0000     1.0000 ]
                         : -----------------------------------------------------------
                         : Start of deep neural network training on CPU using MT,  nthreads = 1
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.12229    0.21864   [    -1.0000     1.0000 ]
                         :   myvar2:  -0.056321    0.27578   [    -1.0000     1.0000 ]
                         :     var3:   0.098365    0.23486   [    -1.0000     1.0000 ]
                         :     var4:    0.18509    0.23712   [    -1.0000     1.0000 ]
                         : -----------------------------------------------------------
                         : *****   Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 4  Input = ( 1, 1, 4 )  Batch size = 100  Loss function = C
   Layer 0   DENSE Layer:   ( Input =     4 , Width =   128 )  Output = (  1 ,   100 ,   128 )   Activation Function = Tanh
   Layer 1   DENSE Layer:   ( Input =   128 , Width =   128 )  Output = (  1 ,   100 ,   128 )   Activation Function = Tanh    Dropout prob. = 0.5
   Layer 2   DENSE Layer:   ( Input =   128 , Width =   128 )  Output = (  1 ,   100 ,   128 )   Activation Function = Tanh    Dropout prob. = 0.5
   Layer 3   DENSE Layer:   ( Input =   128 , Width =     1 )  Output = (  1 ,   100 ,     1 )   Activation Function = Identity   Dropout prob. = 0.5
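A network like the one printed above (three tanh layers of width 128, a linear output, batch size 100, cross-entropy loss) is described to the kDL method through its Layout and TrainingStrategy options. The sketch below is a plausible configuration consistent with this printout; the exact strings used by the macro are not shown in the log, so the values (dropout, learning rate, convergence criterion, etc.) are assumptions:

    // Sketch only: a DNN_CPU booking consistent with the architecture above
    // (all option values are assumptions, not the macro's exact settings)
    TString layout("Layout=TANH|128,TANH|128,TANH|128,LINEAR");
    TString training("LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,"
                     "BatchSize=100,MaxEpochs=100,DropConfig=0.0+0.5+0.5+0.5,Optimizer=ADAM");
    TString dnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:"
                       "WeightInitialization=XAVIER:Architecture=CPU:");
    dnnOptions += layout + ":TrainingStrategy=" + training;
    factory->BookMethod(dataloader, TMVA::Types::kDL, "DNN_CPU", dnnOptions);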
                         : Using 1600 events for training and 400 for testing
                         : Compute initial loss  on the validation data 
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.01 regularization 0 minimum error = 0.676067
                         : --------------------------------------------------------------
                         :      Epoch |   Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s Conv. Steps
                         : --------------------------------------------------------------
                         :    Start epoch iteration ...
                         :          1 Minimum Test error found - save the configuration 
                         :          1 |     0.571925    0.451727    0.108627  0.00994584     16213.9           0
                         :          2 |     0.456749    0.452872    0.108396  0.00908749     16111.5           1
                         :          3 |     0.415727      0.4554     0.11147  0.00900988     15615.8           2
                         :          4 |     0.435883    0.454354    0.108242  0.00924581     16162.2           3
                         :          5 |     0.403288    0.477806    0.107676   0.0090654     16225.4           4
                         :          6 Minimum Test error found - save the configuration 
                         :          6 |     0.430997    0.437609    0.113258  0.00916101     15370.3           0
                         :          7 |     0.418977    0.472289    0.108444  0.00898713     16087.3           1
                         :          8 |     0.399623    0.485865    0.108202  0.00915254     16153.6           2
                         :          9 |     0.413762    0.450915    0.108912  0.00928517     16059.9           3
                         :         10 |     0.395192    0.460594    0.109441  0.00955649     16018.4           4
                         :         11 |     0.405949    0.473195    0.117498  0.00929996     14787.7           5
                         :         12 |     0.411948    0.455692    0.111292  0.00924305     15678.8           6
                         :         13 |     0.431238    0.473217    0.125522  0.00939127     13777.6           7
                         :         14 |      0.43037    0.530848    0.110003  0.00939084     15902.6           8
                         :         15 |     0.414437    0.467179    0.116597   0.0103059     15052.9           9
                         :         16 |     0.406156    0.452559    0.130279   0.0144609     13814.8          10
                         :         17 |     0.408978    0.448743    0.112145  0.00979797     15633.1          11
                         :         18 |     0.398412    0.515423    0.111673  0.00972444     15694.3          12
                         :         19 |     0.421338    0.474798    0.109389  0.00927652     15982.1          13
                         :         20 |       0.3981    0.458031    0.110171  0.00972219     15928.4          14
                         :         21 |     0.418807    0.486368    0.118606  0.00928523     14635.9          15
                         :         22 Minimum Test error found - save the configuration 
                         :         22 |     0.426567    0.436171     0.12961   0.0125654       13670           0
                         :         23 |     0.416648    0.483872    0.120552  0.00939039     14393.5           1
                         :         24 |     0.441061    0.476107    0.130138   0.0116324     13501.5           2
                         :         25 |     0.423217    0.473008    0.118762  0.00931601     14619.1           3
                         :         26 |     0.408992    0.478979    0.128589   0.0106972     13571.8           4
                         :         27 |     0.409783    0.458292    0.121914  0.00981175     14272.7           5
                         :         28 |     0.403687    0.462998    0.119218  0.00937119     14565.8           6
                         :         29 |     0.397536    0.481976    0.109271  0.00913523     15978.3           7
                         :         30 |     0.400821    0.497879    0.107196  0.00915074     16319.1           8
                         :         31 |     0.409354    0.442023    0.107566  0.00915861     16258.9           9
                         :         32 |      0.38954     0.46111      0.1097  0.00959155     15982.6          10
                         :         33 |     0.437993    0.449204    0.110275  0.00986377     15934.5          11
                         :         34 |     0.409257    0.483009    0.109606  0.00946529     15977.5          12
                         :         35 |     0.426534    0.450649    0.120637   0.0102937     14500.1          13
                         :         36 |     0.422762    0.472461    0.119063  0.00937931     14587.5          14
                         :         37 |      0.41017    0.447305    0.123559  0.00916822     13987.2          15
                         :         38 |     0.414412    0.507229    0.109386  0.00915968     15963.9          16
                         :         39 Minimum Test error found - save the configuration 
                         :         39 |     0.407657    0.435365    0.109298  0.00936647       16011           0
                         :         40 |      0.42062    0.457413    0.108312  0.00954551     16199.9           1
                         :         41 |     0.404443    0.475785    0.109142  0.00918711     16007.2           2
                         :         42 |     0.400818    0.458996    0.109019  0.00932811     16049.6           3
                         :         43 |     0.401223    0.480737    0.108748  0.00913571     16062.3           4
                         :         44 |     0.403895     0.48012    0.108926  0.00922152     16047.4           5
                         :         45 |     0.402771    0.440525    0.108872   0.0094235     16088.7           6
                         :         46 |      0.41018    0.443572    0.108724  0.00924077     16083.1           7
                         :         47 |     0.396009    0.455095    0.107582  0.00924062     16269.8           8
                         :         48 |     0.400088    0.506409    0.109116  0.00935922       16039           9
                         :         49 |     0.414603    0.511769    0.114981  0.00947529       15165          10
                         :         50 |     0.422663    0.558927     0.11169  0.00962169     15675.8          11
                         :         51 |     0.428603    0.489132    0.125547   0.0135124     14281.3          12
                         :         52 |     0.397312     0.47503    0.112812  0.00925621     15450.6          13
                         :         53 |     0.418751    0.455082    0.108509  0.00947023     16155.3          14
                         :         54 |     0.389345    0.466349    0.111911   0.0102083     15732.1          15
                         :         55 |     0.396698    0.446527    0.120388   0.0142511     15074.9          16
                         :         56 |     0.405108    0.492296    0.117178  0.00976093     14895.2          17
                         :         57 |     0.414874    0.522607    0.115401  0.00994309     15171.9          18
                         :         58 |      0.41531    0.467448    0.116141   0.0101702     15098.5          19
                         :         59 |     0.407709    0.459436    0.116353   0.0102538     15080.2          20
                         :         60 |      0.40926    0.464359    0.117636  0.00989595     14850.5          21
                         : 
                         : Elapsed time for training with 2000 events: 6.85 sec         
                         : Evaluate deep neural network on CPU using batches with size = 100
                         : 
DNN_CPU                  : [dataset] : Evaluation of DNN_CPU on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0485 sec       
                         : Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_DNN_CPU.weights.xml␛[0m
                         : Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_DNN_CPU.class.C␛[0m
Factory                  : Training finished
                         : 
Factory                  : Train method: SVM for Classification
                         : 
TFHandler_SVM            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.12229    0.21864   [    -1.0000     1.0000 ]
                         :   myvar2:  -0.056321    0.27578   [    -1.0000     1.0000 ]
                         :     var3:   0.098365    0.23486   [    -1.0000     1.0000 ]
                         :     var4:    0.18509    0.23712   [    -1.0000     1.0000 ]
                         : -----------------------------------------------------------
                         : Building SVM Working Set...with 2000 event instances
                         : Elapsed time for Working Set build: 0.0344 sec
                         : Sorry, no computing time forecast available for SVM, please wait ...
                         : Elapsed time: 0.173 sec                                          
                         : Elapsed time for training with 2000 events: 0.21 sec         
SVM                      : [dataset] : Evaluation of SVM on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0252 sec       
                         : Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_SVM.weights.xml␛[0m
                         : Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_SVM.class.C␛[0m
Factory                  : Training finished
                         : 
Factory                  : Train method: BDT for Classification
                         : 
BDT                      : #events: (reweighted) sig: 1000 bkg: 1000
                         : #events: (unweighted) sig: 1000 bkg: 1000
                         : Training 850 Decision Trees ... patience please
                         : Elapsed time for training with 2000 events: 0.335 sec         
BDT                      : [dataset] : Evaluation of BDT on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.0788 sec       
                         : Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_BDT.weights.xml␛[0m
                         : Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_BDT.class.C␛[0m
                         : TMVAC.root:/dataset/Method_BDT/BDT
Factory                  : Training finished
                         : 
Factory                  : Train method: RuleFit for Classification
                         : 
                         : 
                         : ␛[1m================================================================␛[0m
                         : ␛[1mH e l p   f o r   M V A   m e t h o d   [ RuleFit ] :␛[0m
                         : 
                         : ␛[1m--- Short description:␛[0m
                         : 
                         : This method uses a collection of so-called rules to create a
                         : discriminating scoring function. Each rule consists of a series
                         : of cuts in parameter space. The ensemble of rules is created
                         : from a forest of decision trees, trained using the training data.
                         : Each node (apart from the root) corresponds to one rule.
                         : The scoring function is then obtained by linearly combining
                         : the rules. A fitting procedure is applied to find the optimum
                         : set of coefficients. The goal is to find a model with few rules
                         : but with a strong discriminating power.
                         : 
                         : ␛[1m--- Performance optimisation:␛[0m
                         : 
                         : There are two important considerations to make when optimising:
                         : 
                         :   1. Topology of the decision tree forest
                         :   2. Fitting of the coefficients
                         : 
                         : The maximum complexity of the rules is defined by the size of
                         : the trees. Large trees will yield many complex rules and capture
                         : higher order correlations. On the other hand, small trees will
                         : lead to a smaller ensemble with simple rules, only capable of
                         : modeling simple structures.
                         : Several parameters exist for controlling the complexity of the
                         : rule ensemble.
                         : 
                         : The fitting procedure searches for a minimum using a gradient
                         : directed path. Apart from step size and number of steps, the
                         : evolution of the path is defined by a cut-off parameter, tau.
                         : This parameter is unknown and depends on the training data.
                         : A large value will tend to give large weights to a few rules.
                         : Similarly, a small value will lead to a large set of rules
                         : with similar weights.
                         : 
                         : A final point is the model used: rules and/or linear terms.
                         : For a given training sample, the result may improve by adding
                         : linear terms. If the best performance is obtained using only linear
                         : terms, it is very likely that the Fisher discriminant would be
                         : a better choice. Ideally the fitting procedure should be able to
                         : make this choice by giving appropriate weights to either type of term.
                         : 
                         : ␛[1m--- Performance tuning via configuration options:␛[0m
                         : 
                         : I.  TUNING OF RULE ENSEMBLE:
                         : 
                         :    ␛[1mForestType  ␛[0m: It is recommended to use the default "AdaBoost".
                         :    ␛[1mnTrees      ␛[0m: More trees lead to more rules but also slower
                         :                  performance. With too few trees the risk is
                         :                  that the rule ensemble becomes too simple.
                         :    ␛[1mfEventsMin  ␛[0m
                         :    ␛[1mfEventsMax  ␛[0m: With a lower min, more large trees will be generated,
                         :                  leading to more complex rules.
                         :                  With a higher max, more small trees will be
                         :                  generated, leading to simpler rules.
                         :                  By changing this range, the average complexity
                         :                  of the rule ensemble can be controlled.
                         :    ␛[1mRuleMinDist ␛[0m: By increasing the minimum distance between
                         :                  rules, fewer and more diverse rules will remain.
                         :                  Initially it is a good idea to keep this small
                         :                  or zero and let the fitting do the selection of
                         :                  rules. In order to reduce the ensemble size,
                         :                  the value can then be increased.
                         : 
                         : II. TUNING OF THE FITTING:
                         : 
                         :    ␛[1mGDPathEveFrac ␛[0m: fraction of events in path evaluation
                         :                  Increasing this fraction will improve the path
                         :                  finding. However, too high a value leaves few
                         :                  unique events available for error estimation.
                         :                  It is recommended to use the default = 0.5.
                         :    ␛[1mGDTau         ␛[0m: cutoff parameter tau
                         :                  By default this value is set to -1.0.
                         :                  This means that the cut off parameter is
                         :                  automatically estimated. In most cases
                         :                  this should be fine. However, you may want
                         :                  to fix this value if you already know it
                         :                  and want to reduce the training time.
                         :    ␛[1mGDTauPrec     ␛[0m: precision of estimated tau
                         :                  Increase this precision to find a better
                         :                  cut-off parameter.
                         :    ␛[1mGDNStep       ␛[0m: number of steps in path search
                         :                  If the number of steps is too small, then
                         :                  the program will give a warning message.
                         : 
                         : III. WARNING MESSAGES
                         : 
                         : ␛[1mRisk(i+1)>=Risk(i) in path␛[0m
                         : ␛[1mChaotic behaviour of risk evolution.␛[0m
                         :                  The error rate was still decreasing at the end
                         :                  By construction the Risk should always decrease.
                         :                  However, if the training sample is too small or
                         :                  the model is overtrained, such warnings can
                         :                  occur.
                         :                  The warnings can safely be ignored if only a
                         :                  few (<3) occur. If more warnings are generated,
                         :                  the fitting fails.
                         :                  A remedy may be to increase the value
                         :                  ␛[1mGDValidEveFrac␛[0m to 1.0 (or a larger value).
                         :                  In addition, if ␛[1mGDPathEveFrac␛[0m is too high
                         :                  the same warnings may occur since the events
                         :                  used for error estimation are also used for
                         :                  path estimation.
                         :                  Another possibility is to modify the model;
                         :                  see above on tuning the rule ensemble.
                         : 
                         : ␛[1mThe error rate was still decreasing at the end of the path␛[0m
                         :                  Too few steps in path! Increase ␛[1mGDNSteps␛[0m.
                         : 
                         : ␛[1mReached minimum early in the search␛[0m
                         :                  The minimum was found early in the fitting. This
                         :                  may indicate that the step size ␛[1mGDStep␛[0m
                         :                  was too large. Reduce it and rerun.
                         :                  If the results are still not OK, modify the
                         :                  model, either by changing the rule ensemble
                         :                  or by adding/removing linear terms.
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ␛[1m================================================================␛[0m
                         : 
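The ensemble summary that follows (AdaBoost forest, 20 trees, rules plus a linear model, importance cut at 0.001, automatically estimated tau) reflects the RuleFit booking options. A sketch of such a booking is shown here; the option values are assumptions chosen to match the summary below, not necessarily the macro's exact string:

    // Sketch only: RuleFit with an AdaBoost forest and automatic GDTau estimation
    // (option values are assumptions consistent with the summary printed below)
    factory->BookMethod(dataloader, TMVA::Types::kRuleFit, "RuleFit",
       "H:!V:RuleFitModule=RFTMVA:Model=ModRuleLinear:MinImp=0.001:RuleMinDist=0.001:"
       "NTrees=20:fEventsMin=0.01:fEventsMax=0.5:GDTau=-1.0:GDTauPrec=0.01:"
       "GDStep=0.01:GDNSteps=10000:GDErrScale=1.02");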
RuleFit                  : -------------------RULE ENSEMBLE SUMMARY------------------------
                         : Tree training method               : AdaBoost
                         : Number of events per tree          : 2000
                         : Number of trees                    : 20
                         : Number of generated rules          : 182
                         : Idem, after cleanup                : 68
                         : Average number of cuts per rule    :     2.81
                         : Spread in number of cuts per rules :     1.14
                         : ----------------------------------------------------------------
                         : 
                         : GD path scan - the scan stops when the max num. of steps is reached or a min is found
                         : Estimating the cutoff parameter tau. The estimated time is a pessimistic maximum.
                         : Best path found with tau = 0.0700 after 0.8 sec      
                         : Fitting model...
<WARNING>                : Risk(i+1)>=Risk(i) in path
                         : Risk(i+1)>=Risk(i) in path
                         : Risk(i+1)>=Risk(i) in path
                         : Risk(i+1)>=Risk(i) in path
<WARNING>                : Chaotic behaviour of risk evolution
                         : --- STOPPING MINIMISATION ---
                         : This may be OK if minimum is already found
                         : 
                         : Minimisation elapsed time : 0.088 sec                      
                         : ----------------------------------------------------------------
                         : Found minimum at step 800 with error = 0.552444
                         : Reason for ending loop: chaotic behaviour of risk
                         : ----------------------------------------------------------------
                         : Removed 20 out of a total of 68 rules with importance < 0.001
                         : 
                         : ================================================================
                         :                           M o d e l                             
                         : ================================================================
RuleFit                  : Offset (a0) = 3.61944
                         : ------------------------------------
                         : Linear model (weights unnormalised)
                         : ------------------------------------
                         : Variable :     Weights : Importance
                         : ------------------------------------
                         :   myvar1 :  -1.919e-01 :  0.534
                         :   myvar2 :  -3.383e-02 :  0.056
                         :     var3 :   1.376e-02 :  0.023
                         :     var4 :   4.973e-01 :  1.000
                         : ------------------------------------
                         : Number of rules = 48
                         : Printing the first 10 rules, ordered in importance.
                         : Rule    1 : Importance  = 0.4463
                         :             Cut  1 :              myvar2 <     -0.023
                         :             Cut  2 :              var4 <     -0.123
                         : Rule    2 : Importance  = 0.3602
                         :             Cut  1 :     -0.691 < myvar1             
                         :             Cut  2 :              var4 <      0.963
                         : Rule    3 : Importance  = 0.3405
                         :             Cut  1 :              myvar1 <       2.15
                         :             Cut  2 :              var4 <      -1.16
                         : Rule    4 : Importance  = 0.3341
                         :             Cut  1 :              var4 <       1.94
                         : Rule    5 : Importance  = 0.3248
                         :             Cut  1 :       1.25 < myvar1             
                         :             Cut  2 :     -0.725 < var3             
                         : Rule    6 : Importance  = 0.3206
                         :             Cut  1 :              myvar1 <       1.62
                         :             Cut  2 :     -0.023 < myvar2             
                         :             Cut  3 :              var4 <      0.245
                         : Rule    7 : Importance  = 0.3113
                         :             Cut  1 :     -0.725 < var3             
                         : Rule    8 : Importance  = 0.3053
                         :             Cut  1 :     -0.023 < myvar2             
                         :             Cut  2 :              var4 <     -0.123
                         : Rule    9 : Importance  = 0.2386
                         :             Cut  1 :              var3 <     -0.725
                         : Rule   10 : Importance  = 0.2294
                         :             Cut  1 :      -2.24 < myvar1             
                         : Skipping the next 38 rules
                         : ================================================================
                         : 
<WARNING>                : No input variable directory found - BUG?
                         : Elapsed time for training with 2000 events: 0.907 sec         
RuleFit                  : [dataset] : Evaluation of RuleFit on training sample (2000 events)
                         : Elapsed time for evaluation of 2000 events: 0.00102 sec       
                         : Creating xml weight file: ␛[0;36mdataset/weights/TMVAClassification_RuleFit.weights.xml␛[0m
                         : Creating standalone class: ␛[0;36mdataset/weights/TMVAClassification_RuleFit.class.C␛[0m
                         : TMVAC.root:/dataset/Method_RuleFit/RuleFit
Factory                  : Training finished
                         : 
                         : Ranking input variables (method specific)...
                         : No variable ranking supplied by classifier: Cuts
                         : No variable ranking supplied by classifier: CutsD
Likelihood               : Ranking result (top variable is best ranked)
                         : -------------------------------------
                         : Rank : Variable  : Delta Separation
                         : -------------------------------------
                         :    1 : var4      : 2.507e-02
                         :    2 : myvar2    : 1.672e-02
                         :    3 : myvar1    : 1.463e-02
                         :    4 : var3      : 9.479e-03
                         : -------------------------------------
LikelihoodPCA            : Ranking result (top variable is best ranked)
                         : -------------------------------------
                         : Rank : Variable  : Delta Separation
                         : -------------------------------------
                         :    1 : var4      : 3.021e-01
                         :    2 : myvar1    : 8.496e-02
                         :    3 : var3      : 2.592e-02
                         :    4 : myvar2    : 3.204e-03
                         : -------------------------------------
                         : No variable ranking supplied by classifier: PDERS
PDEFoam                  : Ranking result (top variable is best ranked)
                         : ----------------------------------------
                         : Rank : Variable  : Variable Importance
                         : ----------------------------------------
                         :    1 : var4      : 3.810e-01
                         :    2 : myvar1    : 2.381e-01
                         :    3 : var3      : 2.143e-01
                         :    4 : myvar2    : 1.667e-01
                         : ----------------------------------------
                         : No variable ranking supplied by classifier: KNN
LD                       : Ranking result (top variable is best ranked)
                         : ---------------------------------
                         : Rank : Variable  : Discr. power
                         : ---------------------------------
                         :    1 : var4      : 6.652e-01
                         :    2 : myvar1    : 2.840e-01
                         :    3 : var3      : 1.391e-01
                         :    4 : myvar2    : 8.718e-02
                         : ---------------------------------
                         : No variable ranking supplied by classifier: FDA_GA
MLPBNN                   : Ranking result (top variable is best ranked)
                         : -------------------------------
                         : Rank : Variable  : Importance
                         : -------------------------------
                         :    1 : var4      : 1.242e+00
                         :    2 : myvar1    : 9.523e-01
                         :    3 : myvar2    : 4.818e-01
                         :    4 : var3      : 4.040e-01
                         : -------------------------------
                         : No variable ranking supplied by classifier: DNN_CPU
                         : No variable ranking supplied by classifier: SVM
BDT                      : Ranking result (top variable is best ranked)
                         : ----------------------------------------
                         : Rank : Variable  : Variable Importance
                         : ----------------------------------------
                         :    1 : var4      : 2.758e-01
                         :    2 : myvar2    : 2.536e-01
                         :    3 : myvar1    : 2.507e-01
                         :    4 : var3      : 2.199e-01
                         : ----------------------------------------
RuleFit                  : Ranking result (top variable is best ranked)
                         : -------------------------------
                         : Rank : Variable  : Importance
                         : -------------------------------
                         :    1 : var4      : 1.000e+00
                         :    2 : myvar1    : 7.442e-01
                         :    3 : myvar2    : 6.695e-01
                         :    4 : var3      : 4.612e-01
                         : -------------------------------
TH1.Print Name  = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 24.9041
TH1.Print Name  = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 28.2407
Factory                  : === Destroy and recreate all methods via weight files for testing ===
                         : 
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_Cuts.weights.xml␛[0m
                         : Read cuts optimised using sample of MC events
                         : Reading 100 signal efficiency bins for 4 variables
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_CutsD.weights.xml␛[0m
                         : Read cuts optimised using sample of MC events
                         : Reading 100 signal efficiency bins for 4 variables
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_Likelihood.weights.xml␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_LikelihoodPCA.weights.xml␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_PDERS.weights.xml␛[0m
                         : signal and background scales: 0.001 0.001
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_PDEFoam.weights.xml␛[0m
                         : Read foams from file: ␛[0;36mdataset/weights/TMVAClassification_PDEFoam.weights_foams.root␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_KNN.weights.xml␛[0m
                         : Creating kd-tree with 2000 events
                         : Computing scale factor for 1d distributions: (ifrac, bottom, top) = (80%, 10%, 90%)
ModulekNN                : Optimizing tree for 4 variables with 2000 values
                         : <Fill> Class 1 has     1000 events
                         : <Fill> Class 2 has     1000 events
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_LD.weights.xml␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_FDA_GA.weights.xml␛[0m
                         : User-defined formula string       : "(0)+(1)*x0+(2)*x1+(3)*x2+(4)*x3"
                         : TFormula-compatible formula string: "[0]+[1]*[5]+[2]*[6]+[3]*[7]+[4]*[8]"
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_MLPBNN.weights.xml␛[0m
MLPBNN                   : Building Network. 
                         : Initializing weights
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_DNN_CPU.weights.xml␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_SVM.weights.xml␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_BDT.weights.xml␛[0m
                         : Reading weight file: ␛[0;36mdataset/weights/TMVAClassification_RuleFit.weights.xml␛[0m
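All methods have now been rebuilt from the weight files listed above, exactly as a standalone analysis would do with TMVA::Reader. A minimal sketch of such an application is given below; only the weight-file path is taken from the log, while the variable expressions and the dummy event values are assumptions for illustration (they must match those used at training time):

    // Sketch only: apply one of the weight files above in a standalone macro
    #include <iostream>
    #include "TMVA/Reader.h"

    void ApplyBDT() {
       TMVA::Reader reader("!Color:!Silent");
       Float_t myvar1, myvar2, var3, var4;
       reader.AddVariable("myvar1", &myvar1);   // expressions assumed; must match the training
       reader.AddVariable("myvar2", &myvar2);
       reader.AddVariable("var3",   &var3);
       reader.AddVariable("var4",   &var4);
       reader.BookMVA("BDT method", "dataset/weights/TMVAClassification_BDT.weights.xml");

       // Fill the variables for one (dummy) event and evaluate the classifier response.
       myvar1 = 0.1; myvar2 = -0.2; var3 = 0.3; var4 = 0.5;
       std::cout << "BDT response: " << reader.EvaluateMVA("BDT method") << std::endl;
    }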
Factory                  : ␛[1mTest all methods␛[0m
Factory                  : Test method: Cuts for Classification performance
                         : 
Cuts                     : [dataset] : Evaluation of Cuts on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.000348 sec       
Factory                  : Test method: CutsD for Classification performance
                         : 
CutsD                    : [dataset] : Evaluation of CutsD on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.00238 sec       
Factory                  : Test method: Likelihood for Classification performance
                         : 
Likelihood               : [dataset] : Evaluation of Likelihood on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.00448 sec       
Factory                  : Test method: LikelihoodPCA for Classification performance
                         : 
LikelihoodPCA            : [dataset] : Evaluation of LikelihoodPCA on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.00836 sec       
Factory                  : Test method: PDERS for Classification performance
                         : 
PDERS                    : [dataset] : Evaluation of PDERS on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.477 sec       
Factory                  : Test method: PDEFoam for Classification performance
                         : 
PDEFoam                  : [dataset] : Evaluation of PDEFoam on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.0345 sec       
Factory                  : Test method: KNN for Classification performance
                         : 
KNN                      : [dataset] : Evaluation of KNN on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.102 sec       
Factory                  : Test method: LD for Classification performance
                         : 
LD                       : [dataset] : Evaluation of LD on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.00122 sec       
                         : Dataset[dataset] : Evaluation of LD on testing sample
Factory                  : Test method: FDA_GA for Classification performance
                         : 
FDA_GA                   : [dataset] : Evaluation of FDA_GA on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.000919 sec       
Factory                  : Test method: MLPBNN for Classification performance
                         : 
MLPBNN                   : [dataset] : Evaluation of MLPBNN on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.00712 sec       
Factory                  : Test method: DNN_CPU for Classification performance
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.15875    0.21008   [    -1.0772     1.1019 ]
                         :   myvar2:  -0.052512    0.28764   [    -1.0979    0.99917 ]
                         :     var3:    0.14402    0.22647   [    -1.0428     1.1058 ]
                         :     var4:    0.24579    0.22520   [    -1.1202     1.0941 ]
                         : -----------------------------------------------------------
DNN_CPU                  : [dataset] : Evaluation of DNN_CPU on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.252 sec       
Factory                  : Test method: SVM for Classification performance
                         : 
SVM                      : [dataset] : Evaluation of SVM on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.144 sec       
Factory                  : Test method: BDT for Classification performance
                         : 
BDT                      : [dataset] : Evaluation of BDT on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.279 sec       
Factory                  : Test method: RuleFit for Classification performance
                         : 
RuleFit                  : [dataset] : Evaluation of RuleFit on testing sample (10000 events)
                         : Elapsed time for evaluation of 10000 events: 0.00552 sec       
Factory                  : ␛[1mEvaluate all methods␛[0m
Factory                  : Evaluate classifier: Cuts
                         : 
<WARNING>                : You have asked for histogram MVA_EFF_BvsS which does not seem to exist in *Results* .. better don't use it 
<WARNING>                : You have asked for histogram EFF_BVSS_TR which does not seem to exist in *Results* .. better don't use it 
TFHandler_Cuts           : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: CutsD
                         : 
<WARNING>                : You have asked for histogram MVA_EFF_BvsS which does not seem to exist in *Results* .. better don't use it 
TFHandler_CutsD          : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:   -0.14549    0.97213   [    -5.4077     4.8658 ]
                         :   myvar2:  -0.070308     1.0437   [    -3.9101     3.8233 ]
                         :     var3:  -0.072822    0.96722   [    -4.3819     4.3335 ]
                         :     var4:    0.62627    0.92018   [    -3.9664     3.6405 ]
                         : -----------------------------------------------------------
<WARNING>                : You have asked for histogram EFF_BVSS_TR which does not seem to exist in *Results* .. better don't use it 
TFHandler_CutsD          : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:   -0.16999     1.0000   [    -4.4163     3.9983 ]
                         :   myvar2:  -0.080682     1.0000   [    -3.4441     3.7507 ]
                         :     var3:   -0.14363     1.0000   [    -3.7799     3.6146 ]
                         :     var4:    0.32786     1.0000   [    -3.3861     3.3152 ]
                         : -----------------------------------------------------------
TFHandler_CutsD          : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:   -0.14549    0.97213   [    -5.4077     4.8658 ]
                         :   myvar2:  -0.070308     1.0437   [    -3.9101     3.8233 ]
                         :     var3:  -0.072822    0.96722   [    -4.3819     4.3335 ]
                         :     var4:    0.62627    0.92018   [    -3.9664     3.6405 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: Likelihood
                         : 
Likelihood               : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_Likelihood     : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: LikelihoodPCA
                         : 
TFHandler_LikelihoodPCA  : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:     1.3857     2.2495   [    -12.232     10.994 ]
                         :   myvar2:   -0.16933     1.1235   [    -4.1034     3.9180 ]
                         :     var3:   -0.20081    0.58158   [    -2.2789     1.9800 ]
                         :     var4:   -0.31202    0.33076   [    -1.3887    0.89743 ]
                         : -----------------------------------------------------------
LikelihoodPCA            : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_LikelihoodPCA  : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:     1.3857     2.2495   [    -12.232     10.994 ]
                         :   myvar2:   -0.16933     1.1235   [    -4.1034     3.9180 ]
                         :     var3:   -0.20081    0.58158   [    -2.2789     1.9800 ]
                         :     var4:   -0.31202    0.33076   [    -1.3887    0.89743 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: PDERS
                         : 
PDERS                    : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_PDERS          : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: PDEFoam
                         : 
PDEFoam                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_PDEFoam        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: KNN
                         : 
KNN                      : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_KNN            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: LD
                         : 
LD                       : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Also filling probability and rarity histograms (on request)...
TFHandler_LD             : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: FDA_GA
                         : 
FDA_GA                   : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_FDA_GA         : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: MLPBNN
                         : 
TFHandler_MLPBNN         : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.15875    0.21008   [    -1.0772     1.1019 ]
                         :   myvar2:  -0.052512    0.28764   [    -1.0979    0.99917 ]
                         :     var3:    0.14402    0.22647   [    -1.0428     1.1058 ]
                         :     var4:    0.24579    0.22520   [    -1.1202     1.0941 ]
                         : -----------------------------------------------------------
MLPBNN                   : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_MLPBNN         : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.15875    0.21008   [    -1.0772     1.1019 ]
                         :   myvar2:  -0.052512    0.28764   [    -1.0979    0.99917 ]
                         :     var3:    0.14402    0.22647   [    -1.0428     1.1058 ]
                         :     var4:    0.24579    0.22520   [    -1.1202     1.0941 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: DNN_CPU
                         : 
DNN_CPU                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.12229    0.21864   [    -1.0000     1.0000 ]
                         :   myvar2:  -0.056321    0.27578   [    -1.0000     1.0000 ]
                         :     var3:   0.098365    0.23486   [    -1.0000     1.0000 ]
                         :     var4:    0.18509    0.23712   [    -1.0000     1.0000 ]
                         : -----------------------------------------------------------
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.15875    0.21008   [    -1.0772     1.1019 ]
                         :   myvar2:  -0.052512    0.28764   [    -1.0979    0.99917 ]
                         :     var3:    0.14402    0.22647   [    -1.0428     1.1058 ]
                         :     var4:    0.24579    0.22520   [    -1.1202     1.0941 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: SVM
                         : 
TFHandler_SVM            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.15875    0.21008   [    -1.0772     1.1019 ]
                         :   myvar2:  -0.052512    0.28764   [    -1.0979    0.99917 ]
                         :     var3:    0.14402    0.22647   [    -1.0428     1.1058 ]
                         :     var4:    0.24579    0.22520   [    -1.1202     1.0941 ]
                         : -----------------------------------------------------------
SVM                      : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_SVM            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.15875    0.21008   [    -1.0772     1.1019 ]
                         :   myvar2:  -0.052512    0.28764   [    -1.0979    0.99917 ]
                         :     var3:    0.14402    0.22647   [    -1.0428     1.1058 ]
                         :     var4:    0.24579    0.22520   [    -1.1202     1.0941 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: BDT
                         : 
BDT                      : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_BDT            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: RuleFit
                         : 
RuleFit                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_RuleFit        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :   myvar1:    0.21443     1.7124   [    -9.8605     7.9024 ]
                         :   myvar2:  -0.041911     1.1126   [    -4.0854     4.0259 ]
                         :     var3:    0.16712     1.0539   [    -5.3563     4.6430 ]
                         :     var4:    0.43437     1.2203   [    -6.9675     5.0307 ]
                         : -----------------------------------------------------------
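
The TFHandler blocks above show each classifier's view of the input variables after its booked preprocessing: methods running on the untransformed inputs reproduce the raw means and RMS values, LikelihoodPCA sees the PCA-rotated variables, and MLPBNN, DNN_CPU and SVM see inputs rescaled to roughly [-1, 1]. As an aside (not part of this macro), the same kind of PCA rotation can be reproduced with ROOT's TPrincipal class; the toy sketch below only illustrates the idea on a correlated Gaussian pair and is not TMVA's internal code:

// pca_sketch.C -- stand-alone illustration of a PCA rotation, the kind of
// transformation whose effect is visible in the TFHandler_LikelihoodPCA
// table above. Toy data and names are my own; this is not TMVA code.
#include <cstdio>
#include "TPrincipal.h"
#include "TRandom3.h"

void pca_sketch()
{
   TRandom3 rng(42);
   TPrincipal pca(2, "ND"); // 2 input variables; "N" normalises, "D" stores the data rows

   // Generate two linearly correlated Gaussian variables, similar in spirit
   // to the toy-MC inputs of this tutorial.
   for (int i = 0; i < 10000; ++i) {
      const double a = rng.Gaus(0, 1), b = rng.Gaus(0, 0.3);
      const double row[2] = {a + b, a - b}; // strongly correlated pair
      pca.AddRow(row);
   }
   pca.MakePrincipals();

   // Rotate one point into the principal-component basis.
   const double x[2] = {1.0, 0.8};
   double p[2] = {0, 0};
   pca.X2P(x, p);
   std::printf("original (%.2f, %.2f) -> principal components (%.3f, %.3f)\n",
               x[0], x[1], p[0], p[1]);
}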
                         : 
                         : Evaluation results ranked by best signal efficiency and purity (area)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet       MVA                       
                         : Name:         Method:          ROC-integ
                         : dataset       LD             : 0.923
                         : dataset       MLPBNN         : 0.921
                         : dataset       DNN_CPU        : 0.921
                         : dataset       LikelihoodPCA  : 0.919
                         : dataset       CutsD          : 0.912
                         : dataset       SVM            : 0.901
                         : dataset       RuleFit        : 0.880
                         : dataset       BDT            : 0.873
                         : dataset       KNN            : 0.828
                         : dataset       PDEFoam        : 0.812
                         : dataset       PDERS          : 0.798
                         : dataset       FDA_GA         : 0.793
                         : dataset       Cuts           : 0.791
                         : dataset       Likelihood     : 0.758
                         : -------------------------------------------------------------------------------------------------------------------
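
The ROC integrals in this ranking come from TMVA's own evaluation, but a comparable number can be re-derived offline from the TestTree written to TMVAC.root. The sketch below does this for a single method; the tree path (dataset/TestTree), the classID convention (0 taken here to be the signal class) and the per-method response branch name are assumptions based on the usual TMVA output layout, not on anything printed in this log:

// roc_sketch.C -- re-derive an approximate ROC integral for one method from
// the TestTree in TMVAC.root. Tree path, classID convention and branch name
// are assumptions (usual TMVA layout).
#include <cstdio>
#include <memory>
#include "TFile.h"
#include "TH1.h"
#include "TString.h"
#include "TTree.h"

void roc_sketch(const char *method = "BDT")
{
   std::unique_ptr<TFile> f(TFile::Open("TMVAC.root"));
   if (!f || f->IsZombie()) return;
   auto *tree = f->Get<TTree>("dataset/TestTree");
   if (!tree) return;

   // Book signal and background response histograms with a common range.
   const double lo = tree->GetMinimum(method);
   const double hi = tree->GetMaximum(method);
   tree->Draw(TString::Format("%s>>hsig(200,%g,%g)", method, lo, hi).Data(), "classID==0", "goff");
   tree->Draw(TString::Format("%s>>hbkg(200,%g,%g)", method, lo, hi).Data(), "classID==1", "goff");
   auto *hs = (TH1 *)gDirectory->Get("hsig");
   auto *hb = (TH1 *)gDirectory->Get("hbkg");
   if (!hs || !hb) return;
   hs->Scale(1. / hs->Integral());
   hb->Scale(1. / hb->Integral());

   // Scan the cut threshold from high to low response (signal is assumed to
   // sit at large response values) and integrate background rejection over
   // signal efficiency with the trapezoidal rule.
   double roc = 0., prevEffS = 0., prevRejB = 1.;
   const int nBins = hs->GetNbinsX();
   for (int i = nBins; i >= 1; --i) {
      const double effS = hs->Integral(i, nBins);
      const double rejB = 1. - hb->Integral(i, nBins);
      roc += 0.5 * (rejB + prevRejB) * (effS - prevEffS);
      prevEffS = effS;
      prevRejB = rejB;
   }
   std::printf("approximate ROC integral for %s: %.3f\n", method, roc);
}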
                         : 
                         : Testing efficiency compared to training efficiency (overtraining check)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet              MVA              Signal efficiency: from test sample (from training sample) 
                         : Name:                Method:          @B=0.01             @B=0.10            @B=0.30   
                         : -------------------------------------------------------------------------------------------------------------------
                         : dataset              LD             : 0.372 (0.335)       0.779 (0.708)      0.929 (0.927)
                         : dataset              MLPBNN         : 0.378 (0.353)       0.773 (0.712)      0.926 (0.922)
                         : dataset              DNN_CPU        : 0.377 (0.360)       0.776 (0.708)      0.925 (0.917)
                         : dataset              LikelihoodPCA  : 0.344 (0.302)       0.770 (0.684)      0.925 (0.917)
                         : dataset              CutsD          : 0.297 (0.327)       0.748 (0.708)      0.914 (0.894)
                         : dataset              SVM            : 0.336 (0.307)       0.724 (0.675)      0.894 (0.901)
                         : dataset              RuleFit        : 0.197 (0.204)       0.655 (0.681)      0.878 (0.897)
                         : dataset              BDT            : 0.274 (0.472)       0.644 (0.707)      0.855 (0.897)
                         : dataset              KNN            : 0.127 (0.184)       0.535 (0.570)      0.794 (0.850)
                         : dataset              PDEFoam        : 0.138 (0.178)       0.491 (0.519)      0.758 (0.767)
                         : dataset              PDERS          : 0.179 (0.172)       0.462 (0.449)      0.747 (0.755)
                         : dataset              FDA_GA         : 0.120 (0.106)       0.457 (0.457)      0.738 (0.770)
                         : dataset              Cuts           : 0.116 (0.128)       0.459 (0.458)      0.735 (0.785)
                         : dataset              Likelihood     : 0.092 (0.089)       0.383 (0.377)      0.686 (0.698)
                         : -------------------------------------------------------------------------------------------------------------------
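
The table above checks for overtraining by comparing signal efficiencies on the test and training samples at three fixed background-efficiency working points; the BDT row (0.274 on the test sample versus 0.472 on the training sample at B=0.01) shows the largest gap in this run. A complementary, offline check, not performed by this macro, is to compare the full response distributions of the two samples with a Kolmogorov-Smirnov test. The sketch below assumes the same tree and branch layout as before:

// overtrain_sketch.C -- compare test and training response shapes of one
// method with a Kolmogorov-Smirnov test. Tree paths and branch names
// (dataset/TestTree, dataset/TrainTree, classID, the method name) are
// assumptions based on the usual TMVA layout.
#include <cstdio>
#include <memory>
#include "TFile.h"
#include "TH1.h"
#include "TString.h"
#include "TTree.h"

void overtrain_sketch(const char *method = "BDT")
{
   std::unique_ptr<TFile> f(TFile::Open("TMVAC.root"));
   if (!f || f->IsZombie()) return;
   auto *test  = f->Get<TTree>("dataset/TestTree");
   auto *train = f->Get<TTree>("dataset/TrainTree");
   if (!test || !train) return;

   for (int cls = 0; cls < 2; ++cls) { // assumed: 0 = signal, 1 = background
      const TString sel = TString::Format("classID==%d", cls);
      // BDT responses lie in [-1, 1]; adjust the range for other methods.
      test->Draw(TString::Format("%s>>h_test(100,-1,1)", method).Data(), sel.Data(), "goff");
      train->Draw(TString::Format("%s>>h_train(100,-1,1)", method).Data(), sel.Data(), "goff");
      auto *hTest  = (TH1 *)gDirectory->Get("h_test");
      auto *hTrain = (TH1 *)gDirectory->Get("h_train");
      if (!hTest || !hTrain) continue;
      std::printf("%s, class %d: KS probability (test vs. train) = %.3f\n",
                  method, cls, hTest->KolmogorovTest(hTrain));
      delete hTest;   // remove before re-booking in the next iteration
      delete hTrain;
   }
}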
                         : 
Dataset:dataset          : Created tree 'TestTree' with 10000 events
                         : 
Dataset:dataset          : Created tree 'TrainTree' with 2000 events
                         : 
Factory                  : Thank you for using TMVA!
                         : For citation information, please visit: http://tmva.sf.net/citeTMVA.html
==> Wrote root file: TMVAC.root
==> TMVAClassification is done!
(int) 0
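
Besides TMVAC.root, the run leaves the trained weights on disk, by default one XML file per method under dataset/weights/, named after the Factory job. These files are what TMVA::Reader needs to apply a classifier to new events. The following sketch is hypothetical: it assumes the job name TMVAClassification for the weight-file path, and the AddVariable calls must repeat exactly the variable definitions used at training time (here only the variable names from the tables above are used; adjust them, and add matching AddSpectator calls, if the training booked expressions or spectator variables):

// apply_sketch.C -- apply one trained classifier to new values with
// TMVA::Reader. The weight-file path assumes the Factory job was named
// "TMVAClassification" and the DataLoader "dataset"; the AddVariable calls
// must match the training definitions exactly.
#include <cstdio>
#include "TMVA/Reader.h"

void apply_sketch()
{
   Float_t myvar1 = 0.f, myvar2 = 0.f, var3 = 0.f, var4 = 0.f;

   TMVA::Reader reader("!Color:!Silent");
   reader.AddVariable("myvar1", &myvar1); // must match the training definition
   reader.AddVariable("myvar2", &myvar2);
   reader.AddVariable("var3",   &var3);
   reader.AddVariable("var4",   &var4);
   // If spectator variables were booked at training time, matching
   // AddSpectator calls are required here as well.
   reader.BookMVA("BDT method", "dataset/weights/TMVAClassification_BDT.weights.xml");

   // Evaluate the response for one toy event; in practice this sits in an event loop.
   myvar1 = 0.2f; myvar2 = -0.1f; var3 = 0.3f; var4 = 0.5f;
   const double response = reader.EvaluateMVA("BDT method");
   std::printf("BDT response = %.4f\n", response);
}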