==> Start TMVAMulticlass
--- TMVAMulticlass: Using input file: /github/home/ROOT-CI/build/tutorials/tmva/data/tmva_multiclass_example.root
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree TreeS of type Signal with 2000 events
DataSetInfo : [dataset] : Added class "bg0"
: Add Tree TreeB0 of type bg0 with 2000 events
DataSetInfo : [dataset] : Added class "bg1"
: Add Tree TreeB1 of type bg1 with 2000 events
DataSetInfo : [dataset] : Added class "bg2"
: Add Tree TreeB2 of type bg2 with 2000 events
: Dataset[dataset] : Class index : 0 name : Signal
: Dataset[dataset] : Class index : 1 name : bg0
: Dataset[dataset] : Class index : 2 name : bg1
: Dataset[dataset] : Class index : 3 name : bg2
Factory : Booking method: BDTG
:
: the option NegWeightTreatment=InverseBoostNegWeights does not exist for BoostType=Grad
: --> change to new default NegWeightTreatment=Pray
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree TreeS
: Building event vectors for type 2 bg0
: Dataset[dataset] : create input formulas for tree TreeB0
: Building event vectors for type 2 bg1
: Dataset[dataset] : create input formulas for tree TreeB1
: Building event vectors for type 2 bg2
: Dataset[dataset] : create input formulas for tree TreeB2
DataSetFactory : [dataset] : Number of events in input trees
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 1000
: Signal -- testing events : 1000
: Signal -- training and testing events: 2000
: bg0 -- training events : 1000
: bg0 -- testing events : 1000
: bg0 -- training and testing events: 2000
: bg1 -- training events : 1000
: bg1 -- testing events : 1000
: bg1 -- training and testing events: 2000
: bg2 -- training events : 1000
: bg2 -- testing events : 1000
: bg2 -- training and testing events: 2000
:
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------
: var1 var2 var3 var4
: var1: +1.000 +0.385 +0.621 +0.838
: var2: +0.385 +1.000 +0.698 +0.723
: var3: +0.621 +0.698 +1.000 +0.849
: var4: +0.838 +0.723 +0.849 +1.000
: ----------------------------------------
DataSetInfo : Correlation matrix (bg0):
: ----------------------------------------
: var1 var2 var3 var4
: var1: +1.000 +0.413 +0.612 +0.833
: var2: +0.413 +1.000 +0.728 +0.753
: var3: +0.612 +0.728 +1.000 +0.855
: var4: +0.833 +0.753 +0.855 +1.000
: ----------------------------------------
DataSetInfo : Correlation matrix (bg1):
: ----------------------------------------
: var1 var2 var3 var4
: var1: +1.000 +0.423 +0.619 +0.846
: var2: +0.423 +1.000 +0.705 +0.730
: var3: +0.619 +0.705 +1.000 +0.855
: var4: +0.846 +0.730 +0.855 +1.000
: ----------------------------------------
DataSetInfo : Correlation matrix (bg2):
: ----------------------------------------
: var1 var2 var3 var4
: var1: +1.000 -0.658 +0.032 -0.004
: var2: -0.658 +1.000 -0.000 +0.014
: var3: +0.032 -0.000 +1.000 -0.048
: var4: -0.004 +0.014 -0.048 +1.000
: ----------------------------------------
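The matrices above are plain Pearson correlations of the four input variables within each class. A minimal sketch of how such a matrix can be reproduced (synthetic correlated data as a stand-in; the tutorial reads var1..var4 from the TreeS/TreeB* branches instead):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for one class's (n_events, 4) variable array;
# a shared component induces positive correlations, as in the Signal/bg0/bg1 matrices.
base = rng.normal(size=(2000, 1))
x = np.hstack([base + 0.5 * rng.normal(size=(2000, 1)) for _ in range(4)])

corr = np.corrcoef(x, rowvar=False)  # 4x4 Pearson correlation matrix
print(np.round(corr, 3))             # diagonal entries are exactly +1.000
```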
DataSetFactory : [dataset] :
:
Factory : Booking method: MLP
:
MLP : Building Network.
: Initializing weights
Factory : Booking method: PDEFoam
:
Factory : Booking method: DL_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:WeightInitialization=XAVIERUNIFORM:Architecture=GPU:Layout=TANH|100,TANH|50,TANH|10,LINEAR:TrainingStrategy=Optimizer=ADAM,LearningRate=1e-3,TestRepetitions=1,ConvergenceSteps=10,BatchSize=100,MaxEpochs=20"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=N:WeightInitialization=XAVIERUNIFORM:Architecture=GPU:Layout=TANH|100,TANH|50,TANH|10,LINEAR:TrainingStrategy=Optimizer=ADAM,LearningRate=1e-3,TestRepetitions=1,ConvergenceSteps=10,BatchSize=100,MaxEpochs=20"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "N" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: Layout: "TANH|100,TANH|50,TANH|10,LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIERUNIFORM" [Weight initialization strategy]
: Architecture: "GPU" [Which architecture to perform the training on.]
: TrainingStrategy: "Optimizer=ADAM,LearningRate=1e-3,TestRepetitions=1,ConvergenceSteps=10,BatchSize=100,MaxEpochs=20" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: InputLayout: "0|0|0" [The Layout of the input]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
DL_CPU : [dataset] : Create Transformation "N" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
<ERROR> : CUDA backend not enabled. Please make sure you have CUDA installed and that it was successfully detected by CMake (build with -Dtmva-gpu=On).
: Will now use the CPU architecture instead!
: Multi-core CPU backend not enabled. For better performance, make sure a BLAS implementation is available and was successfully detected by CMake, and that the imt CMake flag is set.
: Will use the CPU architecture anyway, but with slower performance.
Factory : Train all methods
Factory : [dataset] : Create Transformation "I" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "D" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "P" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
Factory : [dataset] : Create Transformation "D" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'var1' <---> Output : variable 'var1'
: Input : variable 'var2' <---> Output : variable 'var2'
: Input : variable 'var3' <---> Output : variable 'var3'
: Input : variable 'var4' <---> Output : variable 'var4'
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.052185 1.0190 [ -4.0592 3.2645 ]
: var2: 0.33312 1.0446 [ -3.6891 3.7877 ]
: var3: 0.10463 1.1205 [ -3.6296 3.9200 ]
: var4: -0.078123 1.2764 [ -4.8486 4.3625 ]
: -----------------------------------------------------------
: Preparing the Decorrelation transformation...
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.089502 1.0000 [ -3.4349 2.7570 ]
: var2: 0.38543 1.0000 [ -3.3765 3.1055 ]
: var3: 0.052636 1.0000 [ -2.8007 3.1004 ]
: var4: -0.20867 1.0000 [ -3.0012 2.5822 ]
: -----------------------------------------------------------
: Preparing the Principal Component (PCA) transformation...
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1:-2.3297e-10 1.8127 [ -7.2691 6.3617 ]
: var2:-3.1381e-10 0.89464 [ -2.7283 2.6323 ]
: var3:-2.2463e-10 0.73955 [ -2.6363 2.4256 ]
: var4:-9.8869e-11 0.61727 [ -1.7822 2.2327 ]
: -----------------------------------------------------------
: Preparing the Gaussian transformation...
: Preparing the Decorrelation transformation...
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.0071986 1.0000 [ -2.5427 5.8540 ]
: var2: 0.0087421 1.0000 [ -2.8611 4.9796 ]
: var3: 0.0090897 1.0000 [ -2.9572 5.6365 ]
: var4: 0.0084612 1.0000 [ -3.0233 5.7479 ]
: -----------------------------------------------------------
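The "D" rows above show exactly unit RMS after decorrelation. A sketch of a decorrelation transform via the inverse matrix square root of the covariance (synthetic data; details of TMVA's exact implementation, e.g. centring conventions, are an assumption here):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=(4000, 1))
x = np.hstack([base + rng.normal(size=(4000, 1)) for _ in range(4)])

# Decorrelate with the inverse square root of the sample covariance,
# computed via eigen-decomposition (the idea behind TMVA's "D" transform).
cov = np.cov(x, rowvar=False)
w, v = np.linalg.eigh(cov)
x_dec = (x - x.mean(axis=0)) @ v @ np.diag(w ** -0.5) @ v.T

# Covariance of the transformed data is the identity: unit RMS, no correlation.
print(np.round(np.cov(x_dec, rowvar=False), 6))
```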
: Ranking input variables (method unspecific)...
Factory : Train method: BDTG for Multiclass classification
:
: Training 1000 Decision Trees ... patience please
: Elapsed time for training with 4000 events: 2.87 sec
: Dataset[dataset] : Create results for training
: Dataset[dataset] : Multiclass evaluation of BDTG on training sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.72 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating xml weight file: dataset/weights/TMVAMulticlass_BDTG.weights.xml
: Creating standalone class: dataset/weights/TMVAMulticlass_BDTG.class.C
: TMVAMulticlass.root:/dataset/Method_BDT/BDTG
Factory : Training finished
:
Factory : Train method: MLP for Multiclass classification
:
: Training Network
:
: Elapsed time for training with 4000 events: 11.7 sec
: Dataset[dataset] : Create results for training
: Dataset[dataset] : Multiclass evaluation of MLP on training sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.00414 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating xml weight file: dataset/weights/TMVAMulticlass_MLP.weights.xml
: Creating standalone class: dataset/weights/TMVAMulticlass_MLP.class.C
: Write special histos to file: TMVAMulticlass.root:/dataset/Method_MLP/MLP
Factory : Training finished
:
Factory : Train method: PDEFoam for Multiclass classification
:
: Build up multiclass foam 0
: Elapsed time: 0.342 sec
: Build up multiclass foam 1
: Elapsed time: 0.326 sec
: Build up multiclass foam 2
: Elapsed time: 0.318 sec
: Build up multiclass foam 3
: Elapsed time: 0.211 sec
: Elapsed time for training with 4000 events: 1.27 sec
: Dataset[dataset] : Create results for training
: Dataset[dataset] : Multiclass evaluation of PDEFoam on training sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.0501 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating xml weight file: dataset/weights/TMVAMulticlass_PDEFoam.weights.xml
: writing foam MultiClassFoam0 to file
: writing foam MultiClassFoam1 to file
: writing foam MultiClassFoam2 to file
: writing foam MultiClassFoam3 to file
: Foams written to file: dataset/weights/TMVAMulticlass_PDEFoam.weights_foams.root
: Creating standalone class: dataset/weights/TMVAMulticlass_PDEFoam.class.C
Factory : Training finished
:
Factory : Train method: DL_CPU for Multiclass classification
:
TFHandler_DL_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.12276 0.27828 [ -1.0000 1.0000 ]
: var2: 0.075909 0.27943 [ -1.0000 1.0000 ]
: var3: -0.010745 0.29684 [ -1.0000 1.0000 ]
: var4: 0.035804 0.27714 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
: Start of deep neural network training on a single-thread CPU (without ROOT-MT support)
:
TFHandler_DL_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.12276 0.27828 [ -1.0000 1.0000 ]
: var2: 0.075909 0.27943 [ -1.0000 1.0000 ]
: var3: -0.010745 0.29684 [ -1.0000 1.0000 ]
: var4: 0.035804 0.27714 [ -1.0000 1.0000 ]
: -----------------------------------------------------------
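VarTransform=N maps each variable linearly onto [-1, 1] using the training-set minimum and maximum, which is why the table above shows exactly ±1 bounds. A one-line sketch (the var1 endpoints are taken from the log above; the middle value is arbitrary):

```python
import numpy as np

def normalize(x, lo, hi):
    """Map values linearly so that lo -> -1 and hi -> +1 (the idea of TMVA's "N" transform)."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# var1's training min/max from the log; 0.052185 is its mean, used as an arbitrary midpoint.
x = np.array([-4.0592, 0.052185, 3.2645])
y = normalize(x, x.min(), x.max())
print(y)  # endpoints map exactly to -1 and +1
```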
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 4 Input = ( 1, 1, 4 ) Batch size = 100 Loss function = C
Layer 0 DENSE Layer: ( Input = 4 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 100 , Width = 50 ) Output = ( 1 , 100 , 50 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 50 , Width = 10 ) Output = ( 1 , 100 , 10 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 10 , Width = 4 ) Output = ( 1 , 100 , 4 ) Activation Function = Identity
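The printout above describes a plain dense feed-forward network, 4 → 100 → 50 → 10 → 4 with tanh hidden activations and an identity output feeding a cross-entropy loss. A minimal numpy sketch of the same layout (random Xavier-style weights are an assumption; TMVA's XAVIERUNIFORM init and CPU kernels are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [4, 100, 50, 10, 4]  # layout from the log: TANH|100,TANH|50,TANH|10,LINEAR
# Rough Xavier-style initialization (an assumption, for illustration only).
Ws = [rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_in, n_out))
      for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W in Ws[:-1]:
        x = np.tanh(x @ W)   # hidden layers: tanh activation
    return x @ Ws[-1]        # output layer: identity (raw logits)

x = rng.normal(size=(100, 4))          # one batch of 100 events, 4 input variables
logits = forward(x)
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)      # softmax, as consumed by the cross-entropy loss
print(p.shape)                         # (100, 4): one probability per class
```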
: Using 3200 events for training and 800 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.705333
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.60711 0.535476 0.0534189 0.00486514 65906.3 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.512264 0.493368 0.0539599 0.00505981 65439.5 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.477101 0.463081 0.0545329 0.00492884 64510.8 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.445253 0.435615 0.0551278 0.00499363 63828.7 0
: 5 Minimum Test error found - save the configuration
: 5 | 0.419336 0.413876 0.0546085 0.00494011 64427.3 0
: 6 Minimum Test error found - save the configuration
: 6 | 0.398756 0.396644 0.0552463 0.00522943 63978.4 0
: 7 Minimum Test error found - save the configuration
: 7 | 0.381564 0.380816 0.055199 0.00504981 63809.6 0
: 8 Minimum Test error found - save the configuration
: 8 | 0.36747 0.368971 0.0555818 0.00498368 63243.5 0
: 9 Minimum Test error found - save the configuration
: 9 | 0.356478 0.359565 0.0550012 0.00500139 64000.2 0
: 10 Minimum Test error found - save the configuration
: 10 | 0.347372 0.349822 0.0562309 0.00502515 62493 0
: 11 Minimum Test error found - save the configuration
: 11 | 0.338681 0.3428 0.0559854 0.00548596 63367 0
: 12 Minimum Test error found - save the configuration
: 12 | 0.3311 0.33556 0.0560589 0.0050539 62738.9 0
: 13 Minimum Test error found - save the configuration
: 13 | 0.324531 0.329917 0.0605069 0.00508107 57734.8 0
: 14 Minimum Test error found - save the configuration
: 14 | 0.318429 0.322525 0.0554882 0.00508361 63486.2 0
: 15 Minimum Test error found - save the configuration
: 15 | 0.31256 0.317748 0.0575187 0.005071 61013.2 0
: 16 Minimum Test error found - save the configuration
: 16 | 0.306383 0.31039 0.0559901 0.00510725 62889.6 0
: 17 Minimum Test error found - save the configuration
: 17 | 0.301644 0.306566 0.0555634 0.00504758 63346.5 0
: 18 Minimum Test error found - save the configuration
: 18 | 0.297762 0.301882 0.0561245 0.00509922 62714 0
: 19 Minimum Test error found - save the configuration
: 19 | 0.292709 0.297873 0.0560661 0.00506974 62749.6 0
: 20 Minimum Test error found - save the configuration
: 20 | 0.28918 0.294416 0.0566544 0.00511855 62092.7 0
:
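The training-strategy line above names ADAM with beta1=0.9, beta2=0.999, eps=1e-07 and learning rate 1e-3. A standalone sketch of a single ADAM update with those hyperparameters (not TMVA's implementation):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-7):
    """One ADAM update with the hyperparameters reported in the log."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentred variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

theta = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
grad = np.array([0.5, -0.5, 2.0])
theta, m, v = adam_step(theta, grad, m, v, t=1)
# On the first step the bias-corrected update is ~lr times the gradient's sign.
print(theta)
```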
: Elapsed time for training with 4000 events: 1.13 sec
: Dataset[dataset] : Create results for training
: Dataset[dataset] : Multiclass evaluation of DL_CPU on training sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.0817 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating xml weight file: dataset/weights/TMVAMulticlass_DL_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVAMulticlass_DL_CPU.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
BDTG : Ranking result (top variable is best ranked)
: --------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------
: 1 : var4 : 3.060e-01
: 2 : var1 : 2.473e-01
: 3 : var2 : 2.400e-01
: 4 : var3 : 2.067e-01
: --------------------------------------
MLP : Ranking result (top variable is best ranked)
: -----------------------------
: Rank : Variable : Importance
: -----------------------------
: 1 : var4 : 5.440e+01
: 2 : var1 : 2.568e+01
: 3 : var2 : 2.223e+01
: 4 : var3 : 7.204e+00
: -----------------------------
PDEFoam : Ranking result (top variable is best ranked)
: --------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------
: 1 : var4 : 2.756e-01
: 2 : var1 : 2.691e-01
: 3 : var2 : 2.402e-01
: 4 : var3 : 2.151e-01
: --------------------------------------
: No variable ranking supplied by classifier: DL_CPU
TH1.Print Name = TrainingHistory_DL_CPU_trainingError, Entries= 0, Total sum= 7.42568
TH1.Print Name = TrainingHistory_DL_CPU_valError, Entries= 0, Total sum= 7.35691
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVAMulticlass_BDTG.weights.xml
: Reading weight file: dataset/weights/TMVAMulticlass_MLP.weights.xml
MLP : Building Network.
: Initializing weights
: Reading weight file: dataset/weights/TMVAMulticlass_PDEFoam.weights.xml
: Read foams from file: dataset/weights/TMVAMulticlass_PDEFoam.weights_foams.root
: Reading weight file: dataset/weights/TMVAMulticlass_DL_CPU.weights.xml
Factory : Test all methods
Factory : Test method: BDTG for Multiclass classification performance
:
: Dataset[dataset] : Create results for testing
: Dataset[dataset] : Multiclass evaluation of BDTG on testing sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.426 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
Factory : Test method: MLP for Multiclass classification performance
:
: Dataset[dataset] : Create results for testing
: Dataset[dataset] : Multiclass evaluation of MLP on testing sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.00475 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
Factory : Test method: PDEFoam for Multiclass classification performance
:
: Dataset[dataset] : Create results for testing
: Dataset[dataset] : Multiclass evaluation of PDEFoam on testing sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.0492 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
Factory : Test method: DL_CPU for Multiclass classification performance
:
: Dataset[dataset] : Create results for testing
: Dataset[dataset] : Multiclass evaluation of DL_CPU on testing sample
: Dataset[dataset] : Elapsed time for evaluation of 4000 events: 0.0798 sec
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
Factory : Evaluate all methods
: Evaluate multiclass classification method: BDTG
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
TFHandler_BDTG : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.065615 1.0061 [ -4.0592 3.5808 ]
: var2: 0.29707 1.0658 [ -3.6952 3.7877 ]
: var3: 0.13183 1.1245 [ -4.5727 4.5640 ]
: var4: -0.071010 1.2654 [ -4.8486 5.0412 ]
: -----------------------------------------------------------
: Evaluate multiclass classification method: MLP
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
TFHandler_MLP : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.065615 1.0061 [ -4.0592 3.5808 ]
: var2: 0.29707 1.0658 [ -3.6952 3.7877 ]
: var3: 0.13183 1.1245 [ -4.5727 4.5640 ]
: var4: -0.071010 1.2654 [ -4.8486 5.0412 ]
: -----------------------------------------------------------
: Evaluate multiclass classification method: PDEFoam
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
TFHandler_PDEFoam : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.065615 1.0061 [ -4.0592 3.5808 ]
: var2: 0.29707 1.0658 [ -3.6952 3.7877 ]
: var3: 0.13183 1.1245 [ -4.5727 4.5640 ]
: var4: -0.071010 1.2654 [ -4.8486 5.0412 ]
: -----------------------------------------------------------
: Evaluate multiclass classification method: DL_CPU
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
: Creating multiclass response histograms...
: Creating multiclass performance histograms...
TFHandler_DL_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.12643 0.27476 [ -1.0000 1.0864 ]
: var2: 0.066267 0.28510 [ -1.0016 1.0000 ]
: var3: -0.0035395 0.29791 [ -1.2498 1.1706 ]
: var4: 0.037349 0.27475 [ -1.0000 1.1474 ]
: -----------------------------------------------------------
TFHandler_DL_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: var1: 0.12643 0.27476 [ -1.0000 1.0864 ]
: var2: 0.066267 0.28510 [ -1.0016 1.0000 ]
: var3: -0.0035395 0.29791 [ -1.2498 1.1706 ]
: var4: 0.037349 0.27475 [ -1.0000 1.1474 ]
: -----------------------------------------------------------
:
: 1-vs-rest performance metrics per class
: -------------------------------------------------------------------------------------------------------
:
: Considers the listed class as signal and the other classes
: as background, reporting the resulting binary performance.
: A score of 0.820 (0.850) means 0.820 was achieved on the
: test set and 0.850 on the training set.
:
: Dataset MVA Method ROC AUC Sig eff@B=0.01 Sig eff@B=0.10 Sig eff@B=0.30
: Name: / Class: test (train) test (train) test (train) test (train)
:
: dataset BDTG
: ------------------------------
: Signal 0.967 (0.980) 0.496 (0.616) 0.910 (0.953) 0.994 (0.997)
: bg0 0.908 (0.927) 0.201 (0.331) 0.729 (0.777) 0.924 (0.944)
: bg1 0.945 (0.955) 0.413 (0.429) 0.833 (0.860) 0.970 (0.973)
: bg2 0.974 (0.984) 0.600 (0.677) 0.926 (0.973) 0.995 (0.998)
:
: dataset MLP
: ------------------------------
: Signal 0.975 (0.976) 0.591 (0.609) 0.931 (0.940) 0.997 (0.994)
: bg0 0.930 (0.934) 0.279 (0.389) 0.781 (0.789) 0.960 (0.951)
: bg1 0.963 (0.964) 0.494 (0.462) 0.889 (0.900) 0.990 (0.994)
: bg2 0.971 (0.977) 0.653 (0.697) 0.901 (0.900) 0.994 (1.000)
:
: dataset PDEFoam
: ------------------------------
: Signal 0.924 (0.931) 0.274 (0.374) 0.760 (0.781) 0.950 (0.963)
: bg0 0.843 (0.853) 0.113 (0.167) 0.596 (0.613) 0.824 (0.833)
: bg1 0.899 (0.909) 0.287 (0.290) 0.682 (0.740) 0.914 (0.920)
: bg2 0.971 (0.968) 0.488 (0.436) 0.934 (0.913) 0.996 (0.999)
:
: dataset DL_CPU
: ------------------------------
: Signal 0.959 (0.963) 0.311 (0.362) 0.912 (0.921) 0.992 (0.989)
: bg0 0.922 (0.915) 0.280 (0.318) 0.772 (0.755) 0.942 (0.925)
: bg1 0.952 (0.951) 0.368 (0.345) 0.873 (0.860) 0.984 (0.981)
: bg2 0.917 (0.933) 0.552 (0.568) 0.751 (0.780) 0.909 (0.931)
:
: -------------------------------------------------------------------------------------------------------
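Each ROC AUC above treats one class as signal and the remaining three as background. The AUC equals the probability that a randomly chosen signal event outscores a randomly chosen background event, which gives a compact numpy sketch (toy Gaussian scores, not the tutorial's MVA outputs):

```python
import numpy as np

def roc_auc(scores_sig, scores_bkg):
    """AUC as the probability that a signal event outscores a background event
    (ties counted half), equivalent to the area under the ROC curve."""
    s = scores_sig[:, None]
    b = scores_bkg[None, :]
    return (s > b).mean() + 0.5 * (s == b).mean()

rng = np.random.default_rng(3)
sig = rng.normal(1.0, 1.0, 1000)   # toy "one class as signal" response
bkg = rng.normal(0.0, 1.0, 3000)   # toy "rest of the classes" response
auc = roc_auc(sig, bkg)
# Theoretical value for these Gaussians is Phi(1/sqrt(2)) ~ 0.76.
print(round(auc, 3))
```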
:
:
: Confusion matrices for all methods
: -------------------------------------------------------------------------------------------------------
:
: Each cell is a binary comparison between the two classes given
: by its row-column combination: the class given by the row is
: considered signal while the class given by the column is
: considered background.
:
: === Showing confusion matrix for method : BDTG
: (Signal Efficiency at Background Efficiency 0.01)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.489 (0.430) 0.864 (0.764) 0.784 (0.472)
: bg0 0.311 (0.181) - 0.207 (0.132) 0.694 (0.611)
: bg1 0.830 (0.834) 0.288 (0.339) - 0.668 (0.511)
: bg2 0.708 (0.593) 0.684 (0.593) 0.625 (0.600) -
:
: (Signal Efficiency at Background Efficiency 0.10)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.901 (0.852) 0.996 (0.993) 0.956 (0.892)
: bg0 0.810 (0.763) - 0.643 (0.601) 0.924 (0.904)
: bg1 0.984 (0.984) 0.716 (0.677) - 0.898 (0.843)
: bg2 0.983 (0.928) 0.982 (0.953) 0.948 (0.897) -
:
: (Signal Efficiency at Background Efficiency 0.30)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.981 (0.960) 1.000 (0.999) 0.999 (0.998)
: bg0 0.963 (0.927) - 0.852 (0.814) 0.990 (0.986)
: bg1 0.999 (0.998) 0.915 (0.888) - 0.984 (0.984)
: bg2 0.999 (0.996) 0.998 (0.995) 0.998 (0.993) -
:
: === Showing confusion matrix for method : MLP
: (Signal Efficiency at Background Efficiency 0.01)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.456 (0.481) 0.936 (0.943) 0.645 (0.548)
: bg0 0.421 (0.278) - 0.302 (0.229) 0.604 (0.477)
: bg1 0.913 (0.925) 0.261 (0.400) - 0.566 (0.602)
: bg2 0.675 (0.662) 0.710 (0.669) 0.696 (0.641) -
:
: (Signal Efficiency at Background Efficiency 0.10)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.875 (0.891) 0.999 (1.000) 0.920 (0.894)
: bg0 0.766 (0.755) - 0.696 (0.707) 0.909 (0.905)
: bg1 0.997 (0.992) 0.780 (0.790) - 0.901 (0.867)
: bg2 0.880 (0.890) 0.944 (0.933) 0.891 (0.886) -
:
: (Signal Efficiency at Background Efficiency 0.30)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.974 (0.972) 1.000 (1.000) 0.995 (0.998)
: bg0 0.954 (0.960) - 0.914 (0.914) 0.995 (0.992)
: bg1 0.999 (0.999) 0.958 (0.944) - 0.998 (0.996)
: bg2 0.999 (0.990) 1.000 (0.996) 0.999 (0.989) -
:
: === Showing confusion matrix for method : PDEFoam
: (Signal Efficiency at Background Efficiency 0.01)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.273 (0.124) 0.423 (0.441) 0.405 (0.491)
: bg0 0.156 (0.076) - 0.126 (0.077) 0.512 (0.496)
: bg1 0.480 (0.405) 0.197 (0.220) - 0.410 (0.350)
: bg2 0.462 (0.507) 0.462 (0.577) 0.401 (0.412) -
:
: (Signal Efficiency at Background Efficiency 0.10)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.673 (0.666) 0.826 (0.851) 0.835 (0.796)
: bg0 0.569 (0.562) - 0.528 (0.477) 0.814 (0.809)
: bg1 0.825 (0.811) 0.539 (0.520) - 0.808 (0.784)
: bg2 0.925 (0.934) 0.948 (0.969) 0.874 (0.876) -
:
: (Signal Efficiency at Background Efficiency 0.30)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.908 (0.894) 0.973 (0.965) 0.972 (0.962)
: bg0 0.781 (0.793) - 0.725 (0.740) 0.947 (0.931)
: bg1 0.955 (0.949) 0.840 (0.835) - 0.935 (0.928)
: bg2 0.999 (0.996) 0.999 (0.998) 0.999 (0.996) -
:
: === Showing confusion matrix for method : DL_CPU
: (Signal Efficiency at Background Efficiency 0.01)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.495 (0.513) 0.944 (0.956) 0.235 (0.197)
: bg0 0.339 (0.280) - 0.180 (0.247) 0.500 (0.389)
: bg1 0.866 (0.916) 0.156 (0.212) - 0.422 (0.394)
: bg2 0.618 (0.603) 0.564 (0.578) 0.386 (0.491) -
:
: (Signal Efficiency at Background Efficiency 0.10)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.893 (0.902) 0.993 (0.998) 0.793 (0.691)
: bg0 0.746 (0.765) - 0.702 (0.733) 0.762 (0.816)
: bg1 0.989 (0.988) 0.728 (0.768) - 0.823 (0.777)
: bg2 0.838 (0.804) 0.768 (0.734) 0.752 (0.724) -
:
: (Signal Efficiency at Background Efficiency 0.30)
: ---------------------------------------------------
: Signal bg0 bg1 bg2
: test (train) test (train) test (train) test (train)
: Signal - 0.986 (0.974) 1.000 (1.000) 0.948 (0.944)
: bg0 0.915 (0.940) - 0.917 (0.935) 0.956 (0.954)
: bg1 0.999 (1.000) 0.954 (0.955) - 0.971 (0.974)
: bg2 0.943 (0.919) 0.947 (0.921) 0.889 (0.864) -
:
: -------------------------------------------------------------------------------------------------------
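The confusion-matrix entries above are signal efficiencies at a cut chosen to keep a fixed fraction of the background. A sketch of "signal efficiency at background efficiency 0.10" from toy score distributions (not the tutorial's MVA outputs):

```python
import numpy as np

def sig_eff_at_bkg_eff(sig_scores, bkg_scores, bkg_eff=0.10):
    """Signal efficiency at the threshold that keeps the given background fraction."""
    cut = np.quantile(bkg_scores, 1.0 - bkg_eff)  # threshold above which 10% of bkg survives
    return (sig_scores > cut).mean()

rng = np.random.default_rng(4)
sig = rng.normal(1.5, 1.0, 1000)   # toy "row class" (signal) response
bkg = rng.normal(0.0, 1.0, 1000)   # toy "column class" (background) response
eff = sig_eff_at_bkg_eff(sig, bkg)
print(round(eff, 3))
```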
:
Dataset:dataset : Created tree 'TestTree' with 4000 events
:
Dataset:dataset : Created tree 'TrainTree' with 4000 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
==> Wrote root file: TMVAMulticlass.root
==> TMVAMulticlass is done!