TMVA_Higgs_Classification.py File Reference

Namespaces

namespace  TMVA_Higgs_Classification
 

Detailed Description

Classification example of TMVA based on the public Higgs UCI dataset

The data set is the public UCI HIGGS data set (see http://archive.ics.uci.edu/ml/datasets/HIGGS), used in this paper: Baldi, P., P. Sadowski, and D. Whiteson. “Searching for Exotic Particles in High-energy Physics with Deep Learning.” Nature Communications 5 (July 2, 2014).
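For quick orientation before reading the transcript below, here is a minimal sketch of how the reduced sample used in this tutorial (10000 signal and 10000 background events) can be opened and inspected by hand; only the file URL and tree names from this tutorial are assumed:

import ROOT

# Open (and cache locally) the reduced HIGGS sample used by this tutorial
f = ROOT.TFile.Open("http://root.cern.ch/files/Higgs_data.root", "CACHEREAD")
f.ls()                     # lists the sig_tree and bkg_tree trees
f.Get("sig_tree").Print()  # produces the branch summary shown below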

******************************************************************************
*Tree :sig_tree : tree *
*Entries : 10000 : Total = 1177229 bytes File Size = 785298 *
* : : Tree compression factor = 1.48 *
******************************************************************************
*Br 0 :Type : Type/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 307 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 130.54 *
*............................................................................*
*Br 1 :lepton_pT : lepton_pT/F *
*Entries : 10000 : Total Size= 40581 bytes File Size = 30464 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 2 :lepton_eta : lepton_eta/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 28650 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 3 :lepton_phi : lepton_phi/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 30508 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 4 :missing_energy_magnitude : missing_energy_magnitude/F *
*Entries : 10000 : Total Size= 40656 bytes File Size = 35749 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.12 *
*............................................................................*
*Br 5 :missing_energy_phi : missing_energy_phi/F *
*Entries : 10000 : Total Size= 40626 bytes File Size = 36766 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.09 *
*............................................................................*
*Br 6 :jet1_pt : jet1_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 32298 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 7 :jet1_eta : jet1_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28467 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.41 *
*............................................................................*
*Br 8 :jet1_phi : jet1_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30399 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 9 :jet1_b-tag : jet1_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5087 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.88 *
*............................................................................*
*Br 10 :jet2_pt : jet2_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 31561 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.27 *
*............................................................................*
*Br 11 :jet2_eta : jet2_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28616 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 12 :jet2_phi : jet2_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30547 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 13 :jet2_b-tag : jet2_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5031 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.97 *
*............................................................................*
*Br 14 :jet3_pt : jet3_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 30642 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 15 :jet3_eta : jet3_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28955 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.38 *
*............................................................................*
*Br 16 :jet3_phi : jet3_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30433 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 17 :jet3_b-tag : jet3_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4879 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.22 *
*............................................................................*
*Br 18 :jet4_pt : jet4_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 29189 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 19 :jet4_eta : jet4_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 29311 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 20 :jet4_phi : jet4_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30525 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 21 :jet4_b-tag : jet4_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4725 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.48 *
*............................................................................*
*Br 22 :m_jj : m_jj/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 34991 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.15 *
*............................................................................*
*Br 23 :m_jjj : m_jjj/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34460 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 24 :m_lv : m_lv/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 32232 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 25 :m_jlv : m_jlv/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34598 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 26 :m_bb : m_bb/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 35012 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.14 *
*............................................................................*
*Br 27 :m_wbb : m_wbb/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34493 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 28 :m_wwbb : m_wwbb/F *
*Entries : 10000 : Total Size= 40566 bytes File Size = 34410 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 10000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 10000 events
Factory : Booking method: Likelihood
:
Factory : Booking method: Fisher
:
Factory : Booking method: BDT
:
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 7000
: Signal -- testing events : 3000
: Signal -- training and testing events: 10000
: Background -- training events : 7000
: Background -- testing events : 3000
: Background -- training and testing events: 10000
:
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.774 -0.004 +0.096 +0.024 +0.512 +0.533
: m_jjj: +0.774 +1.000 -0.010 +0.073 +0.152 +0.674 +0.668
: m_lv: -0.004 -0.010 +1.000 +0.121 -0.027 +0.009 +0.021
: m_jlv: +0.096 +0.073 +0.121 +1.000 +0.313 +0.544 +0.552
: m_bb: +0.024 +0.152 -0.027 +0.313 +1.000 +0.445 +0.333
: m_wbb: +0.512 +0.674 +0.009 +0.544 +0.445 +1.000 +0.915
: m_wwbb: +0.533 +0.668 +0.021 +0.552 +0.333 +0.915 +1.000
: ----------------------------------------------------------------
DataSetInfo : Correlation matrix (Background):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.808 +0.022 +0.150 +0.028 +0.407 +0.415
: m_jjj: +0.808 +1.000 +0.041 +0.206 +0.177 +0.569 +0.547
: m_lv: +0.022 +0.041 +1.000 +0.139 +0.037 +0.081 +0.085
: m_jlv: +0.150 +0.206 +0.139 +1.000 +0.309 +0.607 +0.557
: m_bb: +0.028 +0.177 +0.037 +0.309 +1.000 +0.625 +0.447
: m_wbb: +0.407 +0.569 +0.081 +0.607 +0.625 +1.000 +0.884
: m_wwbb: +0.415 +0.547 +0.085 +0.557 +0.447 +0.884 +1.000
: ----------------------------------------------------------------
DataSetFactory : [dataset] :
:
Factory : Booking method: DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "G" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|1|7" [The Layout of the input]
: BatchLayout: "1|128|7" [The Layout of the batch]
: Layout: "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
DNN_CPU : [dataset] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
: Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 512
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 64) 4160
dense_3 (Dense) (None, 64) 4160
dense_4 (Dense) (None, 2) 130
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Factory : Booking method: PyKeras
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Loading Keras Model
: Loaded model from file: model_higgs.h5
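The model_higgs.h5 file loaded here is created earlier in the script. The summary printed above (four Dense layers of width 64 on 7 inputs, a 2-node output, 13122 parameters in total) corresponds to a plain sequential Keras model; below is a minimal sketch that reproduces those parameter counts (the activation, loss, and optimizer choices are illustrative assumptions, not necessarily the tutorial's exact ones):

from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Four hidden Dense(64) layers on 7 inputs and a 2-node output,
# matching the parameter counts in the summary above
model = Sequential([
    Dense(64, activation="relu", input_dim=7),
    Dense(64, activation="relu"),
    Dense(64, activation="relu"),
    Dense(64, activation="relu"),
    Dense(2, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.save("model_higgs.h5")  # the file PyKeras loads above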
Factory : Train all methods
Factory : [dataset] : Create Transformation "I" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0318 0.65629 [ 0.15106 16.132 ]
: m_jjj: 1.0217 0.37420 [ 0.34247 8.9401 ]
: m_lv: 1.0507 0.16678 [ 0.26679 3.6823 ]
: m_jlv: 1.0161 0.40288 [ 0.38441 6.5831 ]
: m_bb: 0.97707 0.53961 [ 0.080986 8.2551 ]
: m_wbb: 1.0358 0.36856 [ 0.38503 6.4013 ]
: m_wwbb: 0.96265 0.31608 [ 0.43228 4.5350 ]
: -----------------------------------------------------------
: Ranking input variables (method unspecific)...
IdTransformation : Ranking result (top variable is best ranked)
: -------------------------------
: Rank : Variable : Separation
: -------------------------------
: 1 : m_bb : 9.511e-02
: 2 : m_wbb : 4.268e-02
: 3 : m_wwbb : 4.178e-02
: 4 : m_jjj : 2.825e-02
: 5 : m_jlv : 1.999e-02
: 6 : m_jj : 3.834e-03
: 7 : m_lv : 3.699e-03
: -------------------------------
Factory : Train method: Likelihood for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Likelihood ] :
:
: --- Short description:
:
: The maximum-likelihood classifier models the data with probability
: density functions (PDF) reproducing the signal and background
: distributions of the input variables. Correlations among the
: variables are ignored.
:
: --- Performance optimisation:
:
: Required for good performance are decorrelated input variables
: (PCA transformation via the option "VarTransform=Decorrelate"
: may be tried). Irreducible non-linear correlations may be reduced
: by precombining strongly correlated input variables, or by simply
: removing one of the variables.
:
: --- Performance tuning via configuration options:
:
: High fidelity PDF estimates are mandatory, i.e., sufficient training
: statistics is required to populate the tails of the distributions
: It would be a surprise if the default Spline or KDE kernel parameters
: provide a satisfying fit to the data. The user is advised to properly
: tune the events per bin and smooth options in the spline cases
: individually per variable. If the KDE kernel is used, the adaptive
: Gaussian kernel may lead to artefacts, so please always also try
: the non-adaptive one.
:
: All tuning parameters must be adjusted individually for each input
: variable!
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
: Filling reference histograms
: Building PDF out of reference histograms
: Elapsed time for training with 14000 events: 0.118 sec
Likelihood : [dataset] : Evaluation of Likelihood on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.0218 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Likelihood.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_Likelihood/Likelihood
Factory : Training finished
:
Factory : Train method: Fisher for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Fisher ] :
:
: --- Short description:
:
: Fisher discriminants select events by distinguishing the mean
: values of the signal and background distributions in a trans-
: formed variable space where linear correlations are removed.
:
: (More precisely: the "linear discriminator" determines
: an axis in the (correlated) hyperspace of the input
: variables such that, when projecting the output classes
: (signal and background) upon this axis, they are pushed
: as far as possible away from each other, while events
: of a same class are confined in a close vicinity. The
: linearity property of this classifier is reflected in the
: metric with which "far apart" and "close vicinity" are
: determined: the covariance matrix of the discriminating
: variable space.)
:
: --- Performance optimisation:
:
: Optimal performance for Fisher discriminants is obtained for
: linearly correlated Gaussian-distributed variables. Any deviation
: from this ideal reduces the achievable separation power. In
: particular, no discrimination at all is achieved for a variable
: that has the same sample mean for signal and background, even if
: the shapes of the distributions are very different. Thus, Fisher
: discriminants often benefit from suitable transformations of the
: input variables. For example, if a variable x in [-1,1] has a
: a parabolic signal distributions, and a uniform background
: distributions, their mean value is zero in both cases, leading
: to no separation. The simple transformation x -> |x| renders this
: variable powerful for the use in a Fisher discriminant.
:
: --- Performance tuning via configuration options:
:
: <None>
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
Fisher : Results for Fisher coefficients:
: -----------------------
: Variable: Coefficient:
: -----------------------
: m_jj: -0.051
: m_jjj: +0.192
: m_lv: +0.045
: m_jlv: +0.059
: m_bb: -0.211
: m_wbb: +0.549
: m_wwbb: -0.778
: (offset): +0.136
: -----------------------
: Elapsed time for training with 14000 events: 0.0109 sec
Fisher : [dataset] : Evaluation of Fisher on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.00391 sec
: <CreateMVAPdfs> Separation from histogram (PDF): 0.090 (0.000)
: Dataset[dataset] : Evaluation of Fisher on training sample
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Fisher.class.C
Factory : Training finished
:
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 7000 bkg: 7000
: #events: (unweighted) sig: 7000 bkg: 7000
: Training 200 Decision Trees ... patience please
: Elapsed time for training with 14000 events: 0.687 sec
BDT : [dataset] : Evaluation of BDT on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.111 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_BDT.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: DNN_CPU for Classification
:
: Preparing the Gaussian transformation...
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: Start of deep neural network training on CPU using MT, nthreads = 1
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 5 Input = ( 1, 1, 7 ) Batch size = 128 Loss function = C
Layer 0 DENSE Layer: ( Input = 7 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 4 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 128 , 1 ) Activation Function = Identity
: Using 11200 events for training and 2800 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.685306
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.645862 0.615908 0.58776 0.0469368 20590.8 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.602635 0.608921 0.588551 0.0468746 20558.4 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.583226 0.590212 0.588226 0.0469469 20573.5 0
: 4 | 0.57685 0.591347 0.588506 0.0468732 20560 1
: 5 | 0.572273 0.596372 0.589344 0.0469016 20529.4 2
: 6 Minimum Test error found - save the configuration
: 6 | 0.568531 0.58526 0.589624 0.0471094 20526.7 0
: 7 | 0.56418 0.587706 0.589559 0.0469499 20523.1 1
: 8 | 0.561854 0.587134 0.59021 0.0469524 20498.6 2
: 9 | 0.559142 0.589072 0.589883 0.0469815 20512 3
: 10 Minimum Test error found - save the configuration
: 10 | 0.55582 0.580749 0.592523 0.0476445 20437.6 0
: 11 | 0.553521 0.588893 0.591513 0.047068 20453.8 1
: 12 Minimum Test error found - save the configuration
: 12 | 0.55191 0.580209 0.591182 0.0472738 20474.1 0
: 13 | 0.549158 0.587211 0.591571 0.0471624 20455.2 1
: 14 | 0.550706 0.588907 0.5927 0.0471519 20412.5 2
: 15 | 0.548672 0.583889 0.592375 0.047186 20426 3
: 16 | 0.544851 0.583737 0.592677 0.0472157 20415.8 4
: 17 | 0.543979 0.587281 0.594673 0.0473263 20345.4 5
: 18 | 0.542021 0.583335 0.59226 0.0472464 20432.5 6
: 19 | 0.540147 0.594703 0.592933 0.0473072 20409.6 7
: 20 | 0.540081 0.588194 0.593234 0.0472972 20398 8
:
: Elapsed time for training with 14000 events: 11.9 sec
: Evaluate deep neural network on CPU using batches with size = 128
:
DNN_CPU : [dataset] : Evaluation of DNN_CPU on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.248 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: PyKeras for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ PyKeras ] :
:
: Keras is a high-level API for the Theano and Tensorflow packages.
: This method wraps the training and predictions steps of the Keras
: Python package for TMVA, so that dataloading, preprocessing and
: evaluation can be done within the TMVA system. To use this Keras
: interface, you have to generate a model with Keras first. Then,
: this model can be loaded and trained in TMVA.
:
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
: Split TMVA training data in 11200 training events and 2800 validation events
: Training Model Summary
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 512
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 64) 4160
dense_3 (Dense) (None, 64) 4160
dense_4 (Dense) (None, 2) 130
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
Epoch 1/20
Epoch 1: val_loss improved from inf to 0.64976, saving model to trained_model_higgs.h5
112/112 [==============================] - 1s 4ms/step - loss: 0.6676 - accuracy: 0.5863 - val_loss: 0.6498 - val_accuracy: 0.6375
Epoch 2/20
Epoch 2: val_loss improved from 0.64976 to 0.62901, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6412 - accuracy: 0.6323 - val_loss: 0.6290 - val_accuracy: 0.6439
Epoch 3/20
Epoch 3: val_loss did not improve from 0.62901
112/112 [==============================] - 0s 2ms/step - loss: 0.6267 - accuracy: 0.6520 - val_loss: 0.6325 - val_accuracy: 0.6393
Epoch 4/20
Epoch 4: val_loss improved from 0.62901 to 0.61897, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6190 - accuracy: 0.6580 - val_loss: 0.6190 - val_accuracy: 0.6575
Epoch 5/20
Epoch 5: val_loss improved from 0.61897 to 0.61809, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6122 - accuracy: 0.6635 - val_loss: 0.6181 - val_accuracy: 0.6525
Epoch 6/20
Epoch 6: val_loss improved from 0.61809 to 0.61109, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6074 - accuracy: 0.6682 - val_loss: 0.6111 - val_accuracy: 0.6646
Epoch 7/20
Epoch 7: val_loss improved from 0.61109 to 0.60619, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6009 - accuracy: 0.6738 - val_loss: 0.6062 - val_accuracy: 0.6661
Epoch 8/20
Epoch 8: val_loss improved from 0.60619 to 0.60221, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6007 - accuracy: 0.6720 - val_loss: 0.6022 - val_accuracy: 0.6689
Epoch 9/20
Epoch 9: val_loss improved from 0.60221 to 0.60043, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5980 - accuracy: 0.6742 - val_loss: 0.6004 - val_accuracy: 0.6718
Epoch 10/20
Epoch 10: val_loss did not improve from 0.60043
112/112 [==============================] - 0s 2ms/step - loss: 0.6010 - accuracy: 0.6709 - val_loss: 0.6281 - val_accuracy: 0.6518
Epoch 11/20
Epoch 11: val_loss improved from 0.60043 to 0.59728, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5945 - accuracy: 0.6737 - val_loss: 0.5973 - val_accuracy: 0.6757
Epoch 12/20
Epoch 12: val_loss improved from 0.59728 to 0.59115, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5949 - accuracy: 0.6721 - val_loss: 0.5911 - val_accuracy: 0.6839
Epoch 13/20
Epoch 13: val_loss did not improve from 0.59115
112/112 [==============================] - 0s 2ms/step - loss: 0.5928 - accuracy: 0.6742 - val_loss: 0.5962 - val_accuracy: 0.6732
Epoch 14/20
Epoch 14: val_loss did not improve from 0.59115
112/112 [==============================] - 0s 2ms/step - loss: 0.5882 - accuracy: 0.6789 - val_loss: 0.5928 - val_accuracy: 0.6761
Epoch 15/20
Epoch 15: val_loss did not improve from 0.59115
112/112 [==============================] - 0s 2ms/step - loss: 0.5888 - accuracy: 0.6796 - val_loss: 0.5960 - val_accuracy: 0.6782
Epoch 16/20
Epoch 16: val_loss improved from 0.59115 to 0.58522, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5853 - accuracy: 0.6811 - val_loss: 0.5852 - val_accuracy: 0.6811
Epoch 17/20
Epoch 17: val_loss did not improve from 0.58522
112/112 [==============================] - 0s 2ms/step - loss: 0.5820 - accuracy: 0.6862 - val_loss: 0.5902 - val_accuracy: 0.6796
Epoch 18/20
Epoch 18: val_loss did not improve from 0.58522
112/112 [==============================] - 0s 2ms/step - loss: 0.5821 - accuracy: 0.6846 - val_loss: 0.5980 - val_accuracy: 0.6800
Epoch 19/20
Epoch 19: val_loss did not improve from 0.58522
112/112 [==============================] - 0s 2ms/step - loss: 0.5802 - accuracy: 0.6841 - val_loss: 0.5931 - val_accuracy: 0.6732
Epoch 20/20
Epoch 20: val_loss did not improve from 0.58522
112/112 [==============================] - 0s 2ms/step - loss: 0.5845 - accuracy: 0.6822 - val_loss: 0.5929 - val_accuracy: 0.6818
: Getting training history for item:0 name = 'loss'
: Getting training history for item:1 name = 'accuracy'
: Getting training history for item:2 name = 'val_loss'
: Getting training history for item:3 name = 'val_accuracy'
: Elapsed time for training with 14000 events: 6.49 sec
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_higgs.h5
PyKeras : [dataset] : Evaluation of PyKeras on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.28 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_PyKeras.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
Likelihood : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Delta Separation
: -------------------------------------
: 1 : m_bb : 4.061e-02
: 2 : m_wbb : 3.765e-02
: 3 : m_wwbb : 3.119e-02
: 4 : m_jj : -1.589e-03
: 5 : m_jjj : -2.901e-03
: 6 : m_lv : -7.919e-03
: 7 : m_jlv : -8.293e-03
: -------------------------------------
Fisher : Ranking result (top variable is best ranked)
: ---------------------------------
: Rank : Variable : Discr. power
: ---------------------------------
: 1 : m_bb : 1.279e-02
: 2 : m_wwbb : 9.131e-03
: 3 : m_wbb : 2.668e-03
: 4 : m_jlv : 9.145e-04
: 5 : m_jjj : 1.769e-04
: 6 : m_lv : 6.617e-05
: 7 : m_jj : 6.707e-06
: ---------------------------------
BDT : Ranking result (top variable is best ranked)
: ----------------------------------------
: Rank : Variable : Variable Importance
: ----------------------------------------
: 1 : m_bb : 2.089e-01
: 2 : m_wwbb : 1.673e-01
: 3 : m_wbb : 1.568e-01
: 4 : m_jlv : 1.560e-01
: 5 : m_jjj : 1.421e-01
: 6 : m_jj : 1.052e-01
: 7 : m_lv : 6.369e-02
: ----------------------------------------
: No variable ranking supplied by classifier: DNN_CPU
: No variable ranking supplied by classifier: PyKeras
TH1.Print Name = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 11.2554
TH1.Print Name = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 11.799
TH1.Print Name = TrainingHistory_PyKeras_'accuracy', Entries= 0, Total sum= 13.3479
TH1.Print Name = TrainingHistory_PyKeras_'loss', Entries= 0, Total sum= 12.0481
TH1.Print Name = TrainingHistory_PyKeras_'val_accuracy', Entries= 0, Total sum= 13.3368
TH1.Print Name = TrainingHistory_PyKeras_'val_loss', Entries= 0, Total sum= 12.1292
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
: Reading weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
Factory : Test all methods
Factory : Test method: Likelihood for Classification performance
:
Likelihood : [dataset] : Evaluation of Likelihood on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0108 sec
Factory : Test method: Fisher for Classification performance
:
Fisher : [dataset] : Evaluation of Fisher on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.00343 sec
: Dataset[dataset] : Evaluation of Fisher on testing sample
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0457 sec
Factory : Test method: DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016289 0.99263 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
DNN_CPU : [dataset] : Evaluation of DNN_CPU on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0995 sec
Factory : Test method: PyKeras for Classification performance
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_higgs.h5
PyKeras : [dataset] : Evaluation of PyKeras on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.171 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: Likelihood
:
Likelihood : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_Likelihood : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: Fisher
:
Fisher : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Also filling probability and rarity histograms (on request)...
TFHandler_Fisher : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_BDT : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: DNN_CPU
:
DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0053380 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016289 0.99263 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: PyKeras
:
PyKeras : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_PyKeras : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset DNN_CPU : 0.762
: dataset BDT : 0.754
: dataset PyKeras : 0.750
: dataset Likelihood : 0.699
: dataset Fisher : 0.642
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset DNN_CPU : 0.137 (0.139) 0.406 (0.439) 0.676 (0.706)
: dataset BDT : 0.098 (0.099) 0.393 (0.402) 0.657 (0.681)
: dataset PyKeras : 0.115 (0.109) 0.409 (0.414) 0.660 (0.668)
: dataset Likelihood : 0.070 (0.075) 0.356 (0.363) 0.581 (0.597)
: dataset Fisher : 0.015 (0.015) 0.121 (0.131) 0.487 (0.506)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 6000 events
:
Dataset:dataset : Created tree 'TrainTree' with 14000 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
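Once the run above completes, the ROC integrals from the evaluation summary can also be drawn as curves. A minimal sketch, assuming the factory and loader objects constructed in the walkthrough below are still in scope:

# Draw the ROC curves of all booked methods on a single canvas;
# TMVA.Factory.GetROCCurve returns a TCanvas
c = factory.GetROCCurve(loader)
c.Draw()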
## Declare Factory
## Create the Factory class. Later you can choose the methods
## whose performance you'd like to investigate.
## The factory is the major TMVA object you have to interact with. Here is the list of parameters you need to pass:
## - The first argument is the base name of all the output
## weight files in the directory weight/ that will be created with the
## method parameters.
## - The second argument is the output file for the training results.
## - The third argument is a string option defining some general configuration for the TMVA session.
## For example, all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in the option string.
import ROOT
import os
TMVA = ROOT.TMVA
TFile = ROOT.TFile
# options to control used methods
useLikelihood = True  # likelihood based discriminant
useLikelihoodKDE = False  # likelihood based discriminant with kernel density estimation
useFisher = True  # Fisher discriminant
useMLP = False # Multi Layer Perceptron (old TMVA NN implementation)
useBDT = True # Boosted Decision Tree
useDL = True # TMVA Deep learning ( CPU or GPU)
useKeras = True # Use Keras Deep Learning via PyMVA
if ROOT.gSystem.GetFromPipe("root-config --has-tmva-pymva") == "yes":
    TMVA.PyMethodBase.PyInitialize()
else:
    useKeras = False  # cannot use Keras if PyMVA is not available
if useKeras:
    try:
        import tensorflow
    except ImportError:
        ROOT.Warning("TMVA_Higgs_Classification", "Skip using Keras since tensorflow is not available")
        useKeras = False
outputFile = TFile.Open("Higgs_ClassificationOutput.root", "RECREATE")
factory = TMVA.Factory(
    "TMVA_Higgs_Classification", outputFile, V=False, ROC=True, Silent=False, Color=True, AnalysisType="Classification"
)
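# The keyword arguments above map one-to-one onto TMVA's classic option string;
# the equivalent string form would be (a sketch, same settings as above):
# factory = TMVA.Factory("TMVA_Higgs_Classification", outputFile,
#                        "!V:ROC:!Silent:Color:AnalysisType=Classification")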
## Setup Dataset(s)
# Define now input data file and signal and background trees
inputFileName = "Higgs_data.root"
inputFileLink = "http://root.cern.ch/files/" + inputFileName
if ROOT.gSystem.AccessPathName(inputFileName):
    # file does not exist locally: download it into the local cache directory
    ROOT.Info("TMVA_Higgs_Classification", "Download Higgs_data.root file")
    TFile.SetCacheFileDir(".")
    inputFile = TFile.Open(inputFileLink, "CACHEREAD")
    if not inputFile:
        raise FileNotFoundError("Input file cannot be downloaded - exit")
else:
    inputFile = TFile.Open(inputFileName)
# --- Register the training and test trees
signalTree = inputFile.Get("sig_tree")
backgroundTree = inputFile.Get("bkg_tree")
signalTree.Print()
## Declare DataLoader(s)
# The next step is to declare the DataLoader class that deals with input variables
# Define the input variables that shall be used for the MVA training
# note that you may also use variable expressions, which can be parsed by TTree::Draw("expression")
# (a sketch of such an expression follows the list of variables below)
loader = TMVA.DataLoader("dataset")
loader.AddVariable("m_jj")
loader.AddVariable("m_jjj")
loader.AddVariable("m_lv")
loader.AddVariable("m_jlv")
loader.AddVariable("m_bb")
loader.AddVariable("m_wbb")
loader.AddVariable("m_wwbb")
# We set now the input data trees in the TMVA DataLoader class
# global event weights per tree (see below for setting event-wise weights)
signalWeight = 1.0
backgroundWeight = 1.0
# You can add an arbitrary number of signal or background trees
loader.AddSignalTree(signalTree, signalWeight)
loader.AddBackgroundTree(backgroundTree, backgroundWeight)
# Set individual event weights (the variables must exist in the original TTree)
# for signal    : loader.SetSignalWeightExpression("weight1*weight2")
# for background: loader.SetBackgroundWeightExpression("weight1*weight2")
# Apply additional cuts on the signal and background samples (can be different)
mycuts = ROOT.TCut("") # for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
mycutb = ROOT.TCut("") # for example: TCut mycutb = "abs(var1)<0.5";
# Tell the factory how to use the training and testing events
#
# If no numbers of events are given, half of the events in the tree are used
# for training, and the other half for testing:
# loader.PrepareTrainingAndTestTree(mycuts, "SplitMode=Random:!V")
# To also specify the number of testing events, use:
loader.PrepareTrainingAndTestTree(
    mycuts, mycutb, nTrain_Signal=7000, nTrain_Background=7000, SplitMode="Random", NormMode="NumEvents", V=False
)
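# the same call in TMVA's classic string-option form (a sketch, same settings as above):
# loader.PrepareTrainingAndTestTree(mycuts, mycutb,
#     "nTrain_Signal=7000:nTrain_Background=7000:SplitMode=Random:NormMode=NumEvents:!V")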
## Booking Methods
# Here we book the TMVA methods: a Likelihood discriminant, optionally a Likelihood based on KDE (Kernel Density Estimation),
# a Fisher discriminant, a BDT and, optionally, a shallow neural network (MLP)
# Likelihood ("naive Bayes estimator")
if useLikelihood:
    factory.BookMethod(
        loader,
        TMVA.Types.kLikelihood,
        "Likelihood",
        H=True,
        V=False,
        TransformOutput=True,
        PDFInterpol="Spline2:NSmoothSig[0]=20:NSmoothBkg[0]=20:NSmoothBkg[1]=10",
        NSmooth=1,
        NAvEvtPerBin=50,
    )
# Use a kernel density estimator to approximate the PDFs
if useLikelihoodKDE:
    factory.BookMethod(
        loader,
        TMVA.Types.kLikelihood,
        "LikelihoodKDE",
        H=False,
        V=False,
        TransformOutput=False,
        PDFInterpol="KDE",
        KDEtype="Gauss",
        KDEiter="Adaptive",
        KDEFineFactor=0.3,
        KDEborder=None,
        NAvEvtPerBin=50,
    )
# Fisher discriminant (same as LD)
if useFisher:
    factory.BookMethod(
        loader,
        TMVA.Types.kFisher,
        "Fisher",
        H=True,
        V=False,
        Fisher=True,
        VarTransform=None,
        CreateMVAPdfs=True,
        PDFInterpolMVAPdf="Spline2",
        NbinsMVAPdf=50,
        NsmoothMVAPdf=10,
    )
# Boosted Decision Trees
if useBDT:
    factory.BookMethod(
        loader,
        TMVA.Types.kBDT,
        "BDT",
        V=False,
        NTrees=200,
        MinNodeSize="2.5%",
        MaxDepth=2,
        BoostType="AdaBoost",
        AdaBoostBeta=0.5,
        UseBaggedBoost=True,
        BaggedSampleFraction=0.5,
        SeparationType="GiniIndex",
        nCuts=20,
    )
# Multi-Layer Perceptron (Neural Network)
if useMLP:
    factory.BookMethod(
        loader,
        TMVA.Types.kMLP,
        "MLP",
        H=False,
        V=False,
        NeuronType="tanh",
        VarTransform="N",
        NCycles=100,
        HiddenLayers="N+5",
        TestRate=5,
        UseRegulator=False,
    )
## Booking Deep Neural Network
## Here we book the new DNN of TMVA if we have support in ROOT. We will use the GPU version if ROOT was built with GPU support.
# Here we define the option string for building the Deep Neural network model.
#### 1. Define DNN layout
# The DNN configuration is defined using a string. Note that whitespaces between characters are not allowed.
# We define first the DNN layout:
# - **input layout** : this defines the input data format for the DNN as ``input depth | height | width``.
# In case of a dense layer as first layer the input layout should be ``1 | 1 | number of input variables`` (features)
# - **batch layout** : this defines the format of the input batch. It is related to the input layout but not identical.
# If the first layer is dense it should be ``1 | batch size | number of variables`` (features)
# *(note the use of the character `|` as separator of input parameters for the DNN layout)*
# note that in case of only dense layers the input layout could be omitted, but it is required when defining more
# complex architectures
# - **layer layout** : a string defining the layer architecture. The syntax is
# - layer type (e.g. DENSE, CONV, RNN)
# - layer parameters (e.g. number of units)
# - activation function (e.g. TANH, RELU, ...)
# *(the different layers are separated by ``","``)*; a sketch of the resulting strings follows this list
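# a minimal sketch of the three layout strings for this tutorial's 7-feature dense network
# (the same values are passed as keyword arguments in the booking below):
# InputLayout = "1|1|7"    # input depth | height | width
# BatchLayout = "1|128|7"  # 1 | batch size | number of variables
# Layout = "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR"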
#### 2. Define Training Strategy
# We define here the training strategy parameters for the DNN. The parameters are separated by the ``","`` separator.
# One can then concatenate different training strategies with different parameters. The training strategies are separated by
# the ``"|"`` separator.
# - Optimizer
# - Learning rate
# - Momentum (valid for SGD and RMSPROP)
# - Regularization and Weight Decay
# - Dropout
# - Max number of epochs
# - Convergence steps: if the test error does not decrease for this many steps, the training stops
# - Batch size (this value must be the same as the batch size specified in the batch layout)
# - Test Repetitions (the interval at which the test error is computed)
#### 3. Define general DNN options
# We define the general DNN options by concatenating the previously defined layout and training strategy into the final string.
# Note we use the ``":"`` separator to separate the different higher level options, as in the other TMVA methods.
# In addition to the input layout, batch layout and training strategy we now add:
# - Type of Loss function (e.g. CROSSENTROPY)
# - Weight Initialization (e.g. XAVIER, XAVIERUNIFORM, NORMAL)
# - Variable Transformation
# - Type of Architecture (e.g. CPU, GPU, Standard)
# We can then book the DL method using the built option string
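# a minimal sketch of the fully concatenated option string (the booking below passes the
# same options as Python keyword arguments instead, with the training strategy built first):
# dnnOptions = ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:"
#               "WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:"
#               "Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:"
#               "TrainingStrategy=LearningRate=1e-3,Momentum=0.9,...:Architecture=CPU")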
if useDL:
    useDLGPU = ROOT.gSystem.GetFromPipe("root-config --has-tmva-gpu") == "yes"
    # Define DNN layout
    # Define Training strategies
    # one can concatenate several training strategies
    training1 = ROOT.TString(
        "LearningRate=1e-3,Momentum=0.9,"
        "ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,"
        "MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
        "Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,"  # ADAM default parameters
        "DropConfig=0.0+0.0+0.0+0."
    )
    # training2 = ROOT.TString("LearningRate=1e-3,Momentum=0.9,"
    #                          "ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,"
    #                          "MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
    #                          "Optimizer=SGD,DropConfig=0.0+0.0+0.0+0.")
    # General Options.
    dnnMethodName = ROOT.TString("DNN_CPU")
    if useDLGPU:
        arch = "GPU"
        dnnMethodName = ROOT.TString("DNN_GPU")
    else:
        arch = "CPU"
    factory.BookMethod(
        loader,
        TMVA.Types.kDL,
        dnnMethodName,
        H=False,
        V=True,
        ErrorStrategy="CROSSENTROPY",
        VarTransform="G",
        WeightInitialization="XAVIER",
        InputLayout="1|1|7",
        BatchLayout="1|128|7",
        Layout="DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR",
        TrainingStrategy=training1,
        Architecture=arch,
    )
# Keras DL
if useKeras:
    ROOT.Info("TMVA_Higgs_Classification", "Building Deep Learning keras model")
    # create a Keras model with 4 hidden layers of 64 units each and relu activations
    import tensorflow
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.layers import Dense
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=7))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(2, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.001), weighted_metrics=['accuracy'])
    model.save('model_higgs.h5')
    model.summary()
    if not os.path.exists("model_higgs.h5"):
        raise FileNotFoundError("Error creating Keras model file - skip using Keras")
    else:
        # book the PyKeras method only if the Keras model could be created
        ROOT.Info("TMVA_Higgs_Classification", "Booking Deep Learning keras model")
        factory.BookMethod(
            loader,
            TMVA.Types.kPyKeras,
            "PyKeras",
            H=True,
            V=False,
            VarTransform=None,
            FilenameModel="model_higgs.h5",
            FilenameTrainedModel="trained_model_higgs.h5",
            NumEpochs=20,
            BatchSize=100,
            # GpuOptions="allow_growth=True",  # needed for RTX NVidia cards, to avoid TF allocating all GPU memory
        )
## Train Methods
# Here we train all the previously booked methods.
factory.TrainAllMethods()
## Test all methods
# Now we test and evaluate all methods using the test data set
factory.TestAllMethods()
factory.EvaluateAllMethods()
# finally we get the ROC curve for all booked methods and display it
c1 = factory.GetROCCurve(loader)
c1.Draw()
# at the end we close the output file, which contains the evaluation results of all methods and can be used by TMVAGui
# to display additional plots
outputFile.Close()
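# a minimal sketch (assumes an interactive ROOT session): the saved output file can be
# browsed with the TMVA GUI to display the additional plots mentioned above
# ROOT.TMVA.TMVAGui("Higgs_ClassificationOutput.root")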
Author
Harshal Shende

Definition in file TMVA_Higgs_Classification.py.