******************************************************************************
*Tree :sig_tree : tree *
*Entries : 10000 : Total = 1177229 bytes File Size = 785298 *
* : : Tree compression factor = 1.48 *
******************************************************************************
*Br 0 :Type : Type/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 307 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 130.54 *
*............................................................................*
*Br 1 :lepton_pT : lepton_pT/F *
*Entries : 10000 : Total Size= 40581 bytes File Size = 30464 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 2 :lepton_eta : lepton_eta/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 28650 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 3 :lepton_phi : lepton_phi/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 30508 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 4 :missing_energy_magnitude : missing_energy_magnitude/F *
*Entries : 10000 : Total Size= 40656 bytes File Size = 35749 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.12 *
*............................................................................*
*Br 5 :missing_energy_phi : missing_energy_phi/F *
*Entries : 10000 : Total Size= 40626 bytes File Size = 36766 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.09 *
*............................................................................*
*Br 6 :jet1_pt : jet1_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 32298 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 7 :jet1_eta : jet1_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28467 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.41 *
*............................................................................*
*Br 8 :jet1_phi : jet1_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30399 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 9 :jet1_b-tag : jet1_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5087 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.88 *
*............................................................................*
*Br 10 :jet2_pt : jet2_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 31561 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.27 *
*............................................................................*
*Br 11 :jet2_eta : jet2_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28616 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.40 *
*............................................................................*
*Br 12 :jet2_phi : jet2_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30547 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 13 :jet2_b-tag : jet2_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 5031 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 7.97 *
*............................................................................*
*Br 14 :jet3_pt : jet3_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 30642 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 15 :jet3_eta : jet3_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 28955 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.38 *
*............................................................................*
*Br 16 :jet3_phi : jet3_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30433 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.32 *
*............................................................................*
*Br 17 :jet3_b-tag : jet3_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4879 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.22 *
*............................................................................*
*Br 18 :jet4_pt : jet4_pt/F *
*Entries : 10000 : Total Size= 40571 bytes File Size = 29189 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 19 :jet4_eta : jet4_eta/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 29311 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.37 *
*............................................................................*
*Br 20 :jet4_phi : jet4_phi/F *
*Entries : 10000 : Total Size= 40576 bytes File Size = 30525 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.31 *
*............................................................................*
*Br 21 :jet4_b-tag : jet4_b-tag/F *
*Entries : 10000 : Total Size= 40586 bytes File Size = 4725 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 8.48 *
*............................................................................*
*Br 22 :m_jj : m_jj/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 34991 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.15 *
*............................................................................*
*Br 23 :m_jjj : m_jjj/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34460 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 24 :m_lv : m_lv/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 32232 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.24 *
*............................................................................*
*Br 25 :m_jlv : m_jlv/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34598 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 26 :m_bb : m_bb/F *
*Entries : 10000 : Total Size= 40556 bytes File Size = 35012 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.14 *
*............................................................................*
*Br 27 :m_wbb : m_wbb/F *
*Entries : 10000 : Total Size= 40561 bytes File Size = 34493 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
*Br 28 :m_wwbb : m_wwbb/F *
*Entries : 10000 : Total Size= 40566 bytes File Size = 34410 *
*Baskets : 1 : Basket Size= 1500672 bytes Compression= 1.16 *
*............................................................................*
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 10000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 10000 events
Factory : Booking method: Likelihood
:
Factory : Booking method: Fisher
:
Factory : Booking method: BDT
:
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 7000
: Signal -- testing events : 3000
: Signal -- training and testing events: 10000
: Background -- training events : 7000
: Background -- testing events : 3000
: Background -- training and testing events: 10000
:
DataSetInfo : Correlation matrix (Signal):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.774 -0.004 +0.096 +0.024 +0.512 +0.533
: m_jjj: +0.774 +1.000 -0.010 +0.073 +0.152 +0.674 +0.668
: m_lv: -0.004 -0.010 +1.000 +0.121 -0.027 +0.009 +0.021
: m_jlv: +0.096 +0.073 +0.121 +1.000 +0.313 +0.544 +0.552
: m_bb: +0.024 +0.152 -0.027 +0.313 +1.000 +0.445 +0.333
: m_wbb: +0.512 +0.674 +0.009 +0.544 +0.445 +1.000 +0.915
: m_wwbb: +0.533 +0.668 +0.021 +0.552 +0.333 +0.915 +1.000
: ----------------------------------------------------------------
DataSetInfo : Correlation matrix (Background):
: ----------------------------------------------------------------
: m_jj m_jjj m_lv m_jlv m_bb m_wbb m_wwbb
: m_jj: +1.000 +0.808 +0.022 +0.150 +0.028 +0.407 +0.415
: m_jjj: +0.808 +1.000 +0.041 +0.206 +0.177 +0.569 +0.547
: m_lv: +0.022 +0.041 +1.000 +0.139 +0.037 +0.081 +0.085
: m_jlv: +0.150 +0.206 +0.139 +1.000 +0.309 +0.607 +0.557
: m_bb: +0.028 +0.177 +0.037 +0.309 +1.000 +0.625 +0.447
: m_wbb: +0.407 +0.569 +0.081 +0.607 +0.625 +1.000 +0.884
: m_wwbb: +0.415 +0.547 +0.085 +0.557 +0.447 +0.884 +1.000
: ----------------------------------------------------------------
DataSetFactory : [dataset] :
:
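The Signal and Background correlation tables above can be reproduced offline with NumPy. A minimal sketch, using hypothetical random data in place of the seven high-level mass variables:

```python
import numpy as np

# Hypothetical stand-in for the seven high-level mass variables
# (m_jj, m_jjj, m_lv, m_jlv, m_bb, m_wbb, m_wwbb), one row per event.
rng = np.random.default_rng(0)
events = rng.normal(size=(10000, 7))

# np.corrcoef expects variables in rows, so transpose the event matrix.
corr = np.corrcoef(events.T)

# Like the printed Signal/Background tables: unit diagonal, symmetric.
assert corr.shape == (7, 7)
assert np.allclose(np.diag(corr), 1.0)
assert np.allclose(corr, corr.T)
```

With real events loaded from the trees, the same call reproduces the linear (Pearson) correlations TMVA prints.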
Factory : Booking method: DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "G" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|1|7" [The Layout of the input]
: BatchLayout: "1|128|7" [The Layout of the batch]
: Layout: "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
DNN_CPU : [dataset] : Create Transformation "G" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
: Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 512
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 64) 4160
dense_3 (Dense) (None, 64) 4160
dense_4 (Dense) (None, 2) 130
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
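The parameter counts in the summary follow from (n_in + 1) * n_out per dense layer (weights plus biases). A quick arithmetic check against the printed totals:

```python
def dense_params(n_in, n_out):
    # weight matrix (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

# input width 7, four hidden layers of 64, two-unit output
widths = [7, 64, 64, 64, 64, 2]
layers = [dense_params(a, b) for a, b in zip(widths, widths[1:])]
print(layers)       # [512, 4160, 4160, 4160, 130]
print(sum(layers))  # 13122, matching "Total params" above
```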
Factory : Booking method: PyKeras
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Loading Keras Model
: Loaded model from file: model_higgs.h5
Factory : Train all methods
Factory : [dataset] : Create Transformation "I" with events from all classes.
:
: Transformation, Variable selection :
: Input : variable 'm_jj' <---> Output : variable 'm_jj'
: Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
: Input : variable 'm_lv' <---> Output : variable 'm_lv'
: Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
: Input : variable 'm_bb' <---> Output : variable 'm_bb'
: Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
: Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
TFHandler_Factory : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0318 0.65629 [ 0.15106 16.132 ]
: m_jjj: 1.0217 0.37420 [ 0.34247 8.9401 ]
: m_lv: 1.0507 0.16678 [ 0.26679 3.6823 ]
: m_jlv: 1.0161 0.40288 [ 0.38441 6.5831 ]
: m_bb: 0.97707 0.53961 [ 0.080986 8.2551 ]
: m_wbb: 1.0358 0.36856 [ 0.38503 6.4013 ]
: m_wwbb: 0.96265 0.31608 [ 0.43228 4.5350 ]
: -----------------------------------------------------------
: Ranking input variables (method unspecific)...
IdTransformation : Ranking result (top variable is best ranked)
: -------------------------------
: Rank : Variable : Separation
: -------------------------------
: 1 : m_bb : 9.511e-02
: 2 : m_wbb : 4.268e-02
: 3 : m_wwbb : 4.178e-02
: 4 : m_jjj : 2.825e-02
: 5 : m_jlv : 1.999e-02
: 6 : m_jj : 3.834e-03
: 7 : m_lv : 3.699e-03
: -------------------------------
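The ranking above uses TMVA's separation estimator, roughly <S²> = ½ Σ (s_i − b_i)² / (s_i + b_i) over normalised histogram bins of each variable. A sketch with hypothetical Gaussian samples:

```python
import numpy as np

def separation(sig, bkg, bins=50):
    # Bin both samples on a common range, normalise each to unit sum,
    # then accumulate (s-b)^2 / (2*(s+b)) over bins.
    lo = min(sig.min(), bkg.min())
    hi = max(sig.max(), bkg.max())
    s, _ = np.histogram(sig, bins=bins, range=(lo, hi))
    b, _ = np.histogram(bkg, bins=bins, range=(lo, hi))
    s = s / s.sum()
    b = b / b.sum()
    mask = (s + b) > 0
    return 0.5 * np.sum((s[mask] - b[mask]) ** 2 / (s[mask] + b[mask]))

rng = np.random.default_rng(1)
sig = rng.normal(0.5, 1.0, 10000)  # hypothetical signal sample
bkg = rng.normal(0.0, 1.0, 10000)  # hypothetical background sample
print(separation(sig, bkg))  # > 0; identical samples give exactly 0
```

Variables with the largest shape difference between signal and background (here m_bb) rank highest.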
Factory : Train method: Likelihood for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Likelihood ] :
:
: --- Short description:
:
: The maximum-likelihood classifier models the data with probability
: density functions (PDF) reproducing the signal and background
: distributions of the input variables. Correlations among the
: variables are ignored.
:
: --- Performance optimisation:
:
: Required for good performance are decorrelated input variables
: (PCA transformation via the option "VarTransform=Decorrelate"
: may be tried). Irreducible non-linear correlations may be reduced
: by precombining strongly correlated input variables, or by simply
: removing one of the variables.
:
: --- Performance tuning via configuration options:
:
: High-fidelity PDF estimates are mandatory, i.e., sufficient training
: statistics are required to populate the tails of the distributions.
: It would be a surprise if the default Spline or KDE kernel parameters
: provide a satisfying fit to the data. The user is advised to properly
: tune the events per bin and smooth options in the spline cases
: individually per variable. If the KDE kernel is used, the adaptive
: Gaussian kernel may lead to artefacts, so please always also try
: the non-adaptive one.
:
: All tuning parameters must be adjusted individually for each input
: variable!
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
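As the help box says, the projective likelihood ignores correlations: the joint PDF is a product of per-variable 1D PDFs. A minimal sketch, using raw normalised histograms as a stand-in for TMVA's splined or KDE-smoothed reference PDFs:

```python
import numpy as np

def pdf_from_hist(sample, edges):
    # 1D PDF estimate from a normalised histogram; TMVA fits splines or
    # KDE kernels instead, a raw histogram is the simplest stand-in.
    counts, _ = np.histogram(sample, bins=edges)
    dens = counts / (counts.sum() * np.diff(edges))
    def pdf(x):
        idx = np.clip(np.searchsorted(edges, x) - 1, 0, len(dens) - 1)
        return max(float(dens[idx]), 1e-12)  # floor to avoid zero PDFs
    return pdf

def likelihood_ratio(event, sig_pdfs, bkg_pdfs):
    # Correlations among variables are ignored: the joint PDF is just
    # the product of the per-variable 1D PDFs.
    ls = np.prod([p(x) for p, x in zip(sig_pdfs, event)])
    lb = np.prod([p(x) for p, x in zip(bkg_pdfs, event)])
    return ls / (ls + lb)

rng = np.random.default_rng(3)
edges = np.linspace(-5.0, 6.0, 60)
sig_pdfs = [pdf_from_hist(rng.normal(1.0, 1.0, 10000), edges)]
bkg_pdfs = [pdf_from_hist(rng.normal(0.0, 1.0, 10000), edges)]
print(likelihood_ratio([1.5], sig_pdfs, bkg_pdfs))  # > 0.5: signal-like
```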
: Filling reference histograms
: Building PDF out of reference histograms
: Elapsed time for training with 14000 events: 0.109 sec
Likelihood : [dataset] : Evaluation of Likelihood on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.0202 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Likelihood.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_Likelihood/Likelihood
Factory : Training finished
:
Factory : Train method: Fisher for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ Fisher ] :
:
: --- Short description:
:
: Fisher discriminants select events by distinguishing the mean
: values of the signal and background distributions in a trans-
: formed variable space where linear correlations are removed.
:
: (More precisely: the "linear discriminator" determines
: an axis in the (correlated) hyperspace of the input
: variables such that, when projecting the output classes
: (signal and background) upon this axis, they are pushed
: as far as possible away from each other, while events
: of a same class are confined in a close vicinity. The
: linearity property of this classifier is reflected in the
: metric with which "far apart" and "close vicinity" are
: determined: the covariance matrix of the discriminating
: variable space.)
:
: --- Performance optimisation:
:
: Optimal performance for Fisher discriminants is obtained for
: linearly correlated Gaussian-distributed variables. Any deviation
: from this ideal reduces the achievable separation power. In
: particular, no discrimination at all is achieved for a variable
: that has the same sample mean for signal and background, even if
: the shapes of the distributions are very different. Thus, Fisher
: discriminants often benefit from suitable transformations of the
: input variables. For example, if a variable x in [-1,1] has
: a parabolic signal distribution and a uniform background
: distribution, the mean value is zero in both cases, leading
: to no separation. The simple transformation x -> |x| renders
: this variable powerful for use in a Fisher discriminant.
:
: --- Performance tuning via configuration options:
:
: <None>
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
Fisher : Results for Fisher coefficients:
: -----------------------
: Variable: Coefficient:
: -----------------------
: m_jj: -0.051
: m_jjj: +0.192
: m_lv: +0.045
: m_jlv: +0.059
: m_bb: -0.211
: m_wbb: +0.549
: m_wwbb: -0.778
: (offset): +0.136
: -----------------------
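Coefficients like those above come from the classic Fisher construction: the discriminating axis is w = W⁻¹ (μ_S − μ_B), with W the summed within-class covariance matrix. A sketch on hypothetical 7-variable samples:

```python
import numpy as np

def fisher_coefficients(sig, bkg):
    # Fisher axis w = W^-1 (mu_S - mu_B); W is the summed within-class
    # covariance. TMVA prints one such coefficient per input variable.
    mu_s, mu_b = sig.mean(axis=0), bkg.mean(axis=0)
    w_mat = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)
    return np.linalg.solve(w_mat, mu_s - mu_b)

rng = np.random.default_rng(2)
sig = rng.normal(0.3, 1.0, size=(5000, 7))  # hypothetical signal sample
bkg = rng.normal(0.0, 1.0, size=(5000, 7))  # hypothetical background
w = fisher_coefficients(sig, bkg)
assert w.shape == (7,)
assert (sig @ w).mean() > (bkg @ w).mean()  # projections separate means
```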
: Elapsed time for training with 14000 events: 0.0107 sec
Fisher : [dataset] : Evaluation of Fisher on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.00177 sec
: <CreateMVAPdfs> Separation from histogram (PDF): 0.090 (0.000)
: Dataset[dataset] : Evaluation of Fisher on training sample
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Fisher.class.C
Factory : Training finished
:
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 7000 bkg: 7000
: #events: (unweighted) sig: 7000 bkg: 7000
: Training 200 Decision Trees ... patience please
: Elapsed time for training with 14000 events: 0.677 sec
BDT : [dataset] : Evaluation of BDT on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.109 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_BDT.class.C
: Higgs_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: DNN_CPU for Classification
:
: Preparing the Gaussian transformation...
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0054053 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: Start of deep neural network training on CPU using MT, nthreads = 1
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0054053 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 5 Input = ( 1, 1, 7 ) Batch size = 128 Loss function = C
Layer 0 DENSE Layer: ( Input = 7 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 128 , 64 ) Activation Function = Tanh
Layer 4 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 128 , 1 ) Activation Function = Identity
: Using 11200 events for training and 2800 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.894274
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.668363 0.619336 0.589905 0.0471462 20517.4 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.60408 0.605112 0.587392 0.0471671 20613.6 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.58487 0.60422 0.588085 0.0472601 20590.8 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.577802 0.591493 0.589767 0.0472807 20527.7 0
: 5 Minimum Test error found - save the configuration
: 5 | 0.571068 0.58658 0.589362 0.0473657 20546.3 0
: 6 | 0.568868 0.591234 0.589673 0.0472371 20529.6 1
: 7 | 0.564611 0.58916 0.59003 0.047269 20517.3 2
: 8 Minimum Test error found - save the configuration
: 8 | 0.56136 0.584442 0.59046 0.0474941 20509.6 0
: 9 | 0.560968 0.58789 0.590441 0.0473446 20504.6 1
: 10 | 0.557515 0.589124 0.590965 0.0473775 20486.1 2
: 11 | 0.554878 0.588683 0.590969 0.0474726 20489.6 3
: 12 | 0.552739 0.596349 0.591442 0.0474607 20471.3 4
: 13 | 0.55213 0.58663 0.592096 0.0475141 20448.7 5
: 14 | 0.551681 0.585618 0.592503 0.0476757 20439.5 6
: 15 | 0.548919 0.584684 0.592416 0.0475691 20438.8 7
: 16 | 0.549402 0.592374 0.593247 0.0475999 20408.8 8
: 17 | 0.549439 0.588678 0.592858 0.0476419 20424.9 9
: 18 | 0.543293 0.586319 0.593227 0.0476441 20411.2 10
: 19 | 0.544377 0.591711 0.593425 0.0476858 20405.4 11
:
: Elapsed time for training with 14000 events: 11.3 sec
: Evaluate deep neural network on CPU using batches with size = 128
:
DNN_CPU : [dataset] : Evaluation of DNN_CPU on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.245 sec
: Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.class.C
Factory : Training finished
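In the epoch table above, the "Conv. Steps" column counts epochs since the last validation-error minimum; training stops once that counter exceeds ConvergenceSteps=10, which is why this run ended at epoch 19, eleven epochs after the epoch-8 minimum. The stopping rule can be sketched as:

```python
def train_with_patience(val_errors, convergence_steps=10):
    # Stop once the validation error has not improved for more than
    # `convergence_steps` consecutive epochs (TMVA's ConvergenceSteps).
    best = float("inf")
    since_best = 0
    for epoch, err in enumerate(val_errors, start=1):
        if err < best:
            best, since_best = err, 0   # "Minimum Test error found"
        else:
            since_best += 1
        if since_best > convergence_steps:
            return epoch, best
    return len(val_errors), best

# Hypothetical validation-error curve: improves early, then plateaus.
errs = [0.62, 0.60, 0.59] + [0.60] * 15
print(train_with_patience(errs))  # (14, 0.59): stops 11 epochs past the best
```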
:
Factory : Train method: PyKeras for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ PyKeras ] :
:
: Keras is a high-level API for the Theano and Tensorflow packages.
: This method wraps the training and prediction steps of the Keras
: Python package for TMVA, so that data loading, preprocessing and
: evaluation can be done within the TMVA system. To use this Keras
: interface, you have to generate a model with Keras first. Then,
: this model can be loaded and trained in TMVA.
:
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
: Split TMVA training data in 11200 training events and 2800 validation events
: Training Model Summary
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 512
dense_1 (Dense) (None, 64) 4160
dense_2 (Dense) (None, 64) 4160
dense_3 (Dense) (None, 64) 4160
dense_4 (Dense) (None, 2) 130
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
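The SaveBestOnly behaviour reported in the epochs below (Keras's ModelCheckpoint with save_best_only=True) amounts to keeping a snapshot only when the monitored validation loss improves. A minimal pure-Python stand-in for that logic:

```python
class SaveBestOnly:
    # Stand-in for ModelCheckpoint(save_best_only=True): keep a snapshot
    # only when the monitored validation loss improves.
    def __init__(self):
        self.best = float("inf")
        self.saved = None

    def on_epoch_end(self, epoch, val_loss, weights):
        if val_loss < self.best:
            self.best = val_loss
            self.saved = (epoch, weights)  # would write the .h5 file here
            return True   # "val_loss improved ... saving model"
        return False      # "val_loss did not improve"

cb = SaveBestOnly()
# Hypothetical val_loss sequence shaped like the first four epochs below.
for epoch, vl in enumerate([0.6595, 0.6395, 0.6264, 0.6288], start=1):
    cb.on_epoch_end(epoch, vl, weights=f"weights@{epoch}")
print(cb.saved)  # (3, 'weights@3'): the epoch-3 snapshot is kept
```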
Epoch 1/20
1/112 [..............................] - ETA: 1:08 - loss: 0.6917 - accuracy: 0.4100␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/112 [=====>........................] - ETA: 0s - loss: 0.6868 - accuracy: 0.5281 ␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
53/112 [=============>................] - ETA: 0s - loss: 0.6801 - accuracy: 0.5543␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
76/112 [===================>..........] - ETA: 0s - loss: 0.6748 - accuracy: 0.5684␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
106/112 [===========================>..] - ETA: 0s - loss: 0.6720 - accuracy: 0.5786
Epoch 1: val_loss improved from inf to 0.65952, saving model to trained_model_higgs.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
112/112 [==============================] - 1s 5ms/step - loss: 0.6715 - accuracy: 0.5796 - val_loss: 0.6595 - val_accuracy: 0.6043
Epoch 2/20
1/112 [..............................] - ETA: 0s - loss: 0.6754 - accuracy: 0.6600␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
32/112 [=======>......................] - ETA: 0s - loss: 0.6607 - accuracy: 0.6000␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
63/112 [===============>..............] - ETA: 0s - loss: 0.6558 - accuracy: 0.6062␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
94/112 [========================>.....] - ETA: 0s - loss: 0.6544 - accuracy: 0.6094
Epoch 2: val_loss improved from 0.65952 to 0.63951, saving model to trained_model_higgs.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
112/112 [==============================] - 0s 2ms/step - loss: 0.6525 - accuracy: 0.6129 - val_loss: 0.6395 - val_accuracy: 0.6404
Epoch 3/20
1/112 [..............................] - ETA: 0s - loss: 0.7118 - accuracy: 0.5100␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
33/112 [=======>......................] - ETA: 0s - loss: 0.6336 - accuracy: 0.6488␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/112 [================>.............] - ETA: 0s - loss: 0.6317 - accuracy: 0.6495␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
96/112 [========================>.....] - ETA: 0s - loss: 0.6343 - accuracy: 0.6457
Epoch 3: val_loss improved from 0.63951 to 0.62635, saving model to trained_model_higgs.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
112/112 [==============================] - 0s 2ms/step - loss: 0.6332 - accuracy: 0.6466 - val_loss: 0.6264 - val_accuracy: 0.6521
Epoch 4/20
1/112 [..............................] - ETA: 0s - loss: 0.6416 - accuracy: 0.6100␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
33/112 [=======>......................] - ETA: 0s - loss: 0.6265 - accuracy: 0.6558␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/112 [================>.............] - ETA: 0s - loss: 0.6316 - accuracy: 0.6494␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
93/112 [=======================>......] - ETA: 0s - loss: 0.6302 - accuracy: 0.6439
Epoch 4: val_loss did not improve from 0.62635
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
112/112 [==============================] - 0s 2ms/step - loss: 0.6292 - accuracy: 0.6475 - val_loss: 0.6288 - val_accuracy: 0.6525
Epoch 5/20
1/112 [..............................] - ETA: 0s - loss: 0.6354 - accuracy: 0.7000␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
32/112 [=======>......................] - ETA: 0s - loss: 0.6357 - accuracy: 0.6438␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
63/112 [===============>..............] - ETA: 0s - loss: 0.6268 - accuracy: 0.6548␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
92/112 [=======================>......] - ETA: 0s - loss: 0.6224 - accuracy: 0.6591
Epoch 5: val_loss did not improve from 0.62635
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
112/112 [==============================] - 0s 2ms/step - loss: 0.6200 - accuracy: 0.6597 - val_loss: 0.6286 - val_accuracy: 0.6529
Epoch 6/20
1/112 [..............................] - ETA: 0s - loss: 0.6519 - accuracy: 0.5900␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
33/112 [=======>......................] - ETA: 0s - loss: 0.6198 - accuracy: 0.6609␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
65/112 [================>.............] - ETA: 0s - loss: 0.6169 - accuracy: 0.6611␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
97/112 [========================>.....] - ETA: 0s - loss: 0.6148 - accuracy: 0.6641
Epoch 6: val_loss improved from 0.62635 to 0.61497, saving model to trained_model_higgs.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
112/112 [==============================] - 0s 2ms/step - loss: 0.6142 - accuracy: 0.6652 - val_loss: 0.6150 - val_accuracy: 0.6636
Epoch 7/20
Epoch 7: val_loss did not improve from 0.61497
112/112 [==============================] - 0s 2ms/step - loss: 0.6101 - accuracy: 0.6646 - val_loss: 0.6252 - val_accuracy: 0.6486
Epoch 8/20
Epoch 8: val_loss improved from 0.61497 to 0.60760, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6048 - accuracy: 0.6698 - val_loss: 0.6076 - val_accuracy: 0.6650
Epoch 9/20
Epoch 9: val_loss improved from 0.60760 to 0.60388, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.6019 - accuracy: 0.6715 - val_loss: 0.6039 - val_accuracy: 0.6593
Epoch 10/20
Epoch 10: val_loss improved from 0.60388 to 0.59962, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5979 - accuracy: 0.6757 - val_loss: 0.5996 - val_accuracy: 0.6686
Epoch 11/20
Epoch 11: val_loss did not improve from 0.59962
112/112 [==============================] - 0s 2ms/step - loss: 0.5960 - accuracy: 0.6787 - val_loss: 0.6199 - val_accuracy: 0.6571
Epoch 12/20
Epoch 12: val_loss improved from 0.59962 to 0.59783, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5949 - accuracy: 0.6764 - val_loss: 0.5978 - val_accuracy: 0.6611
Epoch 13/20
Epoch 13: val_loss did not improve from 0.59783
112/112 [==============================] - 0s 2ms/step - loss: 0.5897 - accuracy: 0.6821 - val_loss: 0.6065 - val_accuracy: 0.6657
Epoch 14/20
Epoch 14: val_loss did not improve from 0.59783
112/112 [==============================] - 0s 2ms/step - loss: 0.5894 - accuracy: 0.6815 - val_loss: 0.6011 - val_accuracy: 0.6679
Epoch 15/20
Epoch 15: val_loss did not improve from 0.59783
112/112 [==============================] - 0s 2ms/step - loss: 0.5870 - accuracy: 0.6840 - val_loss: 0.6081 - val_accuracy: 0.6654
Epoch 16/20
Epoch 16: val_loss improved from 0.59783 to 0.59029, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5848 - accuracy: 0.6827 - val_loss: 0.5903 - val_accuracy: 0.6764
Epoch 17/20
Epoch 17: val_loss improved from 0.59029 to 0.58668, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 2ms/step - loss: 0.5850 - accuracy: 0.6866 - val_loss: 0.5867 - val_accuracy: 0.6793
Epoch 18/20
Epoch 18: val_loss did not improve from 0.58668
112/112 [==============================] - 0s 2ms/step - loss: 0.5794 - accuracy: 0.6889 - val_loss: 0.5907 - val_accuracy: 0.6732
Epoch 19/20
Epoch 19: val_loss did not improve from 0.58668
112/112 [==============================] - 0s 2ms/step - loss: 0.5812 - accuracy: 0.6862 - val_loss: 0.5919 - val_accuracy: 0.6754
Epoch 20/20
Epoch 20: val_loss did not improve from 0.58668
112/112 [==============================] - 0s 2ms/step - loss: 0.5797 - accuracy: 0.6889 - val_loss: 0.6006 - val_accuracy: 0.6725
: Getting training history for item:0 name = 'loss'
: Getting training history for item:1 name = 'accuracy'
: Getting training history for item:2 name = 'val_loss'
: Getting training history for item:3 name = 'val_accuracy'
: Elapsed time for training with 14000 events: 6.05 sec
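The "val_loss improved / did not improve" messages in the epochs above come from a best-model checkpoint callback that saves the network only when the validation loss reaches a new minimum (the behavior of `keras.callbacks.ModelCheckpoint` with `save_best_only=True`). The bookkeeping can be sketched in pure Python; `checkpoint_log` is a hypothetical stand-in that only reproduces the messages, not the actual saving:

```python
# Minimal sketch of "save best only" checkpoint bookkeeping: track the
# best validation loss seen so far and report improvement per epoch.
# Hypothetical helper, not part of Keras or TMVA.
def checkpoint_log(val_losses):
    best = float("inf")
    messages = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            messages.append(
                f"Epoch {epoch}: val_loss improved from {best:.5f} to {vl:.5f}")
            best = vl  # this is the point where the model file would be saved
        else:
            messages.append(
                f"Epoch {epoch}: val_loss did not improve from {best:.5f}")
    return messages
```

Because only improving epochs trigger a save, `trained_model_higgs.h5` holds the epoch-17 weights (val_loss 0.58668) at the end of this run, not the epoch-20 weights.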
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_higgs.h5
PyKeras : [dataset] : Evaluation of PyKeras on training sample (14000 events)
: Elapsed time for evaluation of 14000 events: 0.262 sec
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_PyKeras.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
Likelihood : Ranking result (top variable is best ranked)
: -------------------------------------
: Rank : Variable : Delta Separation
: -------------------------------------
: 1 : m_bb : 3.688e-02
: 2 : m_wbb : 3.307e-02
: 3 : m_wwbb : 2.885e-02
: 4 : m_jjj : -1.155e-03
: 5 : m_jj : -1.436e-03
: 6 : m_lv : -5.963e-03
: 7 : m_jlv : -9.884e-03
: -------------------------------------
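The Likelihood ranking above is based on the separation between the signal and background PDFs of each variable, ⟨S²⟩ = ½ Σᵢ (sᵢ − bᵢ)²/(sᵢ + bᵢ) over normalized histogram bins ("Delta Separation" is the change when a variable is removed). A simplified sketch of the separation itself, assuming binned, unit-normalized distributions (TMVA's own PDF binning and smoothing differ in detail):

```python
import numpy as np

# Separation <S^2> = 1/2 * sum_i (s_i - b_i)^2 / (s_i + b_i) * bin_width,
# computed from unit-normalized histograms of signal and background.
# Ranges from ~0 (identical shapes) to ~1 (disjoint supports).
def separation(signal, background, bins=50):
    lo = min(signal.min(), background.min())
    hi = max(signal.max(), background.max())
    s, edges = np.histogram(signal, bins=bins, range=(lo, hi), density=True)
    b, _ = np.histogram(background, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    denom = s + b
    mask = denom > 0  # skip empty bins to avoid 0/0
    return 0.5 * np.sum((s[mask] - b[mask]) ** 2 / denom[mask]) * width
```

By this measure m_bb, m_wbb and m_wwbb carry most of the discriminating shape information, consistent with the ranking table.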
Fisher : Ranking result (top variable is best ranked)
: ---------------------------------
: Rank : Variable : Discr. power
: ---------------------------------
: 1 : m_bb : 1.279e-02
: 2 : m_wwbb : 9.131e-03
: 3 : m_wbb : 2.668e-03
: 4 : m_jlv : 9.145e-04
: 5 : m_jjj : 1.769e-04
: 6 : m_lv : 6.617e-05
: 7 : m_jj : 6.707e-06
: ---------------------------------
BDT : Ranking result (top variable is best ranked)
: ----------------------------------------
: Rank : Variable : Variable Importance
: ----------------------------------------
: 1 : m_bb : 2.089e-01
: 2 : m_wwbb : 1.673e-01
: 3 : m_wbb : 1.568e-01
: 4 : m_jlv : 1.560e-01
: 5 : m_jjj : 1.421e-01
: 6 : m_jj : 1.052e-01
: 7 : m_lv : 6.369e-02
: ----------------------------------------
: No variable ranking supplied by classifier: DNN_CPU
: No variable ranking supplied by classifier: PyKeras
TH1.Print Name = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 10.7664
TH1.Print Name = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 11.2496
TH1.Print Name = TrainingHistory_PyKeras_'accuracy', Entries= 0, Total sum= 13.3292
TH1.Print Name = TrainingHistory_PyKeras_'loss', Entries= 0, Total sum= 12.1024
TH1.Print Name = TrainingHistory_PyKeras_'val_accuracy', Entries= 0, Total sum= 13.2011
TH1.Print Name = TrainingHistory_PyKeras_'val_loss', Entries= 0, Total sum= 12.2277
Factory : === Destroy and recreate all methods via weight files for testing ===
:
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
Factory : Test all methods
Factory : Test method: Likelihood for Classification performance
:
Likelihood : [dataset] : Evaluation of Likelihood on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0113 sec
Factory : Test method: Fisher for Classification performance
:
Fisher : [dataset] : Evaluation of Fisher on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0033 sec
: Dataset[dataset] : Evaluation of Fisher on testing sample
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0459 sec
Factory : Test method: DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016356 0.99266 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
DNN_CPU : [dataset] : Evaluation of DNN_CPU on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.0985 sec
Factory : Test method: PyKeras for Classification performance
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_higgs.h5
PyKeras : [dataset] : Evaluation of PyKeras on testing sample (6000 events)
: Elapsed time for evaluation of 6000 events: 0.154 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: Likelihood
:
Likelihood : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_Likelihood : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: Fisher
:
Fisher : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Also filling probability and rarity histograms (on request)...
TFHandler_Fisher : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_BDT : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
Factory : Evaluate classifier: DNN_CPU
:
DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.0043655 0.99836 [ -3.2801 5.7307 ]
: m_jjj: 0.0044371 0.99827 [ -3.2805 5.7307 ]
: m_lv: 0.0054053 1.0003 [ -3.2810 5.7307 ]
: m_jlv: 0.0044637 0.99837 [ -3.2803 5.7307 ]
: m_bb: 0.0043676 0.99847 [ -3.2797 5.7307 ]
: m_wbb: 0.0042343 0.99744 [ -3.2803 5.7307 ]
: m_wwbb: 0.0046014 0.99948 [ -3.2802 5.7307 ]
: -----------------------------------------------------------
TFHandler_DNN_CPU : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 0.017919 1.0069 [ -3.3498 3.4247 ]
: m_jjj: 0.020352 1.0044 [ -3.2831 3.3699 ]
: m_lv: 0.016356 0.99266 [ -3.2339 3.3958 ]
: m_jlv: -0.018431 0.98242 [ -3.0632 5.7307 ]
: m_bb: 0.0069564 0.98851 [ -2.9734 3.3513 ]
: m_wbb: -0.010633 0.99340 [ -3.2442 3.2244 ]
: m_wwbb: -0.012669 0.99259 [ -3.1871 5.7307 ]
: -----------------------------------------------------------
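The TFHandler_DNN_CPU tables above show the inputs after the method's internal variable transformation: per-variable mean near 0 and RMS near 1, consistent with a standardization fitted on the training sample and reapplied to the test sample (hence the test-sample means are close to, but not exactly, zero). A sketch of that two-step pattern, under the assumption that the transform is a plain mean/RMS standardization:

```python
import numpy as np

# Fit standardization parameters on the training sample only...
def fit_standardizer(train):
    mean = train.mean(axis=0)
    rms = train.std(axis=0)
    return mean, rms

# ...then reapply the same parameters to any other sample, so the
# test-sample means/RMS deviate slightly from 0/1, as in the log above.
def apply_standardizer(x, mean, rms):
    return (x - mean) / rms
```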
Factory : Evaluate classifier: PyKeras
:
PyKeras : [dataset] : Loop over test events and fill histograms with classifier response...
:
TFHandler_PyKeras : Variable Mean RMS [ Min Max ]
: -----------------------------------------------------------
: m_jj: 1.0447 0.66216 [ 0.14661 10.222 ]
: m_jjj: 1.0275 0.37015 [ 0.34201 5.6016 ]
: m_lv: 1.0500 0.15582 [ 0.29757 2.8989 ]
: m_jlv: 1.0053 0.39478 [ 0.41660 5.8799 ]
: m_bb: 0.97464 0.52138 [ 0.10941 5.5163 ]
: m_wbb: 1.0296 0.35719 [ 0.38878 3.9747 ]
: m_wwbb: 0.95617 0.30368 [ 0.44118 4.0728 ]
: -----------------------------------------------------------
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset DNN_CPU : 0.755
: dataset BDT : 0.754
: dataset PyKeras : 0.752
: dataset Likelihood : 0.698
: dataset Fisher : 0.642
: -------------------------------------------------------------------------------------------------------------------
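The "ROC-integ" column ranks the classifiers by the area under the signal-efficiency versus background-rejection curve, which equals the probability that a randomly chosen signal event scores higher than a randomly chosen background event. A compact way to compute it from the classifier scores is via rank statistics (a sketch using the Mann-Whitney U relation, assuming continuous scores with no ties):

```python
import numpy as np

# ROC integral (AUC) from signal and background score arrays via the
# Mann-Whitney U statistic: P(signal score > background score).
def roc_integral(signal_scores, background_scores):
    scores = np.concatenate([signal_scores, background_scores])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    n_s, n_b = len(signal_scores), len(background_scores)
    sig_rank_sum = ranks[:n_s].sum()  # ranks of the signal entries
    return (sig_rank_sum - n_s * (n_s + 1) / 2) / (n_s * n_b)
```

A value of 0.5 corresponds to no separation and 1.0 to perfect separation, so the spread from 0.642 (Fisher) to 0.755 (DNN_CPU) reflects how much nonlinear structure each method exploits.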
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset DNN_CPU : 0.124 (0.128) 0.405 (0.419) 0.656 (0.688)
: dataset BDT : 0.098 (0.099) 0.393 (0.402) 0.657 (0.681)
: dataset PyKeras : 0.112 (0.098) 0.389 (0.398) 0.659 (0.670)
: dataset Likelihood : 0.085 (0.082) 0.355 (0.363) 0.580 (0.596)
: dataset Fisher : 0.015 (0.015) 0.121 (0.131) 0.487 (0.506)
: -------------------------------------------------------------------------------------------------------------------
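The overtraining-check columns quote the signal efficiency at working points defined by a fixed background efficiency B (1%, 10%, 30%): the cut on the classifier output is chosen so that exactly a fraction B of background survives, and the surviving signal fraction is reported for both test and training samples. A sketch of that calculation, assuming higher output means more signal-like:

```python
import numpy as np

# Signal efficiency at a fixed background efficiency b_eff: place the
# cut at the (1 - b_eff) quantile of the background scores, then count
# the fraction of signal passing that cut.
def signal_eff_at_background(signal_scores, background_scores, b_eff):
    cut = np.quantile(background_scores, 1.0 - b_eff)
    return float(np.mean(signal_scores > cut))
```

Close agreement between the test-sample and (bracketed) training-sample efficiencies, as seen for all five methods above, indicates little overtraining.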
:
Dataset:dataset : Created tree 'TestTree' with 6000 events
:
Dataset:dataset : Created tree 'TrainTree' with 14000 events
:
Factory : Thank you for using TMVA!
                         : For citation information, please visit: http://tmva.sf.net/citeTMVA.html