******************************************************************************
*Tree    :sig_tree  : tree                                                   *
*Entries :    10000 : Total =         1177229 bytes  File  Size =     785298 *
*        :          : Tree compression factor =   1.48                       *
******************************************************************************
*Br    0 :Type      : Type/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =        307 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression= 130.54     *
*............................................................................*
*Br    1 :lepton_pT : lepton_pT/F                                            *
*Entries :    10000 : Total  Size=      40581 bytes  File Size  =      30464 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.32     *
*............................................................................*
*Br    2 :lepton_eta : lepton_eta/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =      28650 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.40     *
*............................................................................*
*Br    3 :lepton_phi : lepton_phi/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =      30508 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br    4 :missing_energy_magnitude : missing_energy_magnitude/F              *
*Entries :    10000 : Total  Size=      40656 bytes  File Size  =      35749 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.12     *
*............................................................................*
*Br    5 :missing_energy_phi : missing_energy_phi/F                          *
*Entries :    10000 : Total  Size=      40626 bytes  File Size  =      36766 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.09     *
*............................................................................*
*Br    6 :jet1_pt   : jet1_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      32298 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.24     *
*............................................................................*
*Br    7 :jet1_eta  : jet1_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      28467 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.41     *
*............................................................................*
*Br    8 :jet1_phi  : jet1_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30399 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.32     *
*............................................................................*
*Br    9 :jet1_b-tag : jet1_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       5087 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   7.88     *
*............................................................................*
*Br   10 :jet2_pt   : jet2_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      31561 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.27     *
*............................................................................*
*Br   11 :jet2_eta  : jet2_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      28616 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.40     *
*............................................................................*
*Br   12 :jet2_phi  : jet2_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30547 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br   13 :jet2_b-tag : jet2_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       5031 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   7.97     *
*............................................................................*
*Br   14 :jet3_pt   : jet3_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      30642 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br   15 :jet3_eta  : jet3_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      28955 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.38     *
*............................................................................*
*Br   16 :jet3_phi  : jet3_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30433 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.32     *
*............................................................................*
*Br   17 :jet3_b-tag : jet3_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       4879 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   8.22     *
*............................................................................*
*Br   18 :jet4_pt   : jet4_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      29189 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.37     *
*............................................................................*
*Br   19 :jet4_eta  : jet4_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      29311 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.37     *
*............................................................................*
*Br   20 :jet4_phi  : jet4_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30525 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br   21 :jet4_b-tag : jet4_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       4725 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   8.48     *
*............................................................................*
*Br   22 :m_jj      : m_jj/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =      34991 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.15     *
*............................................................................*
*Br   23 :m_jjj     : m_jjj/F                                                *
*Entries :    10000 : Total  Size=      40561 bytes  File Size  =      34460 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
*Br   24 :m_lv      : m_lv/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =      32232 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.24     *
*............................................................................*
*Br   25 :m_jlv     : m_jlv/F                                                *
*Entries :    10000 : Total  Size=      40561 bytes  File Size  =      34598 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
*Br   26 :m_bb      : m_bb/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =      35012 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.14     *
*............................................................................*
*Br   27 :m_wbb     : m_wbb/F                                                *
*Entries :    10000 : Total  Size=      40561 bytes  File Size  =      34493 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
*Br   28 :m_wwbb    : m_wwbb/F                                               *
*Entries :    10000 : Total  Size=      40566 bytes  File Size  =      34410 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
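The per-branch "Compression" figures above are simply the ratio of uncompressed bytes to bytes on disk. A rough sanity check on the tree-level totals (illustrative only; the printed 1.48 excludes some file-header overhead, so the ratio of the raw totals comes out slightly higher):

```python
# Figures copied from the tree summary banner above.
total_bytes = 1_177_229  # "Total =" (uncompressed size of all branches)
file_bytes = 785_298     # "File Size =" (size on disk after compression)

factor = total_bytes / file_bytes
print(f"approximate tree compression factor: {factor:.2f}")
```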
DataSetInfo              : [dataset] : Added class "Signal"
                         : Add Tree sig_tree of type Signal with 10000 events
DataSetInfo              : [dataset] : Added class "Background"
                         : Add Tree bkg_tree of type Background with 10000 events
Factory                  : Booking method: Likelihood
                         : 
Factory                  : Booking method: Fisher
                         : 
Factory                  : Booking method: BDT
                         : 
                         : Rebuilding Dataset dataset
                         : Building event vectors for type 2 Signal
                         : Dataset[dataset] :  create input formulas for tree sig_tree
                         : Building event vectors for type 2 Background
                         : Dataset[dataset] :  create input formulas for tree bkg_tree
DataSetFactory           : [dataset] : Number of events in input trees
                         : 
                         : 
                         : Number of training and testing events
                         : ---------------------------------------------------------------------------
                         : Signal     -- training events            : 7000
                         : Signal     -- testing events             : 3000
                         : Signal     -- training and testing events: 10000
                         : Background -- training events            : 7000
                         : Background -- testing events             : 3000
                         : Background -- training and testing events: 10000
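The 7000/3000 numbers per class above come from TMVA's random train/test partition of each input tree. A minimal sketch of such a split (hypothetical shuffling, not TMVA's exact random-number sequence):

```python
import random

random.seed(0)                    # fixed seed for reproducibility
events = list(range(10_000))      # one index per event in a class
random.shuffle(events)
train, test = events[:7_000], events[7_000:]
print(len(train), len(test))
```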
                         : 
DataSetInfo              : Correlation matrix (Signal):
                         : ----------------------------------------------------------------
                         :             m_jj   m_jjj    m_lv   m_jlv    m_bb   m_wbb  m_wwbb
                         :    m_jj:  +1.000  +0.774  -0.004  +0.096  +0.024  +0.512  +0.533
                         :   m_jjj:  +0.774  +1.000  -0.010  +0.073  +0.152  +0.674  +0.668
                         :    m_lv:  -0.004  -0.010  +1.000  +0.121  -0.027  +0.009  +0.021
                         :   m_jlv:  +0.096  +0.073  +0.121  +1.000  +0.313  +0.544  +0.552
                         :    m_bb:  +0.024  +0.152  -0.027  +0.313  +1.000  +0.445  +0.333
                         :   m_wbb:  +0.512  +0.674  +0.009  +0.544  +0.445  +1.000  +0.915
                         :  m_wwbb:  +0.533  +0.668  +0.021  +0.552  +0.333  +0.915  +1.000
                         : ----------------------------------------------------------------
DataSetInfo              : Correlation matrix (Background):
                         : ----------------------------------------------------------------
                         :             m_jj   m_jjj    m_lv   m_jlv    m_bb   m_wbb  m_wwbb
                         :    m_jj:  +1.000  +0.808  +0.022  +0.150  +0.028  +0.407  +0.415
                         :   m_jjj:  +0.808  +1.000  +0.041  +0.206  +0.177  +0.569  +0.547
                         :    m_lv:  +0.022  +0.041  +1.000  +0.139  +0.037  +0.081  +0.085
                         :   m_jlv:  +0.150  +0.206  +0.139  +1.000  +0.309  +0.607  +0.557
                         :    m_bb:  +0.028  +0.177  +0.037  +0.309  +1.000  +0.625  +0.447
                         :   m_wbb:  +0.407  +0.569  +0.081  +0.607  +0.625  +1.000  +0.884
                         :  m_wwbb:  +0.415  +0.547  +0.085  +0.557  +0.447  +0.884  +1.000
                         : ----------------------------------------------------------------
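Each entry in the matrices above is a Pearson correlation coefficient between two input variables, computed over the events of that class. A self-contained sketch of the computation (pure Python, for illustration; TMVA computes these internally):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1.0, 2.0, 3.0, 4.0]
print(round(pearson(x, x), 3))                 # a variable with itself: +1.000
print(round(pearson(x, [-v for v in x]), 3))   # fully anti-correlated: -1.000
```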
DataSetFactory           : [dataset] :  
                         : 
Factory                  : Booking method: DNN_CPU
                         : 
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     <none>
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "G" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     InputLayout: "1|1|7" [The Layout of the input]
                         :     BatchLayout: "1|128|7" [The Layout of the batch]
                         :     Layout: "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIER" [Weight initialization strategy]
                         :     Architecture: "CPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=30,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
                         :     ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
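The Layout string parsed above uses TMVA's '|'- and ','-separated option grammar: each comma-separated entry is TYPE|WIDTH|ACTIVATION. Decoding it is straightforward (a sketch, not TMVA's actual parser):

```python
layout = ("DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,"
          "DENSE|64|TANH,DENSE|1|LINEAR")

layers = []
for spec in layout.split(","):
    kind, width, activation = spec.split("|")
    layers.append((kind, int(width), activation))

print(layers)  # five dense layers: four hidden TANH layers and a linear output
```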
DNN_CPU                  : [dataset] : Create Transformation "G" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'm_jj' <---> Output : variable 'm_jj'
                         : Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
                         : Input : variable 'm_lv' <---> Output : variable 'm_lv'
                         : Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
                         : Input : variable 'm_bb' <---> Output : variable 'm_bb'
                         : Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
                         : Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
                         : Will now use the CPU architecture with BLAS and IMT support!
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 64)                512       
                                                                 
 dense_1 (Dense)             (None, 64)                4160      
                                                                 
 dense_2 (Dense)             (None, 64)                4160      
                                                                 
 dense_3 (Dense)             (None, 64)                4160      
                                                                 
 dense_4 (Dense)             (None, 2)                 130       
                                                                 
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
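The parameter counts in the Keras summary follow the usual dense-layer formula, (inputs + 1) × units, where the +1 accounts for the bias vector:

```python
def dense_params(n_in, n_out):
    # weight matrix (n_in * n_out) plus one bias per output unit
    return (n_in + 1) * n_out

# (input, output) sizes of the five Dense layers in the summary above
shapes = [(7, 64), (64, 64), (64, 64), (64, 64), (64, 2)]
counts = [dense_params(i, o) for i, o in shapes]
print(counts, sum(counts))  # [512, 4160, 4160, 4160, 130] 13122
```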
Factory                  : Booking method: PyKeras
                         : 
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Applying GPU option:  gpu_options.allow_growth=True
                         :  Loading Keras Model 
                         : Loaded model from file: Higgs_model.h5
Factory                  : Train all methods
Factory                  : [dataset] : Create Transformation "I" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'm_jj' <---> Output : variable 'm_jj'
                         : Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
                         : Input : variable 'm_lv' <---> Output : variable 'm_lv'
                         : Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
                         : Input : variable 'm_bb' <---> Output : variable 'm_bb'
                         : Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
                         : Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
TFHandler_Factory        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0318    0.65629   [    0.15106     16.132 ]
                         :    m_jjj:     1.0217    0.37420   [    0.34247     8.9401 ]
                         :     m_lv:     1.0507    0.16678   [    0.26679     3.6823 ]
                         :    m_jlv:     1.0161    0.40288   [    0.38441     6.5831 ]
                         :     m_bb:    0.97707    0.53961   [   0.080986     8.2551 ]
                         :    m_wbb:     1.0358    0.36856   [    0.38503     6.4013 ]
                         :   m_wwbb:    0.96265    0.31608   [    0.43228     4.5350 ]
                         : -----------------------------------------------------------
                         : Ranking input variables (method unspecific)...
IdTransformation         : Ranking result (top variable is best ranked)
                         : -------------------------------
                         : Rank : Variable  : Separation
                         : -------------------------------
                         :    1 : m_bb      : 9.511e-02
                         :    2 : m_wbb     : 4.268e-02
                         :    3 : m_wwbb    : 4.178e-02
                         :    4 : m_jjj     : 2.825e-02
                         :    5 : m_jlv     : 1.999e-02
                         :    6 : m_jj      : 3.834e-03
                         :    7 : m_lv      : 3.699e-03
                         : -------------------------------
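The "Separation" column above is TMVA's integral ⟨S²⟩ = ½ ∫ (ŝ − b̂)² / (ŝ + b̂) dx over the normalised signal and background distributions of each variable: 0 for identical shapes, 1 for fully disjoint ones. A binned sketch of that definition:

```python
def separation(sig, bkg):
    """Binned separation of two histograms (lists of bin contents)."""
    s_tot, b_tot = sum(sig), sum(bkg)
    sep = 0.0
    for s, b in zip(sig, bkg):
        s, b = s / s_tot, b / b_tot      # normalise to unit area
        if s + b > 0:
            sep += 0.5 * (s - b) ** 2 / (s + b)
    return sep

print(separation([1, 2, 3], [1, 2, 3]))  # identical shapes -> 0.0
print(separation([1, 0], [0, 1]))        # disjoint shapes  -> 1.0
```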
Factory                  : Train method: Likelihood for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ Likelihood ] :
                         : 
                         : --- Short description:
                         : 
                         : The maximum-likelihood classifier models the data with probability 
                         : density functions (PDF) reproducing the signal and background
                         : distributions of the input variables. Correlations among the 
                         : variables are ignored.
                         : 
                         : --- Performance optimisation:
                         : 
                         : Good performance requires decorrelated input variables
                         : (a linear decorrelation via the option "VarTransform=Decorrelate",
                         : or a PCA transformation via "VarTransform=PCA", may be tried).
                         : Irreducible non-linear correlations may be reduced by combining
                         : strongly correlated input variables, or by simply removing one
                         : of them.
                         : 
                         : --- Performance tuning via configuration options:
                         : 
                         : High-fidelity PDF estimates are mandatory, i.e., sufficient training
                         : statistics are required to populate the tails of the distributions.
                         : It would be a surprise if the default spline or KDE kernel parameters
                         : provided a satisfying fit to the data. The user is advised to tune
                         : the events-per-bin and smoothing options for the spline case
                         : individually for each variable. If the KDE kernel is used, the adaptive
                         : Gaussian kernel may lead to artefacts, so please always also try
                         : the non-adaptive one.
                         : 
                         : All tuning parameters must be adjusted individually for each input
                         : variable!
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
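The projective likelihood described in the help box multiplies per-variable PDF values (ignoring correlations) and forms the ratio L_S / (L_S + L_B). A minimal sketch with hypothetical toy PDFs:

```python
def likelihood_ratio(sig_pdfs, bkg_pdfs, event):
    """Projective likelihood: one PDF per variable, correlations ignored."""
    l_sig = l_bkg = 1.0
    for p_s, p_b, x in zip(sig_pdfs, bkg_pdfs, event):
        l_sig *= p_s(x)
        l_bkg *= p_b(x)
    return l_sig / (l_sig + l_bkg)

# Toy one-dimensional PDFs on [0, 1]: signal rises with x, background falls
# (hypothetical shapes, purely for illustration).
sig = [lambda x: x, lambda x: x]
bkg = [lambda x: 1 - x, lambda x: 1 - x]
print(round(likelihood_ratio(sig, bkg, [0.5, 0.5]), 3))  # ambiguous -> 0.5
print(likelihood_ratio(sig, bkg, [0.9, 0.9]) > 0.9)      # clearly signal-like
```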
                         : 
                         : Filling reference histograms
                         : Building PDF out of reference histograms
                         : Elapsed time for training with 14000 events: 0.119 sec         
Likelihood               : [dataset] : Evaluation of Likelihood on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.0214 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Likelihood.class.C
                         : Higgs_ClassificationOutput.root:/dataset/Method_Likelihood/Likelihood
Factory                  : Training finished
                         : 
Factory                  : Train method: Fisher for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ Fisher ] :
                         : 
                         : --- Short description:
                         : 
                         : Fisher discriminants select events by distinguishing the mean
                         : values of the signal and background distributions in a
                         : transformed variable space where linear correlations are removed.
                         : 
                         :    (More precisely: the "linear discriminator" determines
                         :     an axis in the (correlated) hyperspace of the input 
                         :     variables such that, when projecting the output classes 
                         :     (signal and background) upon this axis, they are pushed 
                         :     as far as possible away from each other, while events
                         :     of the same class are confined to a close vicinity. The
                         :     linearity property of this classifier is reflected in the 
                         :     metric with which "far apart" and "close vicinity" are 
                         :     determined: the covariance matrix of the discriminating
                         :     variable space.)
                         : 
                         : --- Performance optimisation:
                         : 
                         : Optimal performance for Fisher discriminants is obtained for 
                         : linearly correlated Gaussian-distributed variables. Any deviation
                         : from this ideal reduces the achievable separation power. In 
                         : particular, no discrimination at all is achieved for a variable
                         : that has the same sample mean for signal and background, even if 
                         : the shapes of the distributions are very different. Thus, Fisher 
                         : discriminants often benefit from suitable transformations of the 
                         : input variables. For example, if a variable x in [-1,1] has a
                         : parabolic signal distribution and a uniform background
                         : distribution, the mean value is zero in both cases, leading
                         : to no separation. The simple transformation x -> |x| renders this
                         : variable powerful for use in a Fisher discriminant.
                         : 
                         : --- Performance tuning via configuration options:
                         : 
                         : <None>
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
Fisher                   : Results for Fisher coefficients:
                         : -----------------------
                         : Variable:  Coefficient:
                         : -----------------------
                         :     m_jj:       -0.051
                         :    m_jjj:       +0.192
                         :     m_lv:       +0.045
                         :    m_jlv:       +0.059
                         :     m_bb:       -0.211
                         :    m_wbb:       +0.549
                         :   m_wwbb:       -0.778
                         : (offset):       +0.136
                         : -----------------------
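The Fisher response is just the linear combination defined by the coefficient table above: offset plus the weighted sum of the inputs. A sketch applying those printed coefficients to a hypothetical event:

```python
# Coefficients copied from the Fisher table above.
coefficients = {
    "m_jj": -0.051, "m_jjj": +0.192, "m_lv": +0.045, "m_jlv": +0.059,
    "m_bb": -0.211, "m_wbb": +0.549, "m_wwbb": -0.778,
}
OFFSET = 0.136

def fisher_response(event):
    """Linear Fisher discriminant: offset + sum of coefficient * value."""
    return OFFSET + sum(coefficients[n] * v for n, v in event.items())

# Hypothetical event with every input at roughly its sample mean of ~1.0:
event = {name: 1.0 for name in coefficients}
print(round(fisher_response(event), 3))
```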
                         : Elapsed time for training with 14000 events: 0.0126 sec         
Fisher                   : [dataset] : Evaluation of Fisher on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.0054 sec       
                         : <CreateMVAPdfs> Separation from histogram (PDF): 0.090 (0.000)
                         : Dataset[dataset] : Evaluation of Fisher on training sample
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Fisher.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: BDT for Classification
                         : 
BDT                      : #events: (reweighted) sig: 7000 bkg: 7000
                         : #events: (unweighted) sig: 7000 bkg: 7000
                         : Training 200 Decision Trees ... patience please
                         : Elapsed time for training with 14000 events: 0.703 sec         
BDT                      : [dataset] : Evaluation of BDT on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.111 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_BDT.class.C
                         : Higgs_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory                  : Training finished
                         : 
Factory                  : Train method: DNN_CPU for Classification
                         : 
                         : Preparing the Gaussian transformation...
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:  0.0043655    0.99836   [    -3.2801     5.7307 ]
                         :    m_jjj:  0.0044371    0.99827   [    -3.2805     5.7307 ]
                         :     m_lv:  0.0053380     1.0003   [    -3.2810     5.7307 ]
                         :    m_jlv:  0.0044637    0.99837   [    -3.2803     5.7307 ]
                         :     m_bb:  0.0043676    0.99847   [    -3.2797     5.7307 ]
                         :    m_wbb:  0.0042343    0.99744   [    -3.2803     5.7307 ]
                         :   m_wwbb:  0.0046014    0.99948   [    -3.2802     5.7307 ]
                         : -----------------------------------------------------------
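The table above shows the effect of the "G" (Gaussianisation) transform: every variable now has mean ≈ 0 and RMS ≈ 1. A common recipe for such a transform is rank-to-uniform followed by the inverse normal CDF; TMVA's implementation uses fitted cumulative distributions, but the idea is the same (sketch):

```python
from statistics import NormalDist

def gaussianise(values):
    """Map a sample to an approximately standard-normal one via its ranks."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        u = (rank + 0.5) / n             # midpoint rank -> (0, 1)
        out[i] = NormalDist().inv_cdf(u)  # uniform -> standard normal
    return out

skewed = [x ** 4 for x in range(1, 101)]  # strongly skewed toy sample
g = gaussianise(skewed)
print(round(sum(g) / len(g), 6))          # mean ~ 0 after the transform
```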
                         : Start of deep neural network training on CPU using MT, nthreads = 1
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:  0.0043655    0.99836   [    -3.2801     5.7307 ]
                         :    m_jjj:  0.0044371    0.99827   [    -3.2805     5.7307 ]
                         :     m_lv:  0.0053380     1.0003   [    -3.2810     5.7307 ]
                         :    m_jlv:  0.0044637    0.99837   [    -3.2803     5.7307 ]
                         :     m_bb:  0.0043676    0.99847   [    -3.2797     5.7307 ]
                         :    m_wbb:  0.0042343    0.99744   [    -3.2803     5.7307 ]
                         :   m_wwbb:  0.0046014    0.99948   [    -3.2802     5.7307 ]
                         : -----------------------------------------------------------
                         : *****   Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 5  Input = ( 1, 1, 7 )  Batch size = 128  Loss function = C
   Layer 0   DENSE Layer:   ( Input =     7 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 1   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 2   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 3   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 4   DENSE Layer:   ( Input =    64 , Width =     1 )  Output = (  1 ,   128 ,     1 )   Activation Function = Identity
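In the listing above, each DENSE layer's output shape (1, 128, 64) reads as (depth, batch size, width), and a dense layer holds n_in x n_out weights plus one bias per output unit. The total parameter count implied by the layout (not printed in the log) can be checked directly:

```python
def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

widths = [7, 64, 64, 64, 64, 1]  # input width followed by the five layer widths
total = sum(dense_params(a, b) for a, b in zip(widths, widths[1:]))
print(total)  # 13057 trainable parameters
```

This is slightly smaller than the Keras model summarized later in the log (13122 parameters), since this network ends in a single Identity output node rather than one node per class.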
                         : Using 11200 events for training and 2800 for testing
                         : Compute initial loss  on the validation data 
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 1.04561
                         : --------------------------------------------------------------
                         :      Epoch |   Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s Conv. Steps
                         : --------------------------------------------------------------
                         :    Start epoch iteration ...
                         :          1 Minimum Test error found - save the configuration 
                         :          1 |     0.659333    0.625674    0.595767   0.0477647     20321.1           0
                         :          2 Minimum Test error found - save the configuration 
                         :          2 |     0.602929    0.602164    0.596301   0.0485176     20329.2           0
                         :          3 Minimum Test error found - save the configuration 
                         :          3 |     0.583361    0.591937    0.595931   0.0474207     20302.3           0
                         :          4 Minimum Test error found - save the configuration 
                         :          4 |     0.575233    0.586215    0.594703   0.0477769     20361.1           0
                         :          5 |     0.571843    0.587266    0.593576   0.0473492     20387.1           1
                         :          6 Minimum Test error found - save the configuration 
                         :          6 |     0.566105    0.584829    0.594941   0.0475681     20344.4           0
                         :          7 |     0.563724    0.586805    0.593431   0.0471669     20385.8           1
                         :          8 |       0.5596    0.593499    0.591023   0.0471405       20475           2
                         :          9 Minimum Test error found - save the configuration 
                         :          9 |     0.558547    0.584027    0.595089   0.0473919     20332.4           0
                         :         10 |     0.556198    0.592027    0.592572   0.0472457     20420.8           1
                         :         11 Minimum Test error found - save the configuration 
                         :         11 |     0.551908    0.582541    0.592416   0.0478308     20448.6           0
                         :         12 |     0.555592    0.587298    0.593741   0.0472727     20378.1           1
                         :         13 Minimum Test error found - save the configuration 
                         :         13 |     0.551249     0.57933    0.594601   0.0482756     20383.4           0
                         :         14 |     0.549339    0.583255    0.593272   0.0473175     20397.3           1
                         :         15 |     0.550046    0.581263    0.592875    0.047445     20416.9           2
                         :         16 |     0.548874     0.58499    0.593057   0.0473723     20407.4           3
                         :         17 |     0.546029    0.581396    0.595046    0.047435     20335.6           4
                         :         18 |     0.542724    0.581719    0.596096   0.0478005     20310.2           5
                         :         19 |     0.540695    0.579932     0.60052   0.0474215     20133.9           6
                         :         20 Minimum Test error found - save the configuration 
                         :         20 |     0.542326    0.578537    0.594627   0.0476508     20359.2           0
                         :         21 |       0.5374    0.584288    0.594087   0.0476617     20379.7           1
                         :         22 |     0.534263    0.589216    0.596763   0.0476434     20279.7           2
                         :         23 |     0.535294    0.591833    0.597797   0.0475517     20238.3           3
                         :         24 |     0.533185    0.582849    0.597373   0.0481546     20276.1           4
                         :         25 |     0.531513    0.584453    0.598683   0.0486661     20246.7           5
                         :         26 |     0.530377    0.589272    0.596037     0.04754     20302.8           6
                         :         27 |      0.52962    0.589394    0.595379   0.0475463     20327.4           7
                         :         28 |     0.528577    0.582856    0.596153   0.0475792     20299.9           8
                         :         29 |     0.529229    0.579171    0.596725   0.0476919     20282.9           9
                         :         30 |     0.523899    0.588075    0.596709   0.0476557     20282.2          10
                         : 
                         : Elapsed time for training with 14000 events: 18 sec         
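The "Conv. Steps" column in the epoch table counts epochs since the validation error last improved; training stops once the counter reaches the configured convergence threshold (10 here, which is why the run halts at epoch 30). A sketch reproducing that counter from the logged validation errors (values copied from the table above; the initial best is taken as infinity for simplicity):

```python
def convergence_steps(val_errors, patience):
    """Yield (epoch, epochs-since-best); stop after `patience` flat epochs."""
    best = float("inf")
    steps = 0
    for epoch, err in enumerate(val_errors, start=1):
        if err < best:
            best, steps = err, 0   # "Minimum Test error found"
        else:
            steps += 1
        yield epoch, steps
        if steps >= patience:
            break

val = [0.625674, 0.602164, 0.591937, 0.586215, 0.587266, 0.584829,
       0.586805, 0.593499, 0.584027, 0.592027, 0.582541, 0.587298,
       0.57933, 0.583255, 0.581263, 0.58499, 0.581396, 0.581719,
       0.579932, 0.578537, 0.584288, 0.589216, 0.591833, 0.582849,
       0.584453, 0.589272, 0.589394, 0.582856, 0.579171, 0.588075]
history = list(convergence_steps(val, patience=10))
print(history[-1])  # (30, 10): training halts at epoch 30
```

Note that the saved configuration is the one from epoch 20 (validation error 0.578537), not the final epoch: the checkpoint is written whenever a new minimum is found.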
                         : Evaluate deep neural network on CPU using batches with size = 128
                         : 
DNN_CPU                  : [dataset] : Evaluation of DNN_CPU on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.252 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: PyKeras for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ PyKeras ] :
                         : 
                         : Keras is a high-level API for the Theano and TensorFlow packages.
                         : This method wraps the training and prediction steps of the Keras
                         : Python package for TMVA, so that data loading, preprocessing and
                         : evaluation can be done within the TMVA system. To use this Keras
                         : interface, you have to generate a model with Keras first. Then,
                         : this model can be loaded and trained in TMVA.
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
                         : Split TMVA training data in 11200 training events and 2800 validation events
                         : Training Model Summary
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 64)                512       
                                                                 
 dense_1 (Dense)             (None, 64)                4160      
                                                                 
 dense_2 (Dense)             (None, 64)                4160      
                                                                 
 dense_3 (Dense)             (None, 64)                4160      
                                                                 
 dense_4 (Dense)             (None, 2)                 130       
                                                                 
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
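The "Param #" column of the summary follows the same dense-layer rule (inputs x units weights, plus one bias per unit); the last layer has two units, one per class (signal and background). Checking the printed totals:

```python
def dense_params(n_inputs, n_units):
    # kernel weights plus one bias per unit
    return n_inputs * n_units + n_units

layers = [(7, 64), (64, 64), (64, 64), (64, 64), (64, 2)]
counts = [dense_params(i, u) for i, u in layers]
print(counts)       # [512, 4160, 4160, 4160, 130]
print(sum(counts))  # 13122, matching "Total params" above
```

All 13122 parameters are trainable here, which is why the "Trainable params" and "Total params" lines agree.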
                         : Option SaveBestOnly: Only model weights with smallest validation loss will be stored
Epoch 1/20
Epoch 1: val_loss improved from inf to 0.65112, saving model to Higgs_trained_model.h5
112/112 [==============================] - 1s 5ms/step - loss: 0.6698 - accuracy: 0.5821 - val_loss: 0.6511 - val_accuracy: 0.6157
Epoch 2/20
Epoch 2: val_loss improved from 0.65112 to 0.63933, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6403 - accuracy: 0.6292 - val_loss: 0.6393 - val_accuracy: 0.6389
Epoch 3/20
Epoch 3: val_loss improved from 0.63933 to 0.63914, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6330 - accuracy: 0.6408 - val_loss: 0.6391 - val_accuracy: 0.6311
Epoch 4/20
Epoch 4: val_loss improved from 0.63914 to 0.61920, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6216 - accuracy: 0.6521 - val_loss: 0.6192 - val_accuracy: 0.6496
Epoch 5/20
Epoch 5: val_loss improved from 0.61920 to 0.61619, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6147 - accuracy: 0.6606 - val_loss: 0.6162 - val_accuracy: 0.6529
Epoch 6/20
Epoch 6: val_loss did not improve from 0.61619
112/112 [==============================] - 0s 2ms/step - loss: 0.6127 - accuracy: 0.6625 - val_loss: 0.6197 - val_accuracy: 0.6589
Epoch 7/20
Epoch 7: val_loss did not improve from 0.61619
112/112 [==============================] - 0s 2ms/step - loss: 0.6086 - accuracy: 0.6688 - val_loss: 0.6287 - val_accuracy: 0.6421
Epoch 8/20
Epoch 8: val_loss improved from 0.61619 to 0.60732, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6025 - accuracy: 0.6686 - val_loss: 0.6073 - val_accuracy: 0.6611
Epoch 9/20
Epoch 9: val_loss did not improve from 0.60732
112/112 [==============================] - 0s 2ms/step - loss: 0.6025 - accuracy: 0.6745 - val_loss: 0.6109 - val_accuracy: 0.6618
Epoch 10/20
Epoch 10: val_loss improved from 0.60732 to 0.60203, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5998 - accuracy: 0.6712 - val_loss: 0.6020 - val_accuracy: 0.6621
Epoch 11/20
Epoch 11: val_loss improved from 0.60203 to 0.60110, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5940 - accuracy: 0.6804 - val_loss: 0.6011 - val_accuracy: 0.6675
Epoch 12/20
Epoch 12: val_loss did not improve from 0.60110
112/112 [==============================] - 0s 2ms/step - loss: 0.5946 - accuracy: 0.6785 - val_loss: 0.6022 - val_accuracy: 0.6664
Epoch 13/20
Epoch 13: val_loss improved from 0.60110 to 0.60008, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5937 - accuracy: 0.6832 - val_loss: 0.6001 - val_accuracy: 0.6661
Epoch 14/20
Epoch 14: val_loss improved from 0.60008 to 0.59702, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5899 - accuracy: 0.6821 - val_loss: 0.5970 - val_accuracy: 0.6782
Epoch 15/20
Epoch 15: val_loss did not improve from 0.59702
112/112 [==============================] - 0s 2ms/step - loss: 0.5927 - accuracy: 0.6799 - val_loss: 0.6029 - val_accuracy: 0.6689
Epoch 16/20
Epoch 16: val_loss improved from 0.59702 to 0.59388, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5883 - accuracy: 0.6829 - val_loss: 0.5939 - val_accuracy: 0.6668
Epoch 17/20
Epoch 17: val_loss did not improve from 0.59388
112/112 [==============================] - 0s 2ms/step - loss: 0.5873 - accuracy: 0.6839 - val_loss: 0.5995 - val_accuracy: 0.6632
Epoch 18/20
Epoch 18: val_loss improved from 0.59388 to 0.58652, saving model to Higgs_trained_model.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5840 - accuracy: 0.6843 - val_loss: 0.5865 - val_accuracy: 0.6789
Epoch 19/20
Epoch 19: val_loss did not improve from 0.58652
112/112 [==============================] - 0s 2ms/step - loss: 0.5816 - accuracy: 0.6857 - val_loss: 0.5899 - val_accuracy: 0.6725
Epoch 20/20
Epoch 20: val_loss did not improve from 0.58652
112/112 [==============================] - 0s 2ms/step - loss: 0.5799 - accuracy: 0.6886 - val_loss: 0.5919 - val_accuracy: 0.6711
                         : Getting training history for item:0 name = 'loss'
                         : Getting training history for item:1 name = 'accuracy'
                         : Getting training history for item:2 name = 'val_loss'
                         : Getting training history for item:3 name = 'val_accuracy'
                         : Elapsed time for training with 14000 events: 6.57 sec         
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Applying GPU option:  gpu_options.allow_growth=True
                         : Disabled TF eager execution when evaluating model 
                         :  Loading Keras Model 
                         : Loaded model from file: Higgs_trained_model.h5
PyKeras                  : [dataset] : Evaluation of PyKeras on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.265 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_PyKeras.class.C
Factory                  : Training finished
                         : 
                         : Ranking input variables (method specific)...
Likelihood               : Ranking result (top variable is best ranked)
                         : -------------------------------------
                         : Rank : Variable  : Delta Separation
                         : -------------------------------------
                         :    1 : m_bb      : 4.061e-02
                         :    2 : m_wbb     : 3.765e-02
                         :    3 : m_wwbb    : 3.119e-02
                         :    4 : m_jj      : -1.589e-03
                         :    5 : m_jjj     : -2.901e-03
                         :    6 : m_lv      : -7.919e-03
                         :    7 : m_jlv     : -8.293e-03
                         : -------------------------------------
Fisher                   : Ranking result (top variable is best ranked)
                         : ---------------------------------
                         : Rank : Variable  : Discr. power
                         : ---------------------------------
                         :    1 : m_bb      : 1.279e-02
                         :    2 : m_wwbb    : 9.131e-03
                         :    3 : m_wbb     : 2.668e-03
                         :    4 : m_jlv     : 9.145e-04
                         :    5 : m_jjj     : 1.769e-04
                         :    6 : m_lv      : 6.617e-05
                         :    7 : m_jj      : 6.707e-06
                         : ---------------------------------
BDT                      : Ranking result (top variable is best ranked)
                         : ----------------------------------------
                         : Rank : Variable  : Variable Importance
                         : ----------------------------------------
                         :    1 : m_bb      : 2.089e-01
                         :    2 : m_wwbb    : 1.673e-01
                         :    3 : m_wbb     : 1.568e-01
                         :    4 : m_jlv     : 1.560e-01
                         :    5 : m_jjj     : 1.421e-01
                         :    6 : m_jj      : 1.052e-01
                         :    7 : m_lv      : 6.369e-02
                         : ----------------------------------------
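The BDT's variable importances are relative weights, normalized so they sum to one (they reflect how often and how effectively each variable is chosen for tree splits), so the table above can be sanity-checked directly:

```python
# Importance values copied from the BDT ranking table
importance = {
    "m_bb": 0.2089, "m_wwbb": 0.1673, "m_wbb": 0.1568,
    "m_jlv": 0.1560, "m_jjj": 0.1421, "m_jj": 0.1052,
    "m_lv": 0.06369,
}
total = sum(importance.values())
print(round(total, 3))  # 1.0: importances are fractions of the whole
```

Note that all three rankings (Likelihood separation, Fisher discriminating power, BDT importance) agree that m_bb is the strongest single variable.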
                         : No variable ranking supplied by classifier: DNN_CPU
                         : No variable ranking supplied by classifier: PyKeras
TH1.Print Name  = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 16.589
TH1.Print Name  = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 17.6161
TH1.Print Name  = TrainingHistory_PyKeras_'accuracy', Entries= 0, Total sum= 13.34
TH1.Print Name  = TrainingHistory_PyKeras_'loss', Entries= 0, Total sum= 12.0916
TH1.Print Name  = TrainingHistory_PyKeras_'val_accuracy', Entries= 0, Total sum= 13.1739
TH1.Print Name  = TrainingHistory_PyKeras_'val_loss', Entries= 0, Total sum= 12.1987
Factory                  : === Destroy and recreate all methods via weight files for testing ===
                         : 
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml

Factory                  : Test all methods
Factory                  : Test method: Likelihood for Classification performance
                         : 
Likelihood               : [dataset] : Evaluation of Likelihood on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.0108 sec       
Factory                  : Test method: Fisher for Classification performance
                         : 
Fisher                   : [dataset] : Evaluation of Fisher on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.003 sec       
                         : Dataset[dataset] : Evaluation of Fisher on testing sample
Factory                  : Test method: BDT for Classification performance
                         : 
BDT                      : [dataset] : Evaluation of BDT on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.045 sec       
Factory                  : Test method: DNN_CPU for Classification performance
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:   0.017919     1.0069   [    -3.3498     3.4247 ]
                         :    m_jjj:   0.020352     1.0044   [    -3.2831     3.3699 ]
                         :     m_lv:   0.016289    0.99263   [    -3.2339     3.3958 ]
                         :    m_jlv:  -0.018431    0.98242   [    -3.0632     5.7307 ]
                         :     m_bb:  0.0069564    0.98851   [    -2.9734     3.3513 ]
                         :    m_wbb:  -0.010633    0.99340   [    -3.2442     3.2244 ]
                         :   m_wwbb:  -0.012669    0.99259   [    -3.1871     5.7307 ]
                         : -----------------------------------------------------------
DNN_CPU                  : [dataset] : Evaluation of DNN_CPU on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.1 sec       
Factory                  : Test method: PyKeras for Classification performance
                         : 
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Applying GPU option:  gpu_options.allow_growth=True
                         : Disabled TF eager execution when evaluating model 
                         :  Loading Keras Model 
                         : Loaded model from file: Higgs_trained_model.h5
PyKeras                  : [dataset] : Evaluation of PyKeras on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.174 sec       
Factory                  : Evaluate all methods
Factory                  : Evaluate classifier: Likelihood
                         : 
Likelihood               : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_Likelihood     : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: Fisher
                         : 
Fisher                   : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Also filling probability and rarity histograms (on request)...
TFHandler_Fisher         : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: BDT
                         : 
BDT                      : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_BDT            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: DNN_CPU
                         : 
DNN_CPU                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:  0.0043655    0.99836   [    -3.2801     5.7307 ]
                         :    m_jjj:  0.0044371    0.99827   [    -3.2805     5.7307 ]
                         :     m_lv:  0.0053380     1.0003   [    -3.2810     5.7307 ]
                         :    m_jlv:  0.0044637    0.99837   [    -3.2803     5.7307 ]
                         :     m_bb:  0.0043676    0.99847   [    -3.2797     5.7307 ]
                         :    m_wbb:  0.0042343    0.99744   [    -3.2803     5.7307 ]
                         :   m_wwbb:  0.0046014    0.99948   [    -3.2802     5.7307 ]
                         : -----------------------------------------------------------
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:   0.017919     1.0069   [    -3.3498     3.4247 ]
                         :    m_jjj:   0.020352     1.0044   [    -3.2831     3.3699 ]
                         :     m_lv:   0.016289    0.99263   [    -3.2339     3.3958 ]
                         :    m_jlv:  -0.018431    0.98242   [    -3.0632     5.7307 ]
                         :     m_bb:  0.0069564    0.98851   [    -2.9734     3.3513 ]
                         :    m_wbb:  -0.010633    0.99340   [    -3.2442     3.2244 ]
                         :   m_wwbb:  -0.012669    0.99259   [    -3.1871     5.7307 ]
                         : -----------------------------------------------------------
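The TFHandler_DNN_CPU tables above show the DNN inputs after TMVA's input-variable transformation: each variable has mean ≈ 0 and RMS ≈ 1, unlike the untransformed distributions seen by Likelihood, Fisher, and BDT. TMVA's actual transform is whatever the method's VarTransform option configures; the sketch below only illustrates plain z-score standardization, which produces the same mean-0 / RMS-1 property:

```python
import statistics

def standardize(xs):
    # z-score transform: subtract the mean, divide by the (population) std,
    # so the output has mean 0 and RMS 1 -- the property visible in the
    # TFHandler_DNN_CPU tables.
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs]

z = standardize([1.0, 2.0, 3.0, 4.0])
```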
Factory                  : Evaluate classifier: PyKeras
                         : 
PyKeras                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_PyKeras        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
                         : 
                         : Evaluation results ranked by best signal efficiency and purity (area)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet       MVA                       
                         : Name:         Method:          ROC-integ
                         : dataset       DNN_CPU        : 0.759
                         : dataset       BDT            : 0.754
                         : dataset       PyKeras        : 0.752
                         : dataset       Likelihood     : 0.699
                         : dataset       Fisher         : 0.642
                         : -------------------------------------------------------------------------------------------------------------------
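The ROC-integ column is the area under the ROC curve (signal efficiency versus background efficiency). A minimal trapezoidal-rule sketch, fed with the five DNN_CPU test-sample points available in this log (the @B working points plus the trivial endpoints), gives a coarse approximation of that area; it comes out near 0.72 rather than the reported 0.759 because only five points are used:

```python
def roc_auc(fpr, tpr):
    # Trapezoidal area under the ROC curve from (FPR, TPR) points.
    pairs = sorted(zip(fpr, tpr))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# DNN_CPU test-sample signal efficiencies at B = 0.01, 0.10, 0.30
# (from the overtraining table below), plus the (0,0) and (1,1) endpoints.
auc = roc_auc([0.0, 0.01, 0.10, 0.30, 1.0],
              [0.0, 0.125, 0.404, 0.673, 1.0])
```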
                         : 
                         : Testing efficiency compared to training efficiency (overtraining check)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet              MVA              Signal efficiency: from test sample (from training sample) 
                         : Name:                Method:          @B=0.01             @B=0.10            @B=0.30   
                         : -------------------------------------------------------------------------------------------------------------------
                         : dataset              DNN_CPU        : 0.125 (0.145)       0.404 (0.452)      0.673 (0.714)
                         : dataset              BDT            : 0.098 (0.099)       0.393 (0.402)      0.657 (0.681)
                         : dataset              PyKeras        : 0.097 (0.111)       0.400 (0.414)      0.658 (0.659)
                         : dataset              Likelihood     : 0.070 (0.075)       0.356 (0.363)      0.581 (0.597)
                         : dataset              Fisher         : 0.015 (0.015)       0.121 (0.131)      0.487 (0.506)
                         : -------------------------------------------------------------------------------------------------------------------
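In the overtraining table above, a test efficiency noticeably below the training efficiency (in parentheses) at the same background level hints at overtraining. A hedged sketch of such a check, using the @B=0.10 column from the table and an arbitrary 5% relative-drop threshold (an illustration, not a TMVA criterion):

```python
# Signal efficiencies at B=0.10 as (test, train), copied from the table above.
eff = {
    "DNN_CPU":    (0.404, 0.452),
    "BDT":        (0.393, 0.402),
    "PyKeras":    (0.400, 0.414),
    "Likelihood": (0.356, 0.363),
    "Fisher":     (0.121, 0.131),
}

def overtraining_flags(eff, max_rel_drop=0.05):
    # Flag methods whose test efficiency falls more than max_rel_drop
    # (relative) below their training efficiency.
    return [m for m, (test, train) in eff.items()
            if (train - test) / train > max_rel_drop]

flagged = overtraining_flags(eff)
```

With these numbers, DNN_CPU (relative drop ≈ 10.6%) and Fisher (≈ 7.6%) exceed the 5% threshold, while BDT, PyKeras, and Likelihood stay below it.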
                         : 
Dataset:dataset          : Created tree 'TestTree' with 6000 events
                         : 
Dataset:dataset          : Created tree 'TrainTree' with 14000 events
                         : 
Factory                  : Thank you for using TMVA!
                         : For citation information, please visit: http://tmva.sf.net/citeTMVA.html