******************************************************************************
*Tree    :sig_tree  : tree                                                   *
*Entries :    10000 : Total =         1177229 bytes  File  Size =     785298 *
*        :          : Tree compression factor =   1.48                       *
******************************************************************************
*Br    0 :Type      : Type/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =        307 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression= 130.54     *
*............................................................................*
*Br    1 :lepton_pT : lepton_pT/F                                            *
*Entries :    10000 : Total  Size=      40581 bytes  File Size  =      30464 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.32     *
*............................................................................*
*Br    2 :lepton_eta : lepton_eta/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =      28650 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.40     *
*............................................................................*
*Br    3 :lepton_phi : lepton_phi/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =      30508 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br    4 :missing_energy_magnitude : missing_energy_magnitude/F              *
*Entries :    10000 : Total  Size=      40656 bytes  File Size  =      35749 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.12     *
*............................................................................*
*Br    5 :missing_energy_phi : missing_energy_phi/F                          *
*Entries :    10000 : Total  Size=      40626 bytes  File Size  =      36766 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.09     *
*............................................................................*
*Br    6 :jet1_pt   : jet1_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      32298 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.24     *
*............................................................................*
*Br    7 :jet1_eta  : jet1_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      28467 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.41     *
*............................................................................*
*Br    8 :jet1_phi  : jet1_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30399 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.32     *
*............................................................................*
*Br    9 :jet1_b-tag : jet1_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       5087 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   7.88     *
*............................................................................*
*Br   10 :jet2_pt   : jet2_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      31561 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.27     *
*............................................................................*
*Br   11 :jet2_eta  : jet2_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      28616 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.40     *
*............................................................................*
*Br   12 :jet2_phi  : jet2_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30547 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br   13 :jet2_b-tag : jet2_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       5031 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   7.97     *
*............................................................................*
*Br   14 :jet3_pt   : jet3_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      30642 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br   15 :jet3_eta  : jet3_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      28955 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.38     *
*............................................................................*
*Br   16 :jet3_phi  : jet3_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30433 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.32     *
*............................................................................*
*Br   17 :jet3_b-tag : jet3_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       4879 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   8.22     *
*............................................................................*
*Br   18 :jet4_pt   : jet4_pt/F                                              *
*Entries :    10000 : Total  Size=      40571 bytes  File Size  =      29189 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.37     *
*............................................................................*
*Br   19 :jet4_eta  : jet4_eta/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      29311 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.37     *
*............................................................................*
*Br   20 :jet4_phi  : jet4_phi/F                                             *
*Entries :    10000 : Total  Size=      40576 bytes  File Size  =      30525 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.31     *
*............................................................................*
*Br   21 :jet4_b-tag : jet4_b-tag/F                                          *
*Entries :    10000 : Total  Size=      40586 bytes  File Size  =       4725 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   8.48     *
*............................................................................*
*Br   22 :m_jj      : m_jj/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =      34991 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.15     *
*............................................................................*
*Br   23 :m_jjj     : m_jjj/F                                                *
*Entries :    10000 : Total  Size=      40561 bytes  File Size  =      34460 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
*Br   24 :m_lv      : m_lv/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =      32232 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.24     *
*............................................................................*
*Br   25 :m_jlv     : m_jlv/F                                                *
*Entries :    10000 : Total  Size=      40561 bytes  File Size  =      34598 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
*Br   26 :m_bb      : m_bb/F                                                 *
*Entries :    10000 : Total  Size=      40556 bytes  File Size  =      35012 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.14     *
*............................................................................*
*Br   27 :m_wbb     : m_wbb/F                                                *
*Entries :    10000 : Total  Size=      40561 bytes  File Size  =      34493 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
*Br   28 :m_wwbb    : m_wwbb/F                                               *
*Entries :    10000 : Total  Size=      40566 bytes  File Size  =      34410 *
*Baskets :        1 : Basket Size=    1500672 bytes  Compression=   1.16     *
*............................................................................*
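The Compression column in the branch table above is roughly Total Size divided by File Size. Note how the discrete-valued `jet*_b-tag` branches compress about 8x, while the continuous kinematic variables barely reach 1.3x. A minimal sketch of that arithmetic, using numbers copied from the table (the naive ratio comes out slightly higher than ROOT's printed figure, likely because ROOT computes its Compression on the zipped basket payload rather than the full total size):

```python
# Naive per-branch compression estimate from the table above.
# (total bytes, bytes on disk) copied from the Total Size / File Size columns.
branches = {
    "lepton_pT":  (40581, 30464),   # continuous kinematic variable
    "jet1_b-tag": (40586, 5087),    # discrete b-tag value, compresses well
    "jet2_b-tag": (40586, 5031),
}

def naive_ratio(total, on_disk):
    return total / on_disk

for name, (total, disk) in branches.items():
    print(f"{name:12s} ~{naive_ratio(total, disk):.2f}x")
```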
DataSetInfo              : [dataset] : Added class "Signal"
                         : Add Tree sig_tree of type Signal with 10000 events
DataSetInfo              : [dataset] : Added class "Background"
                         : Add Tree bkg_tree of type Background with 10000 events
Factory                  : Booking method: Likelihood
                         : 
Factory                  : Booking method: Fisher
                         : 
Factory                  : Booking method: BDT
                         : 
                         : Rebuilding Dataset dataset
                         : Building event vectors for type 2 Signal
                         : Dataset[dataset] :  create input formulas for tree sig_tree
                         : Building event vectors for type 2 Background
                         : Dataset[dataset] :  create input formulas for tree bkg_tree
DataSetFactory           : [dataset] : Number of events in input trees
                         : 
                         : 
                         : Number of training and testing events
                         : ---------------------------------------------------------------------------
                         : Signal     -- training events            : 7000
                         : Signal     -- testing events             : 3000
                         : Signal     -- training and testing events: 10000
                         : Background -- training events            : 7000
                         : Background -- testing events             : 3000
                         : Background -- training and testing events: 10000
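The 7000/3000 split per class shown above is the kind of result produced by TMVA's `DataLoader::PrepareTrainingAndTestTree`. The exact option string used for this run is not shown in the log, so the one below is an assumption for illustration; only the resulting counts are known:

```python
# Sketch: derive the per-class train/test counts from a TMVA-style
# PrepareTrainingAndTestTree option string (the exact string used for this
# run is an assumption -- only the resulting 7000/3000 split is known).
opts = "nTrain_Signal=7000:nTrain_Background=7000:SplitMode=Random:NormMode=NumEvents"
parsed = dict(kv.split("=", 1) for kv in opts.split(":"))

n_total = 10000                       # events per class in the input trees
n_train = int(parsed["nTrain_Signal"])
n_test = n_total - n_train
print(n_train, n_test)                # 7000 3000
```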
                         : 
DataSetInfo              : Correlation matrix (Signal):
                         : ----------------------------------------------------------------
                         :             m_jj   m_jjj    m_lv   m_jlv    m_bb   m_wbb  m_wwbb
                         :    m_jj:  +1.000  +0.774  -0.004  +0.096  +0.024  +0.512  +0.533
                         :   m_jjj:  +0.774  +1.000  -0.010  +0.073  +0.152  +0.674  +0.668
                         :    m_lv:  -0.004  -0.010  +1.000  +0.121  -0.027  +0.009  +0.021
                         :   m_jlv:  +0.096  +0.073  +0.121  +1.000  +0.313  +0.544  +0.552
                         :    m_bb:  +0.024  +0.152  -0.027  +0.313  +1.000  +0.445  +0.333
                         :   m_wbb:  +0.512  +0.674  +0.009  +0.544  +0.445  +1.000  +0.915
                         :  m_wwbb:  +0.533  +0.668  +0.021  +0.552  +0.333  +0.915  +1.000
                         : ----------------------------------------------------------------
DataSetInfo              : Correlation matrix (Background):
                         : ----------------------------------------------------------------
                         :             m_jj   m_jjj    m_lv   m_jlv    m_bb   m_wbb  m_wwbb
                         :    m_jj:  +1.000  +0.808  +0.022  +0.150  +0.028  +0.407  +0.415
                         :   m_jjj:  +0.808  +1.000  +0.041  +0.206  +0.177  +0.569  +0.547
                         :    m_lv:  +0.022  +0.041  +1.000  +0.139  +0.037  +0.081  +0.085
                         :   m_jlv:  +0.150  +0.206  +0.139  +1.000  +0.309  +0.607  +0.557
                         :    m_bb:  +0.028  +0.177  +0.037  +0.309  +1.000  +0.625  +0.447
                         :   m_wbb:  +0.407  +0.569  +0.081  +0.607  +0.625  +1.000  +0.884
                         :  m_wwbb:  +0.415  +0.547  +0.085  +0.557  +0.447  +0.884  +1.000
                         : ----------------------------------------------------------------
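The entries in the two matrices above are sample Pearson correlation coefficients; the diagonal is +1 by construction, and the strongest off-diagonal correlation is m_wbb vs m_wwbb (+0.915 for signal, +0.884 for background). A self-contained sketch of the quantity being tabulated, on toy values rather than the actual dataset:

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation, the quantity tabulated in the matrices above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

a = [1.0, 2.0, 3.0, 4.0]
print(pearson(a, a))                      # +1.0: a variable with itself
print(pearson(a, [2.0, 4.0, 6.0, 8.0]))   # +1.0: a perfect linear relation
```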
DataSetFactory           : [dataset] :  
                         : 
Factory                  : Booking method: DNN_CPU
                         : 
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     <none>
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:InputLayout=1|1|7:BatchLayout=1|128|7:Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "G" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     InputLayout: "1|1|7" [The Layout of the input]
                         :     BatchLayout: "1|128|7" [The Layout of the batch]
                         :     Layout: "DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIER" [Weight initialization strategy]
                         :     Architecture: "CPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
                         :     ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
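Two mechanics of the option block above are worth unpacking: the `TrainingStrategy` value is itself a comma-separated key=value list, and `ValidationSize=20%` is carved out of the 14000 available training events (7000 per class). A sketch of both (integer arithmetic only):

```python
# Parse the TrainingStrategy sub-options (abbreviated from the log above)
# and compute the ValidationSize=20% split of the 14000 training events.
strategy = ("LearningRate=1e-3,Momentum=0.9,ConvergenceSteps=10,BatchSize=128,"
            "TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
            "Optimizer=ADAM,ADAM_beta1=0.9,ADAM_beta2=0.999,ADAM_eps=1.E-7")
params = dict(kv.split("=", 1) for kv in strategy.split(","))

n_train_total = 14000                       # 7000 signal + 7000 background
n_val = n_train_total * 20 // 100           # ValidationSize=20%
n_fit = n_train_total - n_val
print(params["Optimizer"], n_fit, n_val)    # ADAM 11200 2800
```

The 11200/2800 figures match the "Using 11200 events for training and 2800 for testing" line that appears later in the log.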
DNN_CPU                  : [dataset] : Create Transformation "G" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'm_jj' <---> Output : variable 'm_jj'
                         : Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
                         : Input : variable 'm_lv' <---> Output : variable 'm_lv'
                         : Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
                         : Input : variable 'm_bb' <---> Output : variable 'm_bb'
                         : Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
                         : Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
                         : Will now use the CPU architecture with BLAS and IMT support!
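The transformation "G" created above is a Gaussianisation (`VarTransform=G`): each variable is mapped through its empirical CDF and then through the inverse standard-normal CDF, which is why the transformed variables later show mean ~0 and RMS ~1. The sketch below is a simplified rank-based version of the idea, not TMVA's exact implementation (TMVA's tail handling differs, which is where the clipped min/max values in the later table come from):

```python
from statistics import NormalDist

def gaussianise(values):
    """Rank-based Gaussianisation sketch: empirical CDF followed by the
    inverse standard-normal CDF. Simplified relative to TMVA's transform."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    z = [0.0] * n
    for rank, i in enumerate(order):
        z[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return z

z = gaussianise([5.0, 1.0, 3.0, 2.0, 4.0])
print([round(v, 3) for v in z])   # symmetric around zero, mean ~0
```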
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 64)                512       
                                                                 
 dense_1 (Dense)             (None, 64)                4160      
                                                                 
 dense_2 (Dense)             (None, 64)                4160      
                                                                 
 dense_3 (Dense)             (None, 64)                4160      
                                                                 
 dense_4 (Dense)             (None, 2)                 130       
                                                                 
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
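The parameter counts in the Keras summary above follow directly from the Dense-layer formula: `n_in * n_out` weights plus `n_out` biases. The first layer takes the 7 mass variables; the last layer has 2 outputs (a two-class output head, unlike the TMVA DNN's single linear output):

```python
# Verify the parameter counts in the Keras summary above.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out   # weight matrix plus bias vector

layers = [(7, 64), (64, 64), (64, 64), (64, 64), (64, 2)]
counts = [dense_params(i, o) for i, o in layers]
print(counts, sum(counts))   # [512, 4160, 4160, 4160, 130] 13122
```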
Factory                  : Booking method: PyKeras
                         : 
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         :  Loading Keras Model 
                         : Loaded model from file: model_higgs.h5
Factory                  : Train all methods
Factory                  : [dataset] : Create Transformation "I" with events from all classes.
                         : 
                         : Transformation, Variable selection : 
                         : Input : variable 'm_jj' <---> Output : variable 'm_jj'
                         : Input : variable 'm_jjj' <---> Output : variable 'm_jjj'
                         : Input : variable 'm_lv' <---> Output : variable 'm_lv'
                         : Input : variable 'm_jlv' <---> Output : variable 'm_jlv'
                         : Input : variable 'm_bb' <---> Output : variable 'm_bb'
                         : Input : variable 'm_wbb' <---> Output : variable 'm_wbb'
                         : Input : variable 'm_wwbb' <---> Output : variable 'm_wwbb'
TFHandler_Factory        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0318    0.65629   [    0.15106     16.132 ]
                         :    m_jjj:     1.0217    0.37420   [    0.34247     8.9401 ]
                         :     m_lv:     1.0507    0.16678   [    0.26679     3.6823 ]
                         :    m_jlv:     1.0161    0.40288   [    0.38441     6.5831 ]
                         :     m_bb:    0.97707    0.53961   [   0.080986     8.2551 ]
                         :    m_wbb:     1.0358    0.36856   [    0.38503     6.4013 ]
                         :   m_wwbb:    0.96265    0.31608   [    0.43228     4.5350 ]
                         : -----------------------------------------------------------
                         : Ranking input variables (method unspecific)...
IdTransformation         : Ranking result (top variable is best ranked)
                         : -------------------------------
                         : Rank : Variable  : Separation
                         : -------------------------------
                         :    1 : m_bb      : 9.511e-02
                         :    2 : m_wbb     : 4.268e-02
                         :    3 : m_wwbb    : 4.178e-02
                         :    4 : m_jjj     : 2.825e-02
                         :    5 : m_jlv     : 1.999e-02
                         :    6 : m_jj      : 3.834e-03
                         :    7 : m_lv      : 3.699e-03
                         : -------------------------------
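The "Separation" used for the ranking above is TMVA's separation integral, ⟨S²⟩ = ½ ∫ (ŷS − ŷB)² / (ŷS + ŷB) dy, which is 0 for identical signal and background distributions and 1 for fully disjoint ones; m_bb at 9.5e-02 is the most discriminating single variable here. A binned sketch on toy histograms (not the actual m_bb distributions):

```python
# Binned separation <S^2> = 0.5 * sum_bins (s_i - b_i)^2 / (s_i + b_i) * width
# for signal/background PDFs normalised to unit area.
def separation(sig, bkg, bin_width=1.0):
    total = 0.0
    for s, b in zip(sig, bkg):
        if s + b > 0:
            total += (s - b) ** 2 / (s + b)
    return 0.5 * total * bin_width

same = [0.25, 0.25, 0.25, 0.25]
print(separation(same, same))                          # 0.0: no discrimination
print(separation([0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5]))  # 1.0: fully disjoint
```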
Factory                  : Train method: Likelihood for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ Likelihood ] :
                         : 
                         : --- Short description:
                         : 
                         : The maximum-likelihood classifier models the data with probability
                         : density functions (PDFs) reproducing the signal and background
                         : distributions of the input variables. Correlations among the
                         : variables are ignored.
                         : 
                         : --- Performance optimisation:
                         : 
                         : Decorrelated input variables are required for good performance
                         : (a PCA transformation via the option "VarTransform=Decorrelate"
                         : may be tried). Irreducible non-linear correlations may be reduced
                         : by precombining strongly correlated input variables, or by simply
                         : removing one of the variables.
                         : 
                         : --- Performance tuning via configuration options:
                         : 
                         : High-fidelity PDF estimates are mandatory, i.e., sufficient training
                         : statistics are required to populate the tails of the distributions.
                         : The default spline or KDE kernel parameters are unlikely to fit the
                         : data satisfactorily, so the user is advised to tune the events-per-bin
                         : and smoothing options in the spline case individually per variable.
                         : If the KDE kernel is used, the adaptive Gaussian kernel may lead to
                         : artefacts, so please always also try the non-adaptive one.
                         : 
                         : All tuning parameters must be adjusted individually for each input
                         : variable!
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
                         : Filling reference histograms
                         : Building PDF out of reference histograms
                         : Elapsed time for training with 14000 events: 0.12 sec         
Likelihood               : [dataset] : Evaluation of Likelihood on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.0215 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Likelihood.class.C
                         : Higgs_ClassificationOutput.root:/dataset/Method_Likelihood/Likelihood
Factory                  : Training finished
                         : 
Factory                  : Train method: Fisher for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ Fisher ] :
                         : 
                         : --- Short description:
                         : 
                         : Fisher discriminants select events by distinguishing the mean
                         : values of the signal and background distributions in a
                         : transformed variable space where linear correlations are removed.
                         : 
                         :    (More precisely: the "linear discriminator" determines
                         :     an axis in the (correlated) hyperspace of the input
                         :     variables such that, when projecting the output classes
                         :     (signal and background) upon this axis, they are pushed
                         :     as far as possible away from each other, while events
                         :     of the same class are confined to a close vicinity. The
                         :     linearity property of this classifier is reflected in the
                         :     metric with which "far apart" and "close vicinity" are
                         :     determined: the covariance matrix of the discriminating
                         :     variable space.)
                         : 
                         : --- Performance optimisation:
                         : 
                         : Optimal performance for Fisher discriminants is obtained for
                         : linearly correlated Gaussian-distributed variables. Any deviation
                         : from this ideal reduces the achievable separation power. In
                         : particular, no discrimination at all is achieved for a variable
                         : that has the same sample mean for signal and background, even if
                         : the shapes of the distributions are very different. Thus, Fisher
                         : discriminants often benefit from suitable transformations of the
                         : input variables. For example, if a variable x in [-1,1] has a
                         : parabolic signal distribution and a uniform background
                         : distribution, the mean value is zero in both cases, leading
                         : to no separation. The simple transformation x -> |x| renders this
                         : variable powerful for use in a Fisher discriminant.
                         : 
                         : --- Performance tuning via configuration options:
                         : 
                         : <None>
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
Fisher                   : Results for Fisher coefficients:
                         : -----------------------
                         : Variable:  Coefficient:
                         : -----------------------
                         :     m_jj:       -0.051
                         :    m_jjj:       +0.192
                         :     m_lv:       +0.045
                         :    m_jlv:       +0.059
                         :     m_bb:       -0.211
                         :    m_wbb:       +0.549
                         :   m_wwbb:       -0.778
                         : (offset):       +0.136
                         : -----------------------
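The Fisher response is linear in the inputs: F(x) = offset + Σᵢ cᵢ xᵢ, with the coefficients tabulated above. A sketch applying them to a single hypothetical event (the input values of 1.0 are illustrative only; each mass variable happens to sit near 1.0 after the normalisation shown in the TFHandler_Factory table):

```python
# Apply the Fisher coefficients from the table above to one toy event.
coeffs = {"m_jj": -0.051, "m_jjj": +0.192, "m_lv": +0.045, "m_jlv": +0.059,
          "m_bb": -0.211, "m_wbb": +0.549, "m_wwbb": -0.778}
offset = +0.136

def fisher_response(event):
    """Linear Fisher discriminant: offset + sum of coefficient * value."""
    return offset + sum(coeffs[name] * value for name, value in event.items())

event = {name: 1.0 for name in coeffs}      # hypothetical event near the mean
print(round(fisher_response(event), 3))     # -0.059
```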
                         : Elapsed time for training with 14000 events: 0.0122 sec         
Fisher                   : [dataset] : Evaluation of Fisher on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.00437 sec       
                         : <CreateMVAPdfs> Separation from histogram (PDF): 0.090 (0.000)
                         : Dataset[dataset] : Evaluation of Fisher on training sample
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_Fisher.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: BDT for Classification
                         : 
BDT                      : #events: (reweighted) sig: 7000 bkg: 7000
                         : #events: (unweighted) sig: 7000 bkg: 7000
                         : Training 200 Decision Trees ... patience please
                         : Elapsed time for training with 14000 events: 0.687 sec         
BDT                      : [dataset] : Evaluation of BDT on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.115 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_BDT.class.C
                         : Higgs_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory                  : Training finished
                         : 
Factory                  : Train method: DNN_CPU for Classification
                         : 
                         : Preparing the Gaussian transformation...
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:  0.0043655    0.99836   [    -3.2801     5.7307 ]
                         :    m_jjj:  0.0044371    0.99827   [    -3.2805     5.7307 ]
                         :     m_lv:  0.0053380     1.0003   [    -3.2810     5.7307 ]
                         :    m_jlv:  0.0044637    0.99837   [    -3.2803     5.7307 ]
                         :     m_bb:  0.0043676    0.99847   [    -3.2797     5.7307 ]
                         :    m_wbb:  0.0042343    0.99744   [    -3.2803     5.7307 ]
                         :   m_wwbb:  0.0046014    0.99948   [    -3.2802     5.7307 ]
                         : -----------------------------------------------------------
                         : Start of deep neural network training on CPU using MT,  nthreads = 1
                         : 
                         : *****   Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 5  Input = ( 1, 1, 7 )  Batch size = 128  Loss function = C
   Layer 0   DENSE Layer:   ( Input =     7 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 1   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 2   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 3   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   128 ,    64 )   Activation Function = Tanh
   Layer 4   DENSE Layer:   ( Input =    64 , Width =     1 )  Output = (  1 ,   128 ,     1 )   Activation Function = Identity
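The printed architecture is a plain multilayer perceptron: 7 inputs feed four tanh dense layers of width 64, followed by a width-1 linear output, evaluated on batches of 128 events. A pure-Python shape check of that forward pass (placeholder weights, only the shapes matter):

```python
# Forward-pass shape check for the printed network: 7 -> 64 -> 64 -> 64 -> 64 -> 1
# on a batch of 128 events. Weights are random placeholders.
import math, random

random.seed(0)

def dense(x, n_in, n_out, act):
    w = [[random.gauss(0, 0.1) for _ in range(n_out)] for _ in range(n_in)]
    return [[act(sum(xi[i] * w[i][j] for i in range(n_in)))
             for j in range(n_out)] for xi in x]

tanh, identity = math.tanh, lambda v: v
batch = [[random.gauss(0, 1) for _ in range(7)] for _ in range(128)]

h = batch
for n_in, n_out, act in [(7, 64, tanh), (64, 64, tanh),
                         (64, 64, tanh), (64, 64, tanh), (64, 1, identity)]:
    h = dense(h, n_in, n_out, act)

print(len(h), len(h[0]))  # 128 1 -- matches Output = ( 1, 128, 1 ) above
```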
                         : Using 11200 events for training and 2800 for testing
                         : Compute initial loss  on the validation data 
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.896085
                         : --------------------------------------------------------------
                         :      Epoch |   Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s Conv. Steps
                         : --------------------------------------------------------------
                         :    Start epoch iteration ...
                         :          1 Minimum Test error found - save the configuration 
                         :          1 |     0.652761    0.614589    0.594811   0.0472326     20336.8           0
                         :          2 Minimum Test error found - save the configuration 
                         :          2 |     0.597276     0.59139    0.591981   0.0471332     20438.7           0
                         :          3 Minimum Test error found - save the configuration 
                         :          3 |     0.581473    0.586432    0.592466   0.0472033     20423.2           0
                         :          4 Minimum Test error found - save the configuration 
                         :          4 |     0.571653    0.582758    0.594439   0.0473048     20353.3           0
                         :          5 |     0.568974    0.586554    0.592709   0.0471157     20410.8           1
                         :          6 |     0.567846     0.58684    0.592473   0.0471427     20420.7           2
                         :          7 |     0.563311    0.584987    0.593733   0.0472087     20376.1           3
                         :          8 |     0.557842    0.583297    0.594137   0.0472522     20362.6           4
                         :          9 |     0.558408    0.583813    0.596561    0.048538     20320.3           5
                         :         10 Minimum Test error found - save the configuration 
                         :         10 |     0.554451    0.578108    0.594736   0.0475148     20350.1           0
                         :         11 |     0.554617    0.579852    0.595549   0.0473905     20315.3           1
                         :         12 |     0.552566    0.581455    0.595653   0.0473829     20311.2           2
                         :         13 |     0.549306    0.585172    0.596106   0.0473874     20294.6           3
                         :         14 |     0.548546    0.584994    0.598597   0.0474048     20203.5           4
                         :         15 |     0.546933    0.587134    0.596192   0.0474823     20294.9           5
                         :         16 |     0.545405    0.578761     0.59532   0.0474842     20327.3           6
                         :         17 |     0.545771    0.586449    0.595697   0.0474358     20311.5           7
                         :         18 |     0.541995    0.583504    0.597071   0.0475056     20263.3           8
                         :         19 |     0.540855    0.588868     0.59664   0.0474981     20278.9           9
                         :         20 |      0.53864      0.5891    0.597112   0.0475503     20263.4          10
                         : 
                         : Elapsed time for training with 14000 events: 12 sec         
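The "Conv. Steps" column in the table above counts epochs since the last validation-error minimum: it resets to 0 whenever a new minimum is found (and the configuration is saved), otherwise it increments, and it reaches 10 at the final epoch. That bookkeeping can be re-traced from the printed Val. Err. column:

```python
# Re-trace the "Conv. Steps" column: reset on a new minimum validation
# error ("Minimum Test error found - save the configuration"), otherwise
# increment. Values are the Val. Err. column from the table above.
val_err = [0.614589, 0.591390, 0.586432, 0.582758, 0.586554,
           0.586840, 0.584987, 0.583297, 0.583813, 0.578108,
           0.579852, 0.581455, 0.585172, 0.584994, 0.587134,
           0.578761, 0.586449, 0.583504, 0.588868, 0.589100]

best, steps, trace = float("inf"), 0, []
for err in val_err:
    if err < best:
        best, steps = err, 0   # new minimum: save and reset the counter
    else:
        steps += 1
    trace.append(steps)

print(trace[-1], best)  # counter ends at 10; best Val. Err. is 0.578108 (epoch 10)
```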
                         : Evaluate deep neural network on CPU using batches with size = 128
                         : 
DNN_CPU                  : [dataset] : Evaluation of DNN_CPU on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.249 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: PyKeras for Classification
                         : 
                         : 
                         : ================================================================
                         : H e l p   f o r   M V A   m e t h o d   [ PyKeras ] :
                         : 
                         : Keras is a high-level API for the Theano and TensorFlow packages.
                         : This method wraps the training and prediction steps of the Keras
                         : Python package for TMVA, so that data loading, preprocessing and
                         : evaluation can be done within the TMVA system. To use this Keras
                         : interface, you have to generate a model with Keras first. Then,
                         : this model can be loaded and trained in TMVA.
                         : 
                         : <Suppress this message by specifying "!H" in the booking option>
                         : ================================================================
                         : 
                         : Split TMVA training data in 11200 training events and 2800 validation events
                         : Training Model Summary
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 64)                512       
                                                                 
 dense_1 (Dense)             (None, 64)                4160      
                                                                 
 dense_2 (Dense)             (None, 64)                4160      
                                                                 
 dense_3 (Dense)             (None, 64)                4160      
                                                                 
 dense_4 (Dense)             (None, 2)                 130       
                                                                 
=================================================================
Total params: 13122 (51.26 KB)
Trainable params: 13122 (51.26 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
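The parameter counts in the summary follow from Dense-layer arithmetic: a layer with `n_in` inputs and `n_out` units has `n_in * n_out` weights plus `n_out` biases. A quick check, assuming the 7 input variables feed the first layer:

```python
# Dense layer parameter count: n_in * n_out weights + n_out biases.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

layers = [(7, 64), (64, 64), (64, 64), (64, 64), (64, 2)]
counts = [dense_params(i, o) for i, o in layers]
print(counts, sum(counts))  # [512, 4160, 4160, 4160, 130] 13122
```

This reproduces the per-layer "Param #" column and the total of 13122 trainable parameters.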
                         : Option SaveBestOnly: Only model weights with smallest validation loss will be stored
Epoch 1/20
Epoch 1: val_loss improved from inf to 0.64369, saving model to trained_model_higgs.h5
112/112 [==============================] - 1s 5ms/step - loss: 0.6652 - accuracy: 0.5913 - val_loss: 0.6437 - val_accuracy: 0.6304
Epoch 2/20
Epoch 2: val_loss improved from 0.64369 to 0.62406, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6335 - accuracy: 0.6405 - val_loss: 0.6241 - val_accuracy: 0.6475
Epoch 3/20
Epoch 3: val_loss improved from 0.62406 to 0.60965, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6162 - accuracy: 0.6581 - val_loss: 0.6097 - val_accuracy: 0.6575
Epoch 4/20
Epoch 4: val_loss improved from 0.60965 to 0.60824, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.6055 - accuracy: 0.6700 - val_loss: 0.6082 - val_accuracy: 0.6607
Epoch 5/20
Epoch 5: val_loss did not improve from 0.60824
112/112 [==============================] - 0s 3ms/step - loss: 0.6052 - accuracy: 0.6669 - val_loss: 0.6279 - val_accuracy: 0.6479
Epoch 6/20
Epoch 6: val_loss improved from 0.60824 to 0.59808, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5991 - accuracy: 0.6723 - val_loss: 0.5981 - val_accuracy: 0.6714
Epoch 7/20
Epoch 7: val_loss improved from 0.59808 to 0.59671, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5947 - accuracy: 0.6762 - val_loss: 0.5967 - val_accuracy: 0.6718
Epoch 8/20
Epoch 8: val_loss did not improve from 0.59671
112/112 [==============================] - 0s 3ms/step - loss: 0.5928 - accuracy: 0.6780 - val_loss: 0.6051 - val_accuracy: 0.6689
Epoch 9/20
Epoch 9: val_loss did not improve from 0.59671
112/112 [==============================] - 0s 3ms/step - loss: 0.5885 - accuracy: 0.6804 - val_loss: 0.5975 - val_accuracy: 0.6796
Epoch 10/20
Epoch 10: val_loss did not improve from 0.59671
112/112 [==============================] - 0s 3ms/step - loss: 0.5902 - accuracy: 0.6791 - val_loss: 0.6016 - val_accuracy: 0.6614
Epoch 11/20
Epoch 11: val_loss did not improve from 0.59671
112/112 [==============================] - 0s 3ms/step - loss: 0.5852 - accuracy: 0.6823 - val_loss: 0.6123 - val_accuracy: 0.6586
Epoch 12/20
Epoch 12: val_loss did not improve from 0.59671
112/112 [==============================] - 0s 2ms/step - loss: 0.5877 - accuracy: 0.6787 - val_loss: 0.5999 - val_accuracy: 0.6711
Epoch 13/20
Epoch 13: val_loss improved from 0.59671 to 0.59507, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5818 - accuracy: 0.6829 - val_loss: 0.5951 - val_accuracy: 0.6768
Epoch 14/20
Epoch 14: val_loss improved from 0.59507 to 0.59141, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5796 - accuracy: 0.6838 - val_loss: 0.5914 - val_accuracy: 0.6764
Epoch 15/20
Epoch 15: val_loss improved from 0.59141 to 0.58574, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5748 - accuracy: 0.6892 - val_loss: 0.5857 - val_accuracy: 0.6832
Epoch 16/20
Epoch 16: val_loss did not improve from 0.58574
112/112 [==============================] - 0s 2ms/step - loss: 0.5774 - accuracy: 0.6863 - val_loss: 0.5901 - val_accuracy: 0.6775
Epoch 17/20
Epoch 17: val_loss improved from 0.58574 to 0.58572, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5746 - accuracy: 0.6877 - val_loss: 0.5857 - val_accuracy: 0.6807
Epoch 18/20
Epoch 18: val_loss did not improve from 0.58572
112/112 [==============================] - 0s 2ms/step - loss: 0.5742 - accuracy: 0.6899 - val_loss: 0.5883 - val_accuracy: 0.6814
Epoch 19/20
Epoch 19: val_loss improved from 0.58572 to 0.58181, saving model to trained_model_higgs.h5
112/112 [==============================] - 0s 3ms/step - loss: 0.5750 - accuracy: 0.6873 - val_loss: 0.5818 - val_accuracy: 0.6836
Epoch 20/20
Epoch 20: val_loss did not improve from 0.58181
112/112 [==============================] - 0s 2ms/step - loss: 0.5746 - accuracy: 0.6876 - val_loss: 0.5896 - val_accuracy: 0.6807
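The SaveBestOnly option used above mirrors Keras's ModelCheckpoint with save_best_only=True: the weights file trained_model_higgs.h5 is rewritten only when val_loss beats the best value seen so far. The "saving model" decisions in the log can be re-traced from the per-epoch validation losses:

```python
# Re-trace the "val_loss improved ... saving model" decisions: with
# save-best-only checkpointing, the file is rewritten only when val_loss
# improves on the best value so far. Losses are the per-epoch values above.
val_loss = [0.64369, 0.62406, 0.60965, 0.60824, 0.6279,
            0.59808, 0.59671, 0.6051, 0.5975, 0.6016,
            0.6123, 0.5999, 0.59507, 0.59141, 0.58574,
            0.5901, 0.58572, 0.5883, 0.58181, 0.5896]

best, saved = float("inf"), []
for epoch, vl in enumerate(val_loss, start=1):
    if vl < best:
        best = vl
        saved.append(epoch)    # the epochs at which the model file is rewritten

print(saved)  # [1, 2, 3, 4, 6, 7, 13, 14, 15, 17, 19]
```

These are exactly the epochs where the log reports "val_loss improved"; the model on disk at the end corresponds to epoch 19 (val_loss 0.58181), not epoch 20.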
                         : Getting training history for item:0 name = 'loss'
                         : Getting training history for item:1 name = 'accuracy'
                         : Getting training history for item:2 name = 'val_loss'
                         : Getting training history for item:3 name = 'val_accuracy'
                         : Elapsed time for training with 14000 events: 7.26 sec         
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Disabled TF eager execution when evaluating model 
                         :  Loading Keras Model 
                         : Loaded model from file: trained_model_higgs.h5
PyKeras                  : [dataset] : Evaluation of PyKeras on training sample (14000 events)
                         : Elapsed time for evaluation of 14000 events: 0.33 sec       
                         : Creating xml weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
                         : Creating standalone class: dataset/weights/TMVA_Higgs_Classification_PyKeras.class.C
Factory                  : Training finished
                         : 
                         : Ranking input variables (method specific)...
Likelihood               : Ranking result (top variable is best ranked)
                         : -------------------------------------
                         : Rank : Variable  : Delta Separation
                         : -------------------------------------
                         :    1 : m_bb      : 4.061e-02
                         :    2 : m_wbb     : 3.765e-02
                         :    3 : m_wwbb    : 3.119e-02
                         :    4 : m_jj      : -1.589e-03
                         :    5 : m_jjj     : -2.901e-03
                         :    6 : m_lv      : -7.919e-03
                         :    7 : m_jlv     : -8.293e-03
                         : -------------------------------------
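The Likelihood ranking is based on the separation between the normalised signal and background PDFs of each variable, <S^2> = 1/2 * integral of (s(x) - b(x))^2 / (s(x) + b(x)) dx, which is 0 for identical shapes and 1 for fully disjoint ones (TMVA reports the change in separation when the variable is removed, hence "Delta Separation" and the small negative values). A histogram-based sketch of the separation itself:

```python
# Separation <S^2> between two binned, normalised distributions:
# 0 when signal and background shapes coincide, 1 when they never overlap.
def separation(sig, bkg):
    s_tot, b_tot = sum(sig), sum(bkg)
    sep = 0.0
    for s, b in zip(sig, bkg):
        s, b = s / s_tot, b / b_tot    # normalise each histogram to unit area
        if s + b > 0:
            sep += 0.5 * (s - b) ** 2 / (s + b)
    return sep

print(separation([1, 2, 3], [1, 2, 3]))   # 0.0  (identical shapes)
print(separation([1, 1, 0], [0, 0, 1]))   # 1.0  (disjoint)
```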
Fisher                   : Ranking result (top variable is best ranked)
                         : ---------------------------------
                         : Rank : Variable  : Discr. power
                         : ---------------------------------
                         :    1 : m_bb      : 1.279e-02
                         :    2 : m_wwbb    : 9.131e-03
                         :    3 : m_wbb     : 2.668e-03
                         :    4 : m_jlv     : 9.145e-04
                         :    5 : m_jjj     : 1.769e-04
                         :    6 : m_lv      : 6.617e-05
                         :    7 : m_jj      : 6.707e-06
                         : ---------------------------------
BDT                      : Ranking result (top variable is best ranked)
                         : ----------------------------------------
                         : Rank : Variable  : Variable Importance
                         : ----------------------------------------
                         :    1 : m_bb      : 2.089e-01
                         :    2 : m_wwbb    : 1.673e-01
                         :    3 : m_wbb     : 1.568e-01
                         :    4 : m_jlv     : 1.560e-01
                         :    5 : m_jjj     : 1.421e-01
                         :    6 : m_jj      : 1.052e-01
                         :    7 : m_lv      : 6.369e-02
                         : ----------------------------------------
                         : No variable ranking supplied by classifier: DNN_CPU
                         : No variable ranking supplied by classifier: PyKeras
TH1.Print Name  = TrainingHistory_DNN_CPU_trainingError, Entries= 0, Total sum= 11.2386
TH1.Print Name  = TrainingHistory_DNN_CPU_valError, Entries= 0, Total sum= 11.7241
TH1.Print Name  = TrainingHistory_PyKeras_'accuracy', Entries= 0, Total sum= 13.4687
TH1.Print Name  = TrainingHistory_PyKeras_'loss', Entries= 0, Total sum= 11.8757
TH1.Print Name  = TrainingHistory_PyKeras_'val_accuracy', Entries= 0, Total sum= 13.3671
TH1.Print Name  = TrainingHistory_PyKeras_'val_loss', Entries= 0, Total sum= 12.0325
Factory                  : === Destroy and recreate all methods via weight files for testing ===
                         : 
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_Likelihood.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_Fisher.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_BDT.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_DNN_CPU.weights.xml
                         : Reading weight file: dataset/weights/TMVA_Higgs_Classification_PyKeras.weights.xml
Factory                  : Test all methods
Factory                  : Test method: Likelihood for Classification performance
                         : 
Likelihood               : [dataset] : Evaluation of Likelihood on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.0106 sec       
Factory                  : Test method: Fisher for Classification performance
                         : 
Fisher                   : [dataset] : Evaluation of Fisher on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.00286 sec       
                         : Dataset[dataset] : Evaluation of Fisher on testing sample
Factory                  : Test method: BDT for Classification performance
                         : 
BDT                      : [dataset] : Evaluation of BDT on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.0515 sec       
Factory                  : Test method: DNN_CPU for Classification performance
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:   0.017919     1.0069   [    -3.3498     3.4247 ]
                         :    m_jjj:   0.020352     1.0044   [    -3.2831     3.3699 ]
                         :     m_lv:   0.016289    0.99263   [    -3.2339     3.3958 ]
                         :    m_jlv:  -0.018431    0.98242   [    -3.0632     5.7307 ]
                         :     m_bb:  0.0069564    0.98851   [    -2.9734     3.3513 ]
                         :    m_wbb:  -0.010633    0.99340   [    -3.2442     3.2244 ]
                         :   m_wwbb:  -0.012669    0.99259   [    -3.1871     5.7307 ]
                         : -----------------------------------------------------------
DNN_CPU                  : [dataset] : Evaluation of DNN_CPU on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.0998 sec       
Factory                  : Test method: PyKeras for Classification performance
                         : 
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Disabled TF eager execution when evaluating model 
                         :  Loading Keras Model 
                         : Loaded model from file: trained_model_higgs.h5
PyKeras                  : [dataset] : Evaluation of PyKeras on testing sample (6000 events)
                         : Elapsed time for evaluation of 6000 events: 0.188 sec       
Factory                  : Evaluate all methods
Factory                  : Evaluate classifier: Likelihood
                         : 
Likelihood               : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_Likelihood     : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: Fisher
                         : 
Fisher                   : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Also filling probability and rarity histograms (on request)...
TFHandler_Fisher         : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: BDT
                         : 
BDT                      : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_BDT            : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
Factory                  : Evaluate classifier: DNN_CPU
                         : 
DNN_CPU                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:  0.0043655    0.99836   [    -3.2801     5.7307 ]
                         :    m_jjj:  0.0044371    0.99827   [    -3.2805     5.7307 ]
                         :     m_lv:  0.0053380     1.0003   [    -3.2810     5.7307 ]
                         :    m_jlv:  0.0044637    0.99837   [    -3.2803     5.7307 ]
                         :     m_bb:  0.0043676    0.99847   [    -3.2797     5.7307 ]
                         :    m_wbb:  0.0042343    0.99744   [    -3.2803     5.7307 ]
                         :   m_wwbb:  0.0046014    0.99948   [    -3.2802     5.7307 ]
                         : -----------------------------------------------------------
TFHandler_DNN_CPU        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:   0.017919     1.0069   [    -3.3498     3.4247 ]
                         :    m_jjj:   0.020352     1.0044   [    -3.2831     3.3699 ]
                         :     m_lv:   0.016289    0.99263   [    -3.2339     3.3958 ]
                         :    m_jlv:  -0.018431    0.98242   [    -3.0632     5.7307 ]
                         :     m_bb:  0.0069564    0.98851   [    -2.9734     3.3513 ]
                         :    m_wbb:  -0.010633    0.99340   [    -3.2442     3.2244 ]
                         :   m_wwbb:  -0.012669    0.99259   [    -3.1871     5.7307 ]
                         : -----------------------------------------------------------
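The two TFHandler_DNN_CPU tables above show the network's inputs after TMVA's variable transformation: each variable sits at mean ≈ 0 with RMS ≈ 1, unlike the raw-value tables seen by Likelihood, Fisher, and BDT (mean ≈ 1). A plain z-score standardisation reproduces that property; a minimal sketch follows, with the caveat that the exact transform TMVA applied here, and the sample values used below, are assumptions for illustration:

```python
# Sketch of mean-0 / RMS-1 standardisation, as reflected in the DNN input
# tables. This is NOT TMVA's code; the raw values below are invented.

def standardize(values):
    """Shift to mean 0 and scale to RMS 1 (population RMS)."""
    n = len(values)
    mean = sum(values) / n
    rms = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / rms for v in values]

raw_m_bb = [0.97, 1.53, 0.41, 2.10, 0.66]  # hypothetical raw m_bb values
norm_m_bb = standardize(raw_m_bb)          # mean ~0, RMS ~1 afterwards
```

Applying such a transform per input variable is what makes the DNN's tables differ from the raw-variable tables while describing the same test events.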
Factory                  : Evaluate classifier: PyKeras
                         : 
PyKeras                  : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
TFHandler_PyKeras        : Variable        Mean        RMS   [        Min        Max ]
                         : -----------------------------------------------------------
                         :     m_jj:     1.0447    0.66216   [    0.14661     10.222 ]
                         :    m_jjj:     1.0275    0.37015   [    0.34201     5.6016 ]
                         :     m_lv:     1.0500    0.15582   [    0.29757     2.8989 ]
                         :    m_jlv:     1.0053    0.39478   [    0.41660     5.8799 ]
                         :     m_bb:    0.97464    0.52138   [    0.10941     5.5163 ]
                         :    m_wbb:     1.0296    0.35719   [    0.38878     3.9747 ]
                         :   m_wwbb:    0.95617    0.30368   [    0.44118     4.0728 ]
                         : -----------------------------------------------------------
                         : 
                         : Evaluation results ranked by best signal efficiency and purity (area)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet       MVA                       
                         : Name:         Method:          ROC-integ
                         : dataset       DNN_CPU        : 0.762
                         : dataset       PyKeras        : 0.758
                         : dataset       BDT            : 0.754
                         : dataset       Likelihood     : 0.699
                         : dataset       Fisher         : 0.642
                         : -------------------------------------------------------------------------------------------------------------------
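The ROC-integ column that ranks the methods above is the area under the ROC curve, which equals the probability that a randomly chosen signal event receives a higher classifier score than a randomly chosen background event. A minimal sketch of that pairwise definition (the toy scores are invented; this is not TMVA's implementation):

```python
# ROC integral via the pairwise-comparison definition of AUC.
# Score lists are made-up illustrations, not values from this log.

def roc_integral(signal_scores, background_scores):
    """Area under the ROC curve: fraction of (signal, background) pairs in
    which the signal event scores higher (ties count half)."""
    wins = 0.0
    for s in signal_scores:
        for b in background_scores:
            if s > b:
                wins += 1.0
            elif s == b:
                wins += 0.5
    return wins / (len(signal_scores) * len(background_scores))

sig = [0.9, 0.8, 0.4]  # hypothetical signal classifier outputs
bkg = [0.3, 0.5, 0.1]  # hypothetical background classifier outputs
auc = roc_integral(sig, bkg)  # 8 of 9 pairs ordered correctly
```

A perfectly separating classifier gives 1.0 and random guessing gives 0.5, which is why DNN_CPU's 0.762 outranks Fisher's 0.642 in the table.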
                         : 
                         : Testing efficiency compared to training efficiency (overtraining check)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet              MVA              Signal efficiency: from test sample (from training sample) 
                         : Name:                Method:          @B=0.01             @B=0.10            @B=0.30   
                         : -------------------------------------------------------------------------------------------------------------------
                         : dataset              DNN_CPU        : 0.124 (0.135)       0.408 (0.437)      0.674 (0.706)
                         : dataset              PyKeras        : 0.130 (0.118)       0.414 (0.424)      0.669 (0.676)
                         : dataset              BDT            : 0.098 (0.099)       0.393 (0.402)      0.657 (0.681)
                         : dataset              Likelihood     : 0.070 (0.075)       0.356 (0.363)      0.581 (0.597)
                         : dataset              Fisher         : 0.015 (0.015)       0.121 (0.131)      0.487 (0.506)
                         : -------------------------------------------------------------------------------------------------------------------
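Each @B column above is a fixed-background working point: the signal efficiency measured at the cut that lets the stated fraction of background events (1%, 10%, 30%) pass. A minimal sketch of how such a working point can be read off from score lists (toy scores and the helper name are invented; this is not TMVA's code):

```python
# Sketch of a fixed-background-efficiency working point, as in the
# "@B=0.01 / 0.10 / 0.30" columns. Scores below are made up.

def signal_eff_at_background_eff(signal_scores, background_scores, b_eff):
    """Signal efficiency when the cut admits a fraction b_eff of the
    background (pass = score >= cut)."""
    bg_sorted = sorted(background_scores, reverse=True)
    n_pass = max(1, round(b_eff * len(bg_sorted)))  # background events let through
    cut = bg_sorted[n_pass - 1]
    return sum(1 for s in signal_scores if s >= cut) / len(signal_scores)

bkg = [0.9, 0.7, 0.5, 0.3, 0.1, 0.05, 0.04, 0.03, 0.02, 0.01]
sig = [0.95, 0.92, 0.8, 0.6]
eff = signal_eff_at_background_eff(sig, bkg, 0.10)
```

Comparing the test-sample value against the bracketed training-sample value at the same working point is the overtraining check: a method whose training efficiency sits well above its test efficiency has memorised the training events.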
                         : 
Dataset:dataset          : Created tree 'TestTree' with 6000 events
                         : 
Dataset:dataset          : Created tree 'TrainTree' with 14000 events
                         : 
Factory                  : Thank you for using TMVA!
                         : For citation information, please visit: http://tmva.sf.net/citeTMVA.html