Running with nthreads  = 4
--- RNNClassification  : Using input file: time_data_t10_d30.root
DataSetInfo              : [dataset] : Added class "Signal"
                         : Add Tree sgn of type Signal with 2000 events
DataSetInfo              : [dataset] : Added class "Background"
                         : Add Tree bkg of type Background with 2000 events
number of variables is 300
vars_time0[0..29], vars_time1[0..29], vars_time2[0..29], vars_time3[0..29], vars_time4[0..29], vars_time5[0..29], vars_time6[0..29], vars_time7[0..29], vars_time8[0..29], vars_time9[0..29]
prepared DATA LOADER 
Factory                  : Booking method: TMVA_LSTM
                         : 
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     <none>
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     InputLayout: "10|30" [The Layout of the input]
                         :     Layout: "LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIERUNIFORM" [Weight initialization strategy]
                         :     RandomSeed: "1234" [Random seed used for weight initialization and batch shuffling]
                         :     ValidationSize: "0.2" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
                         :     Architecture: "CPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0." [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     BatchLayout: "0|0|0" [The Layout of the batch]
                         : Will now use the CPU architecture with BLAS and IMT support !
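For reference, the long option string echoed above is what gets passed, as a single argument, to Factory::BookMethod. A minimal PyROOT sketch of such a booking, assuming the output-file and dataset names used elsewhere in this log (the factory flags and the omitted data-loading calls are assumptions, not part of this output):

    import ROOT

    # Hypothetical booking sketch; factory flags and data loading are assumed.
    output = ROOT.TFile.Open("data_RNN_CPU.root", "RECREATE")
    factory = ROOT.TMVA.Factory("TMVAClassification", output,
                                "!V:!Silent:AnalysisType=Classification")
    dataloader = ROOT.TMVA.DataLoader("dataset")
    # ... AddVariablesArray / AddSignalTree / AddBackgroundTree as in the setup above ...

    lstm_options = ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
                    "WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:"
                    "InputLayout=10|30:"
                    "Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:"
                    "TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,"
                    "ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,"
                    "Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:"
                    "Architecture=CPU")
    factory.BookMethod(dataloader, ROOT.TMVA.Types.kDL, "TMVA_LSTM", lstm_options)

The TMVA_DNN booking that follows uses the same kDL pattern with its own InputLayout/Layout strings.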
Factory                  : Booking method: TMVA_DNN
                         : 
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:RandomSeed=0:InputLayout=1|1|300:Layout=DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM:CPU"
                         : The following options are set:
                         : - By User:
                         :     <none>
                         : - Default:
                         :     Boost_num: "0" [Number of times the classifier will be boosted]
                         : Parsing option string: 
                         : ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:RandomSeed=0:InputLayout=1|1|300:Layout=DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM:CPU"
                         : The following options are set:
                         : - By User:
                         :     V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
                         :     VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
                         :     H: "False" [Print method-specific help message]
                         :     InputLayout: "1|1|300" [The Layout of the input]
                         :     Layout: "DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR" [Layout of the network.]
                         :     ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
                         :     WeightInitialization: "XAVIER" [Weight initialization strategy]
                         :     RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
                         :     Architecture: "CPU" [Which architecture to perform the training on.]
                         :     TrainingStrategy: "LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM" [Defines the training strategies.]
                         : - Default:
                         :     VerbosityLevel: "Default" [Verbosity level]
                         :     CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
                         :     IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
                         :     BatchLayout: "0|0|0" [The Layout of the batch]
                         :     ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
                         : Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 reshape (Reshape)           (None, 10, 30)            0         
                                                                 
 lstm (LSTM)                 (None, 10, 10)            1640      
                                                                 
 flatten (Flatten)           (None, 100)               0         
                                                                 
 dense (Dense)               (None, 64)                6464      
                                                                 
 dense_1 (Dense)             (None, 2)                 130       
                                                                 
=================================================================
Total params: 8234 (32.16 KB)
Trainable params: 8234 (32.16 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
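The summary above pins down the architecture exactly. A sketch of a tf.keras model reproducing it (the output activation and compile settings are assumptions; the layer shapes and parameter counts match the summary):

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        layers.Reshape((10, 30), input_shape=(300,)),  # 10 time steps x 30 features
        layers.LSTM(10, return_sequences=True),        # 4*(30+10+1)*10 = 1640 params
        layers.Flatten(),                              # 10*10 = 100 values
        layers.Dense(64, activation="tanh"),           # 100*64+64 = 6464 params
        layers.Dense(2, activation="sigmoid"),         # 64*2+2 = 130 params
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.save("model_LSTM.h5")   # file name as loaded further down

The per-layer counts sum to the 8234 trainable parameters reported above.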
(TString) "python3"[7]
Factory                  : Booking method: PyKeras_LSTM
                         : 
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Applying GPU option:  gpu_options.allow_growth=True
                         :  Loading Keras Model 
                         : Loaded model from file: model_LSTM.h5
Factory                  : Booking method: BDTG
                         : 
                         : the option NegWeightTreatment=InverseBoostNegWeights does not exist for BoostType=Grad
                         : --> change to new default NegWeightTreatment=Pray
                         : Rebuilding Dataset dataset
                         : Building event vectors for type 2 Signal
                         : Dataset[dataset] :  create input formulas for tree sgn
                         : Using variable vars_time0[0] from array expression vars_time0 of size 30
                         : Using variable vars_time1[0] from array expression vars_time1 of size 30
                         : Using variable vars_time2[0] from array expression vars_time2 of size 30
                         : Using variable vars_time3[0] from array expression vars_time3 of size 30
                         : Using variable vars_time4[0] from array expression vars_time4 of size 30
                         : Using variable vars_time5[0] from array expression vars_time5 of size 30
                         : Using variable vars_time6[0] from array expression vars_time6 of size 30
                         : Using variable vars_time7[0] from array expression vars_time7 of size 30
                         : Using variable vars_time8[0] from array expression vars_time8 of size 30
                         : Using variable vars_time9[0] from array expression vars_time9 of size 30
                         : Building event vectors for type 2 Background
                         : Dataset[dataset] :  create input formulas for tree bkg
                         : Using variable vars_time0[0] from array expression vars_time0 of size 30
                         : Using variable vars_time1[0] from array expression vars_time1 of size 30
                         : Using variable vars_time2[0] from array expression vars_time2 of size 30
                         : Using variable vars_time3[0] from array expression vars_time3 of size 30
                         : Using variable vars_time4[0] from array expression vars_time4 of size 30
                         : Using variable vars_time5[0] from array expression vars_time5 of size 30
                         : Using variable vars_time6[0] from array expression vars_time6 of size 30
                         : Using variable vars_time7[0] from array expression vars_time7 of size 30
                         : Using variable vars_time8[0] from array expression vars_time8 of size 30
                         : Using variable vars_time9[0] from array expression vars_time9 of size 30
DataSetFactory           : [dataset] : Number of events in input trees
                         : 
                         : 
                         : Number of training and testing events
                         : ---------------------------------------------------------------------------
                         : Signal     -- training events            : 1600
                         : Signal     -- testing events             : 400
                         : Signal     -- training and testing events: 2000
                         : Background -- training events            : 1600
                         : Background -- testing events             : 400
                         : Background -- training and testing events: 2000
                         : 
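The 1600/400 counts per class are an 80/20 train/test split of the 2000 events in each tree, fixed when preparing the DataLoader. A hedged PyROOT sketch (the split options shown are assumptions consistent with these counts):

    import ROOT

    dataloader = ROOT.TMVA.DataLoader("dataset")
    # ... variables and the sgn/bkg trees added as earlier in this log ...
    # 1600 of 2000 events per class for training; the remaining 400 for testing:
    dataloader.PrepareTrainingAndTestTree(
        ROOT.TCut(""),
        "nTrain_Signal=1600:nTrain_Background=1600:SplitMode=Random:NormMode=NumEvents:!V")

The separate 2560/640 numbers printed during training come from ValidationSize=0.2: 20% of the 3200 training events are held out for validation (3200 x 0.8 = 2560).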
Factory                  : Train all methods
Factory                  : Train method: TMVA_LSTM for Classification
                         : 
                         : Start of deep neural network training on CPU using MT,  nthreads = 4
                         : 
                         : *****   Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 4  Input = ( 10, 1, 30 )  Batch size = 100  Loss function = C
   Layer 0   LSTM Layer:     (NInput = 30, NState = 10, NTime  = 10 )   Output = ( 100 , 10 , 10 )
   Layer 1   RESHAPE Layer     Input = ( 1 , 10 , 10 )   Output = ( 1 , 100 , 100 ) 
   Layer 2   DENSE Layer:   ( Input =   100 , Width =    64 )  Output = (  1 ,   100 ,    64 )   Activation Function = Tanh
   Layer 3   DENSE Layer:   ( Input =    64 , Width =     1 )  Output = (  1 ,   100 ,     1 )   Activation Function = Identity
                         : Using 2560 events for training and 640 for testing
                         : Compute initial loss  on the validation data 
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.711598
                         : --------------------------------------------------------------
                         :      Epoch |   Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s Conv. Steps
                         : --------------------------------------------------------------
                         :    Start epoch iteration ...
                         :          1 Minimum Test error found - save the configuration 
                         :          1 |     0.703307    0.699943    0.674021   0.0443835     3970.54           0
                         :          2 Minimum Test error found - save the configuration 
                         :          2 |     0.690502    0.687676    0.673208   0.0422858     3962.45           0
                         :          3 Minimum Test error found - save the configuration 
                         :          3 |     0.684609    0.683907    0.658033   0.0413818     4054.15           0
                         :          4 |     0.675435    0.686458    0.605577   0.0402586     4422.29           1
                         :          5 |     0.669233    0.686385    0.602015   0.0507857     4535.32           2
                         :          6 Minimum Test error found - save the configuration 
                         :          6 |      0.65044    0.660125    0.578453     0.03964     4639.83           0
                         :          7 Minimum Test error found - save the configuration 
                         :          7 |     0.619836    0.639189    0.573888   0.0394135     4677.49           0
                         :          8 |     0.593749    0.645963     0.57951   0.0392177     4627.13           1
                         :          9 Minimum Test error found - save the configuration 
                         :          9 |      0.57412    0.627973    0.577957   0.0414612     4659.87           0
                         :         10 Minimum Test error found - save the configuration 
                         :         10 |     0.561584    0.620373    0.573563   0.0395163     4681.24           0
                         :         11 Minimum Test error found - save the configuration 
                         :         11 |     0.551318    0.614517    0.563275   0.0402706     4780.08           0
                         :         12 Minimum Test error found - save the configuration 
                         :         12 |     0.531828    0.603185     0.57024   0.0389447     4705.48           0
                         :         13 Minimum Test error found - save the configuration 
                         :         13 |     0.520173    0.602006    0.554572   0.0395899     4854.53           0
                         :         14 Minimum Test error found - save the configuration 
                         :         14 |     0.513479    0.594695    0.539964   0.0384704     4985.11           0
                         :         15 Minimum Test error found - save the configuration 
                         :         15 |     0.508662    0.588429    0.518236   0.0384721     5210.89           0
                         :         16 Minimum Test error found - save the configuration 
                         :         16 |     0.497794    0.586524    0.529928   0.0383958     5086.14           0
                         :         17 Minimum Test error found - save the configuration 
                         :         17 |     0.483461    0.573982    0.529035   0.0388526     5100.14           0
                         :         18 |     0.467142    0.583951    0.522853    0.038284     5159.23           1
                         :         19 Minimum Test error found - save the configuration 
                         :         19 |     0.464105    0.565948    0.517846    0.038727     5217.91           0
                         :         20 |     0.456191    0.590958    0.527795   0.0387247     5111.74           1
                         : 
                         : Elapsed time for training with 3200 events: 11.5 sec         
                         : Evaluate deep neural network on CPU using batches with size = 100
                         : 
TMVA_LSTM                : [dataset] : Evaluation of TMVA_LSTM on training sample (3200 events)
                         : Elapsed time for evaluation of 3200 events: 0.204 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_TMVA_LSTM.class.C
Factory                  : Training finished
                         : 
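Once the XML weight file exists, the trained method can be applied outside the Factory. A sketch using TMVA's experimental RReader (the zero-filled event is a placeholder for real input values):

    import ROOT

    reader = ROOT.TMVA.Experimental.RReader(
        "dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml")
    event = ROOT.std.vector("float")(300)   # placeholder: 10 time steps x 30 features
    score = reader.Compute(event)           # classifier output for this event
    print(score[0])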
Factory                  : Train method: TMVA_DNN for Classification
                         : 
                         : Start of deep neural network training on CPU using MT,  nthreads = 4
                         : 
                         : *****   Deep Learning Network *****
DEEP NEURAL NETWORK:   Depth = 4  Input = ( 1, 1, 300 )  Batch size = 256  Loss function = C
   Layer 0   DENSE Layer:   ( Input =   300 , Width =    64 )  Output = (  1 ,   256 ,    64 )   Activation Function = Tanh
   Layer 1   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   256 ,    64 )   Activation Function = Tanh
   Layer 2   DENSE Layer:   ( Input =    64 , Width =    64 )  Output = (  1 ,   256 ,    64 )   Activation Function = Tanh
   Layer 3   DENSE Layer:   ( Input =    64 , Width =     1 )  Output = (  1 ,   256 ,     1 )   Activation Function = Identity
                         : Using 2560 events for training and 640 for testing
                         : Compute initial loss  on the validation data 
                         : Training phase 1 of 1:  Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.824552
                         : --------------------------------------------------------------
                         :      Epoch |   Train Err.   Val. Err.  t(s)/epoch   t(s)/Loss   nEvents/s Conv. Steps
                         : --------------------------------------------------------------
                         :    Start epoch iteration ...
                         :          1 Minimum Test error found - save the configuration 
                         :          1 |     0.708751    0.711034    0.195129   0.0156247     14261.5           0
                         :          2 Minimum Test error found - save the configuration 
                         :          2 |     0.682893    0.692557    0.204187   0.0269639     14445.1           0
                         :          3 Minimum Test error found - save the configuration 
                         :          3 |     0.675572    0.689862    0.195061   0.0156657     14270.2           0
                         :          4 Minimum Test error found - save the configuration 
                         :          4 |     0.668925    0.685885    0.199981   0.0157764     13897.6           0
                         :          5 |     0.670655    0.686807    0.200425   0.0153581     13832.9           1
                         :          6 Minimum Test error found - save the configuration 
                         :          6 |     0.673117    0.672268     0.19467   0.0154569     14284.7           0
                         :          7 |     0.664283      0.6793    0.190753   0.0149257     14559.7           1
                         :          8 |     0.665731    0.673837    0.189658   0.0150025     14657.5           2
                         :          9 Minimum Test error found - save the configuration 
                         :          9 |     0.662875    0.670804    0.189554   0.0154175     14701.1           0
                         :         10 Minimum Test error found - save the configuration 
                         :         10 |     0.663616    0.669557    0.189286   0.0153006     14713.9           0
                         :         11 |       0.6591    0.685736    0.191093   0.0151745     14552.2           1
                         :         12 |     0.662639    0.672896    0.190401   0.0148937     14586.3           2
                         :         13 |      0.66409    0.672898    0.188189   0.0148917     14772.3           3
                         :         14 Minimum Test error found - save the configuration 
                         :         14 |     0.651067     0.66633    0.188487    0.015044     14759.9           0
                         :         15 |     0.656563    0.674955    0.188246   0.0148206     14761.4           1
                         :         16 |     0.663913    0.677332    0.188908   0.0148985     14711.8           2
                         :         17 |     0.676346    0.681074    0.188399    0.014875       14753           3
                         :         18 |     0.674716     0.67638    0.188433   0.0146547     14731.4           4
                         :         19 |     0.680225    0.688483    0.189215   0.0149448     14689.8           5
                         :         20 |     0.666414    0.685136    0.189106   0.0147046     14678.8           6
                         : 
                         : Elapsed time for training with 3200 events: 3.86 sec         
                         : Evaluate deep neural network on CPU using batches with size = 256
                         : 
TMVA_DNN                 : [dataset] : Evaluation of TMVA_DNN on training sample (3200 events)
                         : Elapsed time for evaluation of 3200 events: 0.1 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_TMVA_DNN.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_TMVA_DNN.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: PyKeras_LSTM for Classification
                         : 
                         : Split TMVA training data in 2560 training events and 640 validation events
                         : Training Model Summary
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 reshape (Reshape)           (None, 10, 30)            0         
                                                                 
 lstm (LSTM)                 (None, 10, 10)            1640      
                                                                 
 flatten (Flatten)           (None, 100)               0         
                                                                 
 dense (Dense)               (None, 64)                6464      
                                                                 
 dense_1 (Dense)             (None, 2)                 130       
                                                                 
=================================================================
Total params: 8234 (32.16 KB)
Trainable params: 8234 (32.16 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
                         : Option SaveBestOnly: Only model weights with smallest validation loss will be stored
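"SaveBestOnly" presumably maps onto a Keras ModelCheckpoint callback along these lines (a sketch of the idea, not PyMVA's exact wiring):

    from tensorflow.keras.callbacks import ModelCheckpoint

    checkpoint = ModelCheckpoint("trained_model_LSTM.h5",   # file name used below
                                 monitor="val_loss",
                                 save_best_only=True)       # keep only the best epoch
    # model.fit(..., validation_data=(x_val, y_val), epochs=20,
    #           batch_size=100, callbacks=[checkpoint])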
Epoch 1/20
Epoch 1: val_loss improved from inf to 0.68724, saving model to trained_model_LSTM.h5
26/26 [==============================] - 3s 38ms/step - loss: 0.6945 - accuracy: 0.5465 - val_loss: 0.6872 - val_accuracy: 0.5734
Epoch 2/20
Epoch 2: val_loss improved from 0.68724 to 0.67050, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 9ms/step - loss: 0.6721 - accuracy: 0.5906 - val_loss: 0.6705 - val_accuracy: 0.5859
Epoch 3/20
Epoch 3: val_loss improved from 0.67050 to 0.66067, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 9ms/step - loss: 0.6526 - accuracy: 0.6262 - val_loss: 0.6607 - val_accuracy: 0.5969
Epoch 4/20
Epoch 4: val_loss improved from 0.66067 to 0.64557, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 9ms/step - loss: 0.6347 - accuracy: 0.6516 - val_loss: 0.6456 - val_accuracy: 0.6266
Epoch 5/20
Epoch 5: val_loss improved from 0.64557 to 0.63551, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.6172 - accuracy: 0.6707 - val_loss: 0.6355 - val_accuracy: 0.6391
Epoch 6/20
Epoch 6: val_loss improved from 0.63551 to 0.63017, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 7ms/step - loss: 0.5935 - accuracy: 0.6914 - val_loss: 0.6302 - val_accuracy: 0.6484
Epoch 7/20
Epoch 7: val_loss improved from 0.63017 to 0.61441, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.5797 - accuracy: 0.6934 - val_loss: 0.6144 - val_accuracy: 0.6719
Epoch 8/20
Epoch 8: val_loss improved from 0.61441 to 0.59441, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.5562 - accuracy: 0.7270 - val_loss: 0.5944 - val_accuracy: 0.6609
Epoch 9/20
Epoch 9: val_loss improved from 0.59441 to 0.58370, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.5371 - accuracy: 0.7297 - val_loss: 0.5837 - val_accuracy: 0.6891
Epoch 10/20
Epoch 10: val_loss improved from 0.58370 to 0.57751, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.5191 - accuracy: 0.7453 - val_loss: 0.5775 - val_accuracy: 0.6922
Epoch 11/20
Epoch 11: val_loss improved from 0.57751 to 0.56580, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.5079 - accuracy: 0.7492 - val_loss: 0.5658 - val_accuracy: 0.7156
Epoch 12/20
Epoch 12: val_loss improved from 0.56580 to 0.54423, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.4818 - accuracy: 0.7707 - val_loss: 0.5442 - val_accuracy: 0.7250
Epoch 13/20
Epoch 13: val_loss improved from 0.54423 to 0.53169, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.4596 - accuracy: 0.7926 - val_loss: 0.5317 - val_accuracy: 0.7219
Epoch 14/20
Epoch 14: val_loss improved from 0.53169 to 0.51498, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.4515 - accuracy: 0.7883 - val_loss: 0.5150 - val_accuracy: 0.7281
Epoch 15/20
Epoch 15: val_loss improved from 0.51498 to 0.51294, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.4387 - accuracy: 0.7941 - val_loss: 0.5129 - val_accuracy: 0.7375
Epoch 16/20
Epoch 16: val_loss did not improve from 0.51294
26/26 [==============================] - 0s 7ms/step - loss: 0.4309 - accuracy: 0.7996 - val_loss: 0.5166 - val_accuracy: 0.7547
Epoch 17/20
Epoch 17: val_loss improved from 0.51294 to 0.49944, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 8ms/step - loss: 0.4297 - accuracy: 0.8051 - val_loss: 0.4994 - val_accuracy: 0.7516
Epoch 18/20
Epoch 18: val_loss did not improve from 0.49944
26/26 [==============================] - 0s 7ms/step - loss: 0.4134 - accuracy: 0.8133 - val_loss: 0.5054 - val_accuracy: 0.7641
Epoch 19/20
Epoch 19: val_loss did not improve from 0.49944
26/26 [==============================] - 0s 7ms/step - loss: 0.4051 - accuracy: 0.8215 - val_loss: 0.5008 - val_accuracy: 0.7609
Epoch 20/20
Epoch 20: val_loss did not improve from 0.49944
26/26 [==============================] - 0s 7ms/step - loss: 0.3984 - accuracy: 0.8191 - val_loss: 0.5053 - val_accuracy: 0.7688
                         : Getting training history for item:0 name = 'loss'
                         : Getting training history for item:1 name = 'accuracy'
                         : Getting training history for item:2 name = 'val_loss'
                         : Getting training history for item:3 name = 'val_accuracy'
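The four items retrieved above are the standard keys of the Keras History object returned by fit(). A tiny self-contained check (stand-in model and random data; nothing here comes from the run itself):

    import numpy as np
    import tensorflow as tf

    # Stand-in model and data, just to show which history keys exist:
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(2, activation="sigmoid", input_shape=(4,))])
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    x = np.random.rand(32, 4).astype("float32")
    y = np.eye(2)[np.random.randint(0, 2, 32)]
    history = model.fit(x, y, validation_split=0.25, epochs=2, verbose=0)
    print(sorted(history.history))  # ['accuracy', 'loss', 'val_accuracy', 'val_loss']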
                         : Elapsed time for training with 3200 events: 6.8 sec         
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Applying GPU option:  gpu_options.allow_growth=True
                         : Disabled TF eager execution when evaluating model 
                         :  Loading Keras Model 
                         : Loaded model from file: trained_model_LSTM.h5
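Evaluation reloads the best checkpoint saved during training; in plain Keras terms this is roughly the following (a sketch; the zero-filled input is a placeholder):

    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("trained_model_LSTM.h5")
    probs = model.predict(np.zeros((1, 300), dtype="float32"))
    print(probs.shape)   # (1, 2): per-class scores for one event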
PyKeras_LSTM             : [dataset] : Evaluation of PyKeras_LSTM on training sample (3200 events)
                         : Elapsed time for evaluation of 3200 events: 0.274 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_PyKeras_LSTM.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_PyKeras_LSTM.class.C
Factory                  : Training finished
                         : 
Factory                  : Train method: BDTG for Classification
                         : 
BDTG                     : #events: (reweighted) sig: 1600 bkg: 1600
                         : #events: (unweighted) sig: 1600 bkg: 1600
                         : Training 100 Decision Trees ... patience please
                         : Elapsed time for training with 3200 events: 1.63 sec         
BDTG                     : [dataset] : Evaluation of BDTG on training sample (3200 events)
                         : Elapsed time for evaluation of 3200 events: 0.0276 sec       
                         : Creating xml weight file: dataset/weights/TMVAClassification_BDTG.weights.xml
                         : Creating standalone class: dataset/weights/TMVAClassification_BDTG.class.C
                         : data_RNN_CPU.root:/dataset/Method_BDT/BDTG
Factory                  : Training finished
                         : 
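The log confirms gradient boosting (BoostType=Grad, from the NegWeightTreatment message above) and 100 trees. A hedged sketch of a compatible booking; every option other than NTrees and BoostType is an assumption:

    import ROOT

    output = ROOT.TFile.Open("data_RNN_CPU.root", "RECREATE")
    factory = ROOT.TMVA.Factory("TMVAClassification", output,
                                "!V:!Silent:AnalysisType=Classification")
    dataloader = ROOT.TMVA.DataLoader("dataset")
    # ... variables and trees added as in the setup above ...
    factory.BookMethod(dataloader, ROOT.TMVA.Types.kBDT, "BDTG",
                       "!H:!V:NTrees=100:MaxDepth=2:BoostType=Grad:Shrinkage=0.1:"
                       "nCuts=20:UseBaggedBoost:BaggedSampleFraction=0.5")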
                         : Ranking input variables (method specific)...
                         : No variable ranking supplied by classifier: TMVA_LSTM
                         : No variable ranking supplied by classifier: TMVA_DNN
                         : No variable ranking supplied by classifier: PyKeras_LSTM
BDTG                     : Ranking result (top variable is best ranked)
                         : --------------------------------------------
                         : Rank : Variable   : Variable Importance
                         : --------------------------------------------
                         :    1 : vars_time6 : 2.170e-02
                         :    2 : vars_time8 : 2.105e-02
                         :    3 : vars_time7 : 2.066e-02
                         :    4 : vars_time7 : 2.059e-02
                         :    5 : vars_time8 : 2.037e-02
                         :    6 : vars_time8 : 1.868e-02
                         :    7 : vars_time7 : 1.821e-02
                         :    8 : vars_time7 : 1.765e-02
                         :    9 : vars_time8 : 1.758e-02
                         :   10 : vars_time9 : 1.744e-02
                         :   11 : vars_time0 : 1.687e-02
                         :   12 : vars_time7 : 1.679e-02
                         :   13 : vars_time9 : 1.665e-02
                         :   14 : vars_time8 : 1.636e-02
                         :   15 : vars_time9 : 1.630e-02
                         :   16 : vars_time9 : 1.625e-02
                         :   17 : vars_time6 : 1.531e-02
                         :   18 : vars_time8 : 1.504e-02
                         :   19 : vars_time8 : 1.489e-02
                         :   20 : vars_time7 : 1.434e-02
                         :   21 : vars_time5 : 1.381e-02
                         :   22 : vars_time8 : 1.352e-02
                         :   23 : vars_time8 : 1.346e-02
                         :   24 : vars_time6 : 1.311e-02
                         :   25 : vars_time9 : 1.290e-02
                         :   26 : vars_time9 : 1.225e-02
                         :   27 : vars_time0 : 1.224e-02
                         :   28 : vars_time7 : 1.205e-02
                         :   29 : vars_time4 : 1.199e-02
                         :   30 : vars_time5 : 1.193e-02
                         :   31 : vars_time9 : 1.175e-02
                         :   32 : vars_time9 : 1.171e-02
                         :   33 : vars_time5 : 1.139e-02
                         :   34 : vars_time6 : 1.126e-02
                         :   35 : vars_time6 : 1.119e-02
                         :   36 : vars_time5 : 1.114e-02
                         :   37 : vars_time0 : 1.084e-02
                         :   38 : vars_time5 : 1.044e-02
                         :   39 : vars_time8 : 1.018e-02
                         :   40 : vars_time7 : 9.927e-03
                         :   41 : vars_time6 : 9.759e-03
                         :   42 : vars_time6 : 9.755e-03
                         :   43 : vars_time5 : 9.513e-03
                         :   44 : vars_time9 : 9.416e-03
                         :   45 : vars_time0 : 8.988e-03
                         :   46 : vars_time1 : 8.977e-03
                         :   47 : vars_time6 : 8.768e-03
                         :   48 : vars_time7 : 8.760e-03
                         :   49 : vars_time4 : 8.704e-03
                         :   50 : vars_time7 : 8.605e-03
                         :   51 : vars_time9 : 8.554e-03
                         :   52 : vars_time7 : 8.468e-03
                         :   53 : vars_time5 : 8.338e-03
                         :   54 : vars_time9 : 7.890e-03
                         :   55 : vars_time0 : 7.886e-03
                         :   56 : vars_time8 : 7.725e-03
                         :   57 : vars_time9 : 7.667e-03
                         :   58 : vars_time8 : 7.553e-03
                         :   59 : vars_time0 : 7.398e-03
                         :   60 : vars_time9 : 7.212e-03
                         :   61 : vars_time1 : 7.197e-03
                         :   62 : vars_time7 : 6.883e-03
                         :   63 : vars_time6 : 6.815e-03
                         :   64 : vars_time8 : 6.748e-03
                         :   65 : vars_time7 : 6.706e-03
                         :   66 : vars_time1 : 6.696e-03
                         :   67 : vars_time5 : 6.490e-03
                         :   68 : vars_time0 : 6.484e-03
                         :   69 : vars_time5 : 6.466e-03
                         :   70 : vars_time0 : 6.433e-03
                         :   71 : vars_time8 : 6.416e-03
                         :   72 : vars_time0 : 6.403e-03
                         :   73 : vars_time4 : 6.252e-03
                         :   74 : vars_time2 : 6.211e-03
                         :   75 : vars_time6 : 6.208e-03
                         :   76 : vars_time3 : 5.830e-03
                         :   77 : vars_time7 : 5.812e-03
                         :   78 : vars_time9 : 5.769e-03
                         :   79 : vars_time7 : 5.655e-03
                         :   80 : vars_time4 : 5.638e-03
                         :   81 : vars_time1 : 5.569e-03
                         :   82 : vars_time6 : 5.488e-03
                         :   83 : vars_time2 : 5.483e-03
                         :   84 : vars_time4 : 5.474e-03
                         :   85 : vars_time1 : 5.363e-03
                         :   86 : vars_time1 : 5.342e-03
                         :   87 : vars_time3 : 5.206e-03
                         :   88 : vars_time8 : 5.199e-03
                         :   89 : vars_time3 : 5.106e-03
                         :   90 : vars_time5 : 5.035e-03
                         :   91 : vars_time9 : 4.980e-03
                         :   92 : vars_time8 : 4.916e-03
                         :   93 : vars_time3 : 4.771e-03
                         :   94 : vars_time8 : 4.695e-03
                         :   95 : vars_time2 : 4.643e-03
                         :   96 : vars_time4 : 4.628e-03
                         :   97 : vars_time8 : 4.565e-03
                         :   98 : vars_time2 : 4.498e-03
                         :   99 : vars_time3 : 4.108e-03
                         :  100 : vars_time4 : 3.909e-03
                         :  101 : vars_time8 : 3.842e-03
                         :  102 : vars_time7 : 3.552e-03
                         :  103 : vars_time0 : 3.412e-03
                         :  104 : vars_time6 : 3.325e-03
                         :  105 - 124 : vars_time0 : 0.000e+00
                         :  125 - 148 : vars_time1 : 0.000e+00
                         :  149 - 174 : vars_time2 : 0.000e+00
                         :  175 - 199 : vars_time3 : 0.000e+00
                         :  200 - 222 : vars_time4 : 0.000e+00
                         :  223 - 242 : vars_time5 : 0.000e+00
                         :  243 - 260 : vars_time6 : 0.000e+00
                         :  261 - 274 : vars_time7 : 0.000e+00
                         :  275 - 285 : vars_time8 : 0.000e+00
                         :  286 - 300 : vars_time9 : 0.000e+00
                         : (ranks 105-300 condensed: every remaining input has importance 0.000e+00)
                         : --------------------------------------------
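
The table above is TMVA's method-specific ranking of the 300 input variables; only the top-ranked handful carry non-zero importance. For a tree-based method such as the BDTG booked in this run, importance is typically accumulated from the separation gain of the splits that use each variable, so an input never chosen for a split ranks at exactly zero. A minimal sketch of that idea (an illustration, not TMVA's exact implementation):

    # Hypothetical sketch of BDT-style variable importance. Each tree is a
    # list of split nodes: (variable index, separation gain, events at node).
    def variable_importance(trees):
        importance = {}
        for tree in trees:
            for var, gain, n_events in tree:
                # weight each split by its squared gain and node population
                importance[var] = importance.get(var, 0.0) + gain * gain * n_events
        total = sum(importance.values()) or 1.0
        return {v: imp / total for v, imp in importance.items()}  # normalized
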
TH1.Print Name  = TrainingHistory_TMVA_LSTM_trainingError, Entries= 0, Total sum= 11.417
TH1.Print Name  = TrainingHistory_TMVA_LSTM_valError, Entries= 0, Total sum= 12.5422
TH1.Print Name  = TrainingHistory_TMVA_DNN_trainingError, Entries= 0, Total sum= 13.3915
TH1.Print Name  = TrainingHistory_TMVA_DNN_valError, Entries= 0, Total sum= 13.6131
TH1.Print Name  = TrainingHistory_PyKeras_LSTM_'accuracy', Entries= 0, Total sum= 14.6258
TH1.Print Name  = TrainingHistory_PyKeras_LSTM_'loss', Entries= 0, Total sum= 10.4739
TH1.Print Name  = TrainingHistory_PyKeras_LSTM_'val_accuracy', Entries= 0, Total sum= 13.8125
TH1.Print Name  = TrainingHistory_PyKeras_LSTM_'val_loss', Entries= 0, Total sum= 11.4968
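
These TH1.Print lines summarize the per-epoch training-history curves the Factory stores for each method (training/validation error for the TMVA networks, Keras accuracy/loss for PyKeras). The entries counter is zero, presumably because the bins are set directly rather than filled, so only the total sum is informative here. The curves can be read back from the Factory output file; the file name and in-file path below are assumptions for this run, so inspect the actual layout with f.ls() first:

    import ROOT

    # Assumed output-file name and directory layout -- verify with f.ls().
    f = ROOT.TFile.Open("data_RNN_CPU.root")
    h = f.Get("dataset/Method_DL/TMVA_LSTM/TrainingHistory_TMVA_LSTM_trainingError")
    if h:
        c = ROOT.TCanvas()
        h.Draw("L")                      # training error vs. epoch
        c.SaveAs("lstm_training_error.png")
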
Factory                  : === Destroy and recreate all methods via weight files for testing ===
                         : 
                         : Reading weight file: dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml
                         : Reading weight file: dataset/weights/TMVAClassification_TMVA_DNN.weights.xml
                         : Reading weight file: dataset/weights/TMVAClassification_PyKeras_LSTM.weights.xml
                         : Reading weight file: dataset/weights/TMVAClassification_BDTG.weights.xml
nthreads  = 4
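
The reload step above re-creates each method from its .weights.xml file, which is also how a trained classifier is applied in a standalone program via TMVA::Reader. A sketch, under the assumption that the 300 inputs were registered under the names listed at the top of this log (vars_time0[0] ... vars_time9[29]):

    import ROOT
    from array import array

    reader = ROOT.TMVA.Reader("!Color:!Silent")
    slots = []
    for t in range(10):                  # 10 time steps x 30 features = 300 inputs
        for i in range(30):
            v = array("f", [0.0])        # Reader keeps a float the caller updates
            slots.append(v)
            reader.AddVariable("vars_time%d[%d]" % (t, i), v)
    reader.BookMVA("TMVA_LSTM", "dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml")
    # ... fill the slots with one event's values, then:
    score = reader.EvaluateMVA("TMVA_LSTM")   # classifier response for that event
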
Factory                  : Test all methods
Factory                  : Test method: TMVA_LSTM for Classification performance
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 800
                         : 
TMVA_LSTM                : [dataset] : Evaluation of TMVA_LSTM on testing sample (800 events)
                         : Elapsed time for evaluation of 800 events: 0.0469 sec       
Factory                  : Test method: TMVA_DNN for Classification performance
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 800
                         : 
TMVA_DNN                 : [dataset] : Evaluation of TMVA_DNN on testing sample (800 events)
                         : Elapsed time for evaluation of 800 events: 0.0199 sec       
Factory                  : Test method: PyKeras_LSTM for Classification performance
                         : 
                         : Setting up tf.keras
                         : Using TensorFlow version 2
                         : Use Keras version from TensorFlow : tf.keras
                         : Applying GPU option:  gpu_options.allow_growth=True
                         : Disabled TF eager execution when evaluating model 
                         :  Loading Keras Model 
                         : Loaded model from file: trained_model_LSTM.h5
PyKeras_LSTM             : [dataset] : Evaluation of PyKeras_LSTM on testing sample (800 events)
                         : Elapsed time for evaluation of 800 events: 0.233 sec       
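
As the log notes, the PyKeras method evaluates the network it reloaded from trained_model_LSTM.h5, which is a plain Keras model file. It can therefore also be scored outside TMVA; the (10, 30) input shape below follows from the t10_d30 layout of this dataset:

    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("trained_model_LSTM.h5")
    x = np.zeros((1, 10, 30), dtype=np.float32)   # one dummy event: 10 steps x 30 features
    print(model.predict(x))                       # network output for that event
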
Factory                  : Test method: BDTG for Classification performance
                         : 
BDTG                     : [dataset] : Evaluation of BDTG on testing sample (800 events)
                         : Elapsed time for evaluation of 800 events: 0.0141 sec       
Factory                  : Evaluate all methods
Factory                  : Evaluate classifier: TMVA_LSTM
                         : 
TMVA_LSTM                : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
                         : Dataset[dataset] : variable plots are not produced! The number of variables is 300, which is larger than 200
Factory                  : Evaluate classifier: TMVA_DNN
                         : 
TMVA_DNN                 : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Evaluate deep neural network on CPU using batches with size = 1000
                         : 
                         : Dataset[dataset] : variable plots are not produced! The number of variables is 300, which is larger than 200
Factory                  : Evaluate classifier: PyKeras_LSTM
                         : 
PyKeras_LSTM             : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Dataset[dataset] : variable plots are not produced! The number of variables is 300, which is larger than 200
Factory                  : Evaluate classifier: BDTG
                         : 
BDTG                     : [dataset] : Loop over test events and fill histograms with classifier response...
                         : 
                         : Dataset[dataset] : variable plots are not produced! The number of variables is 300, which is larger than 200
                         : 
                         : Evaluation results ranked by best signal efficiency and purity (area)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet       MVA                       
                         : Name:         Method:          ROC-integ
                         : dataset       BDTG           : 0.849
                         : dataset       PyKeras_LSTM   : 0.848
                         : dataset       TMVA_LSTM      : 0.842
                         : dataset       TMVA_DNN       : 0.665
                         : -------------------------------------------------------------------------------------------------------------------
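
The ROC integrals tabulated above are also available programmatically from the Factory. In the sketch below, factory and dataloader stand for the TMVA::Factory and TMVA::DataLoader("dataset") objects of the training macro, which is not shown in this log:

    # Assumes 'factory' and 'dataloader' from the training macro are in scope.
    print("BDTG ROC integral = %.3f" % factory.GetROCIntegral(dataloader, "BDTG"))  # 0.849 above
    c = factory.GetROCCurve(dataloader)   # all booked methods on one canvas
    c.Draw()
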
                         : 
                         : Testing efficiency compared to training efficiency (overtraining check)
                         : -------------------------------------------------------------------------------------------------------------------
                         : DataSet              MVA              Signal efficiency: from test sample (from training sample) 
                         : Name:                Method:          @B=0.01             @B=0.10            @B=0.30   
                         : -------------------------------------------------------------------------------------------------------------------
                         : dataset              BDTG           : 0.190 (0.315)       0.565 (0.663)      0.844 (0.883)
                         : dataset              PyKeras_LSTM   : 0.225 (0.265)       0.535 (0.590)      0.835 (0.861)
                         : dataset              TMVA_LSTM      : 0.135 (0.205)       0.542 (0.559)      0.813 (0.813)
                         : dataset              TMVA_DNN       : 0.022 (0.026)       0.240 (0.256)      0.545 (0.521)
                         : -------------------------------------------------------------------------------------------------------------------
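
Each column gives the signal efficiency at a fixed background efficiency (B = 0.01, 0.10, 0.30), with the training-sample value in parentheses; a test value well below its training value indicates overtraining. Quantified as a relative drop at B = 0.10, using the numbers from the table:

    train_eff = {"BDTG": 0.663, "PyKeras_LSTM": 0.590, "TMVA_LSTM": 0.559, "TMVA_DNN": 0.256}
    test_eff  = {"BDTG": 0.565, "PyKeras_LSTM": 0.535, "TMVA_LSTM": 0.542, "TMVA_DNN": 0.240}
    for m in train_eff:
        drop = (train_eff[m] - test_eff[m]) / train_eff[m]
        print("%-12s efficiency drop @B=0.10: %4.1f%%" % (m, 100 * drop))
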
                         : 
Dataset:dataset          : Created tree 'TestTree' with 800 events
                         : 
Dataset:dataset          : Created tree 'TrainTree' with 3200 events
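
The TestTree and TrainTree written here contain one branch per booked method plus the true class, so any classifier response can be replotted directly from the output file (file name again assumed for this run):

    import ROOT

    f = ROOT.TFile.Open("data_RNN_CPU.root")  # assumed Factory output file
    t = f.Get("dataset/TestTree")
    c = ROOT.TCanvas()
    t.Draw("BDTG", "classID==0")              # response for the first class (Signal)
    t.Draw("BDTG", "classID==1", "same")      # overlay the Background response
    c.SaveAs("bdtg_response.png")
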
                         : 
Factory                  : Thank you for using TMVA!
                         : For citation information, please visit: http://tmva.sf.net/citeTMVA.html
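
All of the histograms referenced in this log (classifier responses, ROC curves, training histories) can also be browsed interactively with the standard TMVA GUI, again assuming the output-file name used above:

    import ROOT
    ROOT.TMVA.TMVAGui("data_RNN_CPU.root")
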