ROOT Reference Guide
TMVA_RNN_Classification.C File Reference

Detailed Description

TMVA Classification Example Using a Recurrent Neural Network

This is an example of using an RNN in TMVA. Classification is performed on a toy time-dependent data set that is generated when this example macro runs.
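To give an idea of the data layout, the sketch below generates one toy "event" shaped like the input branches in the log (10 time steps of 30 variables each, `vars_time0`..`vars_time9`). The signal/background shapes here are invented for illustration; they are not the macro's actual generator.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <random>

// One event is ntime x ndim values, mirroring the
// vars_time0..vars_time9 branches (10 time steps x 30 variables).
constexpr int ntime = 10;
constexpr int ndim = 30;

using Event = std::array<std::array<double, ndim>, ntime>;

// Hypothetical generator: signal and background differ only by a
// time-dependent shift of the mean, on top of unit Gaussian noise.
Event MakeEvent(bool signal, std::mt19937 &rng)
{
   std::normal_distribution<double> noise(0., 1.);
   Event e{};
   for (int t = 0; t < ntime; ++t) {
      for (int j = 0; j < ndim; ++j) {
         double mean = signal ? std::sin((t + j) * 0.1) : 0.;
         e[t][j] = mean + noise(rng);
      }
   }
   return e;
}
```

In the actual macro, arrays of this shape are written to the trees `sgn` and `bkg` in `time_data_t10_d30.root`.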

Running with nthreads = 4
--- RNNClassification : Using input file: time_data_t10_d30.root
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sgn of type Signal with 2000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg of type Background with 2000 events
number of variables is 300
vars_time0[0..29], vars_time1[0..29], vars_time2[0..29], vars_time3[0..29], vars_time4[0..29], vars_time5[0..29], vars_time6[0..29], vars_time7[0..29], vars_time8[0..29], vars_time9[0..29] (10 time steps x 30 variables)
prepared DATA LOADER
Factory : Booking method: TMVA_LSTM
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234:InputLayout=10|30:Layout=LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "10|30" [The Layout of the input]
: Layout: "LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIERUNIFORM" [Weight initialization strategy]
: RandomSeed: "1234" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "0.2" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-2,Regularization=None,MaxEpochs=20,Optimizer=ADAM,DropConfig=0.0+0.+0.+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: Will now use the CPU architecture with BLAS and IMT support !
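The option strings above are plain delimited text: layers in `Layout` are separated by `,` and layer parameters by `|` (for the LSTM spec `LSTM|10|30|10|0|1`, the fields appear to be state size, input size, time steps, and two flags, matching `InputLayout=10|30`). A minimal tokenizer, not TMVA's actual parser:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split a TMVA-style delimited string on a single delimiter, e.g.
// "LSTM|10|30|10|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR" on ','.
std::vector<std::string> Split(const std::string &s, char delim)
{
   std::vector<std::string> out;
   std::stringstream ss(s);
   std::string tok;
   while (std::getline(ss, tok, delim))
      out.push_back(tok);
   return out;
}
```

Splitting the `Layout` value on `,` yields four layer specs; splitting the first spec on `|` yields its type followed by its numeric parameters.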
Factory : Booking method: TMVA_DNN
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:RandomSeed=0:InputLayout=1|1|300:Layout=DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM:CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:RandomSeed=0:InputLayout=1|1|300:Layout=DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM:CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|1|300" [The Layout of the input]
: Layout: "DENSE|64|TANH,DENSE|TANH|64,DENSE|TANH|64,LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.0,Repetitions=1,ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,MaxEpochs=20DropConfig=0.0+0.+0.+0.,Optimizer=ADAM" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 10, 30) 0
lstm (LSTM) (None, 10, 10) 1640
flatten (Flatten) (None, 100) 0
dense (Dense) (None, 64) 6464
dense_1 (Dense) (None, 2) 130
=================================================================
Total params: 8234 (32.16 KB)
Trainable params: 8234 (32.16 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
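The parameter counts in this summary can be checked by hand: an LSTM layer has four gates, each with `(nInput + nState) * nState` weights plus `nState` biases, and a dense layer has `nInput * nWidth` weights plus `nWidth` biases. A quick arithmetic check:

```cpp
#include <cassert>

// LSTM layer parameters: 4 gates, each with
// (nInput + nState) * nState weights and nState biases.
int LstmParams(int nInput, int nState)
{
   return 4 * ((nInput + nState) * nState + nState);
}

// Fully connected layer parameters: weights plus biases.
int DenseParams(int nInput, int nWidth)
{
   return nInput * nWidth + nWidth;
}
```

With the shapes above: `LstmParams(30, 10)` gives 1640, `DenseParams(100, 64)` gives 6464, `DenseParams(64, 2)` gives 130, summing to the reported 8234 total parameters.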
(TString) "python3"[7]
Factory : Booking method: PyKeras_LSTM
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Loading Keras Model
: Loaded model from file: model_LSTM.h5
Factory : Booking method: BDTG
:
: the option NegWeightTreatment=InverseBoostNegWeights does not exist for BoostType=Grad
: --> change to new default NegWeightTreatment=Pray
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sgn
: Using variable vars_time0[0] from array expression vars_time0 of size 30
: Using variable vars_time1[0] from array expression vars_time1 of size 30
: Using variable vars_time2[0] from array expression vars_time2 of size 30
: Using variable vars_time3[0] from array expression vars_time3 of size 30
: Using variable vars_time4[0] from array expression vars_time4 of size 30
: Using variable vars_time5[0] from array expression vars_time5 of size 30
: Using variable vars_time6[0] from array expression vars_time6 of size 30
: Using variable vars_time7[0] from array expression vars_time7 of size 30
: Using variable vars_time8[0] from array expression vars_time8 of size 30
: Using variable vars_time9[0] from array expression vars_time9 of size 30
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg
: Using variable vars_time0[0] from array expression vars_time0 of size 30
: Using variable vars_time1[0] from array expression vars_time1 of size 30
: Using variable vars_time2[0] from array expression vars_time2 of size 30
: Using variable vars_time3[0] from array expression vars_time3 of size 30
: Using variable vars_time4[0] from array expression vars_time4 of size 30
: Using variable vars_time5[0] from array expression vars_time5 of size 30
: Using variable vars_time6[0] from array expression vars_time6 of size 30
: Using variable vars_time7[0] from array expression vars_time7 of size 30
: Using variable vars_time8[0] from array expression vars_time8 of size 30
: Using variable vars_time9[0] from array expression vars_time9 of size 30
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 1600
: Signal -- testing events : 400
: Signal -- training and testing events: 2000
: Background -- training events : 1600
: Background -- testing events : 400
: Background -- training and testing events: 2000
:
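These counts follow from the defaults: each input tree holds 2000 events, split 80%/20% into training and testing, and the deep-learning methods then carve `ValidationSize=0.2` out of the 3200 combined training events. A sketch of that arithmetic (hypothetical helper, not TMVA code):

```cpp
#include <cassert>

struct SplitCounts {
   int train;
   int test;
};

// Split nEvents into a training part and a held-out part of the
// given fraction, mirroring the default 80/20 TMVA split.
SplitCounts TrainTestSplit(int nEvents, double heldOutFraction)
{
   int nHeldOut = static_cast<int>(nEvents * heldOutFraction);
   return {nEvents - nHeldOut, nHeldOut};
}
```

`TrainTestSplit(2000, 0.2)` reproduces the 1600/400 per-class split above, and `TrainTestSplit(3200, 0.2)` reproduces the "2560 events for training and 640 for testing" lines that appear in the training logs below.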
Factory : Train all methods
Factory : Train method: TMVA_LSTM for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 4 Input = ( 10, 1, 30 ) Batch size = 100 Loss function = C
Layer 0 LSTM Layer: (NInput = 30, NState = 10, NTime = 10 ) Output = ( 100 , 10 , 10 )
Layer 1 RESHAPE Layer Input = ( 1 , 10 , 10 ) Output = ( 1 , 100 , 100 )
Layer 2 DENSE Layer: ( Input = 100 , Width = 64 ) Output = ( 1 , 100 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
: Using 2560 events for training and 640 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.719534
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.696578 0.703988 0.625358 0.0413623 4280.85 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.688852 0.7008 0.63761 0.0420092 4197.44 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.683903 0.698644 0.642198 0.0418757 4164.43 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.677747 0.690606 0.647998 0.0419332 4124.97 0
: 5 Minimum Test error found - save the configuration
: 5 | 0.670178 0.679521 0.639402 0.041341 4180.18 0
: 6 | 0.652901 0.680636 0.642973 0.0410292 4153.21 1
: 7 Minimum Test error found - save the configuration
: 7 | 0.632044 0.667283 0.641181 0.0410635 4165.85 0
: 8 Minimum Test error found - save the configuration
: 8 | 0.622784 0.65771 0.625795 0.041131 4275.96 0
: 9 Minimum Test error found - save the configuration
: 9 | 0.608428 0.647322 0.619387 0.0410193 4322.51 0
: 10 Minimum Test error found - save the configuration
: 10 | 0.602143 0.633309 0.62122 0.0411397 4309.75 0
: 11 | 0.589134 0.636902 0.620814 0.0412953 4313.93 1
: 12 Minimum Test error found - save the configuration
: 12 | 0.583797 0.616575 0.615793 0.0410483 4349.76 0
: 13 Minimum Test error found - save the configuration
: 13 | 0.569365 0.61045 0.619859 0.0408925 4318.04 0
: 14 Minimum Test error found - save the configuration
: 14 | 0.559693 0.591376 0.609669 0.0409024 4395.47 0
: 15 Minimum Test error found - save the configuration
: 15 | 0.551081 0.5844 0.608058 0.0409592 4408.4 0
: 16 Minimum Test error found - save the configuration
: 16 | 0.541734 0.57989 0.611892 0.0405206 4375.43 0
: 17 Minimum Test error found - save the configuration
: 17 | 0.547568 0.573661 0.602041 0.0406891 4453.53 0
: 18 | 0.530949 0.577749 0.6099 0.0408108 4392.98 1
: 19 Minimum Test error found - save the configuration
: 19 | 0.519762 0.546232 0.608269 0.0403193 4401.8 0
: 20 | 0.508248 0.557164 0.595853 0.0401196 4498.56 1
:
: Elapsed time for training with 3200 events: 12.5 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_LSTM : [dataset] : Evaluation of TMVA_LSTM on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.216 sec
: Creating xml weight file: dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_TMVA_LSTM.class.C
Factory : Training finished
:
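The "Conv. Steps" column in the tables above counts epochs since the last validation-error minimum; training stops early once the counter reaches `ConvergenceSteps` (5 for TMVA_LSTM, 10 for TMVA_DNN), or when `MaxEpochs` is hit. A minimal sketch of that stopping rule (plain C++, not the actual TMVA implementation):

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Return the number of epochs run before early stopping triggers:
// reset the counter whenever the validation error improves, stop
// once convergenceSteps epochs pass without a new minimum.
int EpochsUntilStop(const std::vector<double> &valErr, int convergenceSteps)
{
   double best = std::numeric_limits<double>::max();
   int sinceBest = 0;
   int epoch = 0;
   for (double e : valErr) {
      ++epoch;
      if (e < best) {
         best = e;
         sinceBest = 0; // "Minimum Test error found" in the log
      } else if (++sinceBest >= convergenceSteps) {
         break;
      }
   }
   return epoch;
}
```

In the TMVA_LSTM run above the counter never reaches 5, so all 20 epochs execute.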
Factory : Train method: TMVA_DNN for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 4 Input = ( 1, 1, 300 ) Batch size = 256 Loss function = C
Layer 0 DENSE Layer: ( Input = 300 , Width = 64 ) Output = ( 1 , 256 , 64 ) Activation Function = Tanh
Layer 1 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 256 , 64 ) Activation Function = Tanh
Layer 2 DENSE Layer: ( Input = 64 , Width = 64 ) Output = ( 1 , 256 , 64 ) Activation Function = Tanh
Layer 3 DENSE Layer: ( Input = 64 , Width = 1 ) Output = ( 1 , 256 , 1 ) Activation Function = Identity
: Using 2560 events for training and 640 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 0.839395
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.740708 0.697487 0.1974 0.0159097 14105.4 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.682784 0.687028 0.195573 0.0159195 14249.7 0
: 3 | 0.672843 0.688968 0.197868 0.0158741 14066.4 1
: 4 | 0.67029 0.687583 0.210681 0.0158459 13139.3 2
: 5 Minimum Test error found - save the configuration
: 5 | 0.674362 0.672698 0.197214 0.0158838 14117.9 0
: 6 Minimum Test error found - save the configuration
: 6 | 0.664037 0.670433 0.195572 0.0159363 14251.1 0
: 7 | 0.665152 0.670631 0.194304 0.0154463 14313 1
: 8 | 0.664493 0.682183 0.192735 0.0154894 14443.2 2
: 9 Minimum Test error found - save the configuration
: 9 | 0.653363 0.661246 0.193471 0.0156858 14399.4 0
: 10 Minimum Test error found - save the configuration
: 10 | 0.65421 0.66037 0.194419 0.0156613 14321.1 0
: 11 | 0.668499 0.690932 0.192647 0.0153012 14435 1
: 12 | 0.664946 0.671785 0.193038 0.0154584 14416.1 2
: 13 | 0.664661 0.671073 0.19302 0.0153319 14407.2 3
: 14 Minimum Test error found - save the configuration
: 14 | 0.661628 0.651117 0.194151 0.0157562 14350.2 0
: 15 | 0.658406 0.667652 0.194701 0.0153132 14270.7 1
: 16 Minimum Test error found - save the configuration
: 16 | 0.653411 0.64629 0.194084 0.0158876 14366.2 0
: 17 Minimum Test error found - save the configuration
: 17 | 0.636339 0.636469 0.193334 0.0156247 14405.6 0
: 18 | 0.633836 0.646756 0.210432 0.0155557 13136.5 1
: 19 | 0.639177 0.646111 0.195074 0.0156559 14268.4 2
: 20 | 0.638409 0.648298 0.193681 0.0153235 14353.2 3
:
: Elapsed time for training with 3200 events: 3.94 sec
: Evaluate deep neural network on CPU using batches with size = 256
:
TMVA_DNN : [dataset] : Evaluation of TMVA_DNN on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.102 sec
: Creating xml weight file: dataset/weights/TMVAClassification_TMVA_DNN.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_TMVA_DNN.class.C
Factory : Training finished
:
Factory : Train method: PyKeras_LSTM for Classification
:
: Split TMVA training data in 2560 training events and 640 validation events
: Training Model Summary
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 10, 30) 0
lstm (LSTM) (None, 10, 10) 1640
flatten (Flatten) (None, 100) 0
dense (Dense) (None, 64) 6464
dense_1 (Dense) (None, 2) 130
=================================================================
Total params: 8234 (32.16 KB)
Trainable params: 8234 (32.16 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
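SaveBestOnly follows the same rule as Keras's `ModelCheckpoint(save_best_only=True)`: weights are written only when the validation loss improves on the best value seen so far. The rule, sketched as a hypothetical helper in plain C++ (not TMVA or Keras code):

```cpp
#include <cassert>
#include <limits>

// Track the best validation loss across epochs; report true
// (i.e. "save the model") only when it improves.
struct BestCheckpoint {
   double best = std::numeric_limits<double>::infinity();

   bool OnEpochEnd(double valLoss)
   {
      if (valLoss < best) {
         best = valLoss; // a real callback would save weights here
         return true;
      }
      return false;
   }
};
```

This matches the "val_loss improved from inf to 0.68426, saving model" messages in the log below: the first epoch always improves on infinity, and later epochs save only on a new minimum.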
Epoch 1/20
1/26 [>.............................] - ETA: 42s - loss: 0.7426 - accuracy: 0.4700␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
7/26 [=======>......................] - ETA: 0s - loss: 0.7216 - accuracy: 0.5014 ␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
14/26 [===============>..............] - ETA: 0s - loss: 0.7152 - accuracy: 0.5114␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
22/26 [========================>.....] - ETA: 0s - loss: 0.7084 - accuracy: 0.5214
Epoch 1: val_loss improved from inf to 0.68426, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 3s 38ms/step - loss: 0.7064 - accuracy: 0.5277 - val_loss: 0.6843 - val_accuracy: 0.5766
Epoch 2/20
1/26 [>.............................] - ETA: 0s - loss: 0.6804 - accuracy: 0.5600␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
10/26 [==========>...................] - ETA: 0s - loss: 0.6878 - accuracy: 0.5430␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
17/26 [==================>...........] - ETA: 0s - loss: 0.6822 - accuracy: 0.5706␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - ETA: 0s - loss: 0.6783 - accuracy: 0.5824
Epoch 2: val_loss improved from 0.68426 to 0.65826, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 10ms/step - loss: 0.6783 - accuracy: 0.5824 - val_loss: 0.6583 - val_accuracy: 0.5891
Epoch 3/20
1/26 [>.............................] - ETA: 0s - loss: 0.6394 - accuracy: 0.6900␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
8/26 [========>.....................] - ETA: 0s - loss: 0.6538 - accuracy: 0.6187␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
15/26 [================>.............] - ETA: 0s - loss: 0.6558 - accuracy: 0.6133␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
21/26 [=======================>......] - ETA: 0s - loss: 0.6519 - accuracy: 0.6224
Epoch 3: val_loss improved from 0.65826 to 0.63096, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 11ms/step - loss: 0.6501 - accuracy: 0.6262 - val_loss: 0.6310 - val_accuracy: 0.6687
Epoch 4/20
1/26 [>.............................] - ETA: 0s - loss: 0.6154 - accuracy: 0.7000␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
8/26 [========>.....................] - ETA: 0s - loss: 0.6266 - accuracy: 0.6787␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
14/26 [===============>..............] - ETA: 0s - loss: 0.6295 - accuracy: 0.6643␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
21/26 [=======================>......] - ETA: 0s - loss: 0.6320 - accuracy: 0.6614
Epoch 4: val_loss improved from 0.63096 to 0.60434, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 11ms/step - loss: 0.6283 - accuracy: 0.6605 - val_loss: 0.6043 - val_accuracy: 0.6625
Epoch 5/20
1/26 [>.............................] - ETA: 0s - loss: 0.6122 - accuracy: 0.6800␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
8/26 [========>.....................] - ETA: 0s - loss: 0.6262 - accuracy: 0.6463␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
16/26 [=================>............] - ETA: 0s - loss: 0.6092 - accuracy: 0.6687␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
25/26 [===========================>..] - ETA: 0s - loss: 0.6030 - accuracy: 0.6808
Epoch 5: val_loss improved from 0.60434 to 0.57825, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 10ms/step - loss: 0.6020 - accuracy: 0.6809 - val_loss: 0.5783 - val_accuracy: 0.6969
Epoch 6/20
1/26 [>.............................] - ETA: 0s - loss: 0.5942 - accuracy: 0.6900␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
9/26 [=========>....................] - ETA: 0s - loss: 0.5739 - accuracy: 0.7111␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
19/26 [====================>.........] - ETA: 0s - loss: 0.5752 - accuracy: 0.7095
Epoch 6: val_loss improved from 0.57825 to 0.55883, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 9ms/step - loss: 0.5749 - accuracy: 0.7059 - val_loss: 0.5588 - val_accuracy: 0.7078
Epoch 7/20
1/26 [>.............................] - ETA: 0s - loss: 0.5109 - accuracy: 0.7400␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
11/26 [===========>..................] - ETA: 0s - loss: 0.5547 - accuracy: 0.7182␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
20/26 [======================>.......] - ETA: 0s - loss: 0.5496 - accuracy: 0.7285
Epoch 7: val_loss improved from 0.55883 to 0.53209, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 9ms/step - loss: 0.5533 - accuracy: 0.7246 - val_loss: 0.5321 - val_accuracy: 0.7344
Epoch 8/20
1/26 [>.............................] - ETA: 0s - loss: 0.5941 - accuracy: 0.6400␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
9/26 [=========>....................] - ETA: 0s - loss: 0.5474 - accuracy: 0.7189␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
17/26 [==================>...........] - ETA: 0s - loss: 0.5275 - accuracy: 0.7429␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
25/26 [===========================>..] - ETA: 0s - loss: 0.5284 - accuracy: 0.7440
Epoch 8: val_loss improved from 0.53209 to 0.51428, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 10ms/step - loss: 0.5282 - accuracy: 0.7441 - val_loss: 0.5143 - val_accuracy: 0.7484
Epoch 9/20
1/26 [>.............................] - ETA: 0s - loss: 0.5150 - accuracy: 0.7400␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
7/26 [=======>......................] - ETA: 0s - loss: 0.5344 - accuracy: 0.7357␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
15/26 [================>.............] - ETA: 0s - loss: 0.5192 - accuracy: 0.7573␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
21/26 [=======================>......] - ETA: 0s - loss: 0.5056 - accuracy: 0.7667
Epoch 9: val_loss improved from 0.51428 to 0.48733, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 11ms/step - loss: 0.5052 - accuracy: 0.7660 - val_loss: 0.4873 - val_accuracy: 0.7750
Epoch 10/20
1/26 [>.............................] - ETA: 0s - loss: 0.4458 - accuracy: 0.7900␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
10/26 [==========>...................] - ETA: 0s - loss: 0.5016 - accuracy: 0.7670␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
18/26 [===================>..........] - ETA: 0s - loss: 0.4854 - accuracy: 0.7811
Epoch 10: val_loss improved from 0.48733 to 0.47342, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 9ms/step - loss: 0.4850 - accuracy: 0.7746 - val_loss: 0.4734 - val_accuracy: 0.7859
Epoch 11/20
1/26 [>.............................] - ETA: 0s - loss: 0.5081 - accuracy: 0.7500␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
9/26 [=========>....................] - ETA: 0s - loss: 0.4705 - accuracy: 0.7856␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
16/26 [=================>............] - ETA: 0s - loss: 0.4783 - accuracy: 0.7819␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
24/26 [==========================>...] - ETA: 0s - loss: 0.4710 - accuracy: 0.7862
Epoch 11: val_loss improved from 0.47342 to 0.47119, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 10ms/step - loss: 0.4685 - accuracy: 0.7871 - val_loss: 0.4712 - val_accuracy: 0.7859
Epoch 12/20
1/26 [>.............................] - ETA: 0s - loss: 0.4571 - accuracy: 0.7900␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
9/26 [=========>....................] - ETA: 0s - loss: 0.4335 - accuracy: 0.8133␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
18/26 [===================>..........] - ETA: 0s - loss: 0.4380 - accuracy: 0.8072
Epoch 12: val_loss improved from 0.47119 to 0.45791, saving model to trained_model_LSTM.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/26 [==============================] - 0s 10ms/step - loss: 0.4549 - accuracy: 0.7980 - val_loss: 0.4579 - val_accuracy: 0.7766
Epoch 13/20
Epoch 13: val_loss improved from 0.45791 to 0.45601, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 11ms/step - loss: 0.4457 - accuracy: 0.7969 - val_loss: 0.4560 - val_accuracy: 0.7750
Epoch 14/20
Epoch 14: val_loss improved from 0.45601 to 0.44843, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 10ms/step - loss: 0.4337 - accuracy: 0.8047 - val_loss: 0.4484 - val_accuracy: 0.7875
Epoch 15/20
Epoch 15: val_loss improved from 0.44843 to 0.44265, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 11ms/step - loss: 0.4256 - accuracy: 0.8066 - val_loss: 0.4426 - val_accuracy: 0.7969
Epoch 16/20
Epoch 16: val_loss did not improve from 0.44265
26/26 [==============================] - 0s 8ms/step - loss: 0.4213 - accuracy: 0.8098 - val_loss: 0.4496 - val_accuracy: 0.7844
Epoch 17/20
Epoch 17: val_loss did not improve from 0.44265
26/26 [==============================] - 0s 8ms/step - loss: 0.4172 - accuracy: 0.8121 - val_loss: 0.4471 - val_accuracy: 0.7891
Epoch 18/20
Epoch 18: val_loss improved from 0.44265 to 0.42915, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 10ms/step - loss: 0.4022 - accuracy: 0.8242 - val_loss: 0.4292 - val_accuracy: 0.8047
Epoch 19/20
Epoch 19: val_loss improved from 0.42915 to 0.42797, saving model to trained_model_LSTM.h5
26/26 [==============================] - 0s 10ms/step - loss: 0.3916 - accuracy: 0.8313 - val_loss: 0.4280 - val_accuracy: 0.7937
Epoch 20/20
Epoch 20: val_loss did not improve from 0.42797
26/26 [==============================] - 0s 9ms/step - loss: 0.3948 - accuracy: 0.8293 - val_loss: 0.4498 - val_accuracy: 0.7891
: Getting training history for item:0 name = 'loss'
: Getting training history for item:1 name = 'accuracy'
: Getting training history for item:2 name = 'val_loss'
: Getting training history for item:3 name = 'val_accuracy'
: Elapsed time for training with 3200 events: 7.6 sec
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_LSTM.h5
PyKeras_LSTM : [dataset] : Evaluation of PyKeras_LSTM on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.3 sec
: Creating xml weight file: dataset/weights/TMVAClassification_PyKeras_LSTM.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_PyKeras_LSTM.class.C
Factory : Training finished
:
Factory : Train method: BDTG for Classification
:
BDTG : #events: (reweighted) sig: 1600 bkg: 1600
: #events: (unweighted) sig: 1600 bkg: 1600
: Training 100 Decision Trees ... patience please
: Elapsed time for training with 3200 events: 1.69 sec
BDTG : [dataset] : Evaluation of BDTG on training sample (3200 events)
: Elapsed time for evaluation of 3200 events: 0.0193 sec
: Creating xml weight file: dataset/weights/TMVAClassification_BDTG.weights.xml
: Creating standalone class: dataset/weights/TMVAClassification_BDTG.class.C
: data_RNN_CPU.root:/dataset/Method_BDT/BDTG
Factory : Training finished
:
: Ranking input variables (method specific)...
: No variable ranking supplied by classifier: TMVA_LSTM
: No variable ranking supplied by classifier: TMVA_DNN
: No variable ranking supplied by classifier: PyKeras_LSTM
BDTG : Ranking result (top variable is best ranked)
: --------------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------------
: 1 : vars_time8 : 2.327e-02
: 2 : vars_time8 : 2.320e-02
: 3 : vars_time9 : 2.192e-02
: 4 : vars_time9 : 2.073e-02
: 5 : vars_time7 : 2.027e-02
: 6 : vars_time8 : 1.907e-02
: 7 : vars_time6 : 1.891e-02
: 8 : vars_time6 : 1.758e-02
: 9 : vars_time9 : 1.732e-02
: 10 : vars_time8 : 1.713e-02
: 11 : vars_time7 : 1.690e-02
: 12 : vars_time7 : 1.689e-02
: 13 : vars_time6 : 1.630e-02
: 14 : vars_time9 : 1.611e-02
: 15 : vars_time7 : 1.565e-02
: 16 : vars_time9 : 1.552e-02
: 17 : vars_time7 : 1.537e-02
: 18 : vars_time5 : 1.523e-02
: 19 : vars_time6 : 1.501e-02
: 20 : vars_time5 : 1.479e-02
: 21 : vars_time6 : 1.467e-02
: 22 : vars_time8 : 1.463e-02
: 23 : vars_time0 : 1.330e-02
: 24 : vars_time1 : 1.309e-02
: 25 : vars_time8 : 1.290e-02
: 26 : vars_time6 : 1.274e-02
: 27 : vars_time5 : 1.271e-02
: 28 : vars_time7 : 1.243e-02
: 29 : vars_time6 : 1.235e-02
: 30 : vars_time0 : 1.178e-02
: 31 : vars_time6 : 1.128e-02
: 32 : vars_time9 : 1.061e-02
: 33 : vars_time5 : 1.058e-02
: 34 : vars_time7 : 1.050e-02
: 35 : vars_time7 : 1.013e-02
: 36 : vars_time8 : 9.930e-03
: 37 : vars_time5 : 9.705e-03
: 38 : vars_time0 : 9.502e-03
: 39 : vars_time9 : 9.456e-03
: 40 : vars_time9 : 9.441e-03
: 41 : vars_time6 : 9.340e-03
: 42 : vars_time3 : 9.331e-03
: 43 : vars_time5 : 9.299e-03
: 44 : vars_time1 : 9.284e-03
: 45 : vars_time9 : 9.086e-03
: 46 : vars_time7 : 8.823e-03
: 47 : vars_time5 : 8.816e-03
: 48 : vars_time4 : 8.691e-03
: 49 : vars_time4 : 8.421e-03
: 50 : vars_time9 : 8.172e-03
: 51 : vars_time7 : 7.828e-03
: 52 : vars_time0 : 7.774e-03
: 53 : vars_time6 : 7.653e-03
: 54 : vars_time9 : 7.600e-03
: 55 : vars_time9 : 7.253e-03
: 56 : vars_time9 : 7.251e-03
: 57 : vars_time1 : 7.181e-03
: 58 : vars_time9 : 7.093e-03
: 59 : vars_time8 : 7.066e-03
: 60 : vars_time9 : 7.052e-03
: 61 : vars_time7 : 6.999e-03
: 62 : vars_time7 : 6.983e-03
: 63 : vars_time3 : 6.673e-03
: 64 : vars_time0 : 6.589e-03
: 65 : vars_time4 : 6.497e-03
: 66 : vars_time7 : 6.468e-03
: 67 : vars_time5 : 6.444e-03
: 68 : vars_time6 : 6.342e-03
: 69 : vars_time0 : 6.263e-03
: 70 : vars_time3 : 6.220e-03
: 71 : vars_time7 : 6.170e-03
: 72 : vars_time6 : 6.031e-03
: 73 : vars_time1 : 6.026e-03
: 74 : vars_time7 : 6.026e-03
: 75 : vars_time5 : 6.018e-03
: 76 : vars_time7 : 5.978e-03
: 77 : vars_time8 : 5.958e-03
: 78 : vars_time0 : 5.947e-03
: 79 : vars_time1 : 5.830e-03
: 80 : vars_time5 : 5.822e-03
: 81 : vars_time8 : 5.782e-03
: 82 : vars_time9 : 5.777e-03
: 83 : vars_time1 : 5.667e-03
: 84 : vars_time8 : 5.616e-03
: 85 : vars_time2 : 5.605e-03
: 86 : vars_time9 : 5.537e-03
: 87 : vars_time4 : 5.460e-03
: 88 : vars_time7 : 5.394e-03
: 89 : vars_time0 : 5.388e-03
: 90 : vars_time8 : 5.384e-03
: 91 : vars_time8 : 5.266e-03
: 92 : vars_time0 : 5.237e-03
: 93 : vars_time3 : 4.967e-03
: 94 : vars_time1 : 4.574e-03
: 95 : vars_time9 : 4.509e-03
: 96 : vars_time3 : 4.490e-03
: 97 : vars_time8 : 4.411e-03
: 98 : vars_time1 : 4.289e-03
: 99 : vars_time0 : 4.201e-03
: 100 : vars_time6 : 3.827e-03
: 101 : vars_time0 : 3.719e-03
: 102 : vars_time2 : 3.718e-03
: 103 : vars_time5 : 3.706e-03
: 104 : vars_time2 : 3.658e-03
: 105 : vars_time5 : 3.528e-03
: 106 : vars_time1 : 3.068e-03
: 107 : vars_time0 : 0.000e+00
: 108 : vars_time0 : 0.000e+00
: 109 : vars_time0 : 0.000e+00
: 110 : vars_time0 : 0.000e+00
: 111 : vars_time0 : 0.000e+00
: 112 : vars_time0 : 0.000e+00
: 113 : vars_time0 : 0.000e+00
: 114 : vars_time0 : 0.000e+00
: 115 : vars_time0 : 0.000e+00
: 116 : vars_time0 : 0.000e+00
: 117 : vars_time0 : 0.000e+00
: 118 : vars_time0 : 0.000e+00
: 119 : vars_time0 : 0.000e+00
: 120 : vars_time0 : 0.000e+00
: 121 : vars_time0 : 0.000e+00
: 122 : vars_time0 : 0.000e+00
: 123 : vars_time0 : 0.000e+00
: 124 : vars_time0 : 0.000e+00
: 125 : vars_time0 : 0.000e+00
: 126 : vars_time1 : 0.000e+00
: 127 : vars_time1 : 0.000e+00
: 128 : vars_time1 : 0.000e+00
: 129 : vars_time1 : 0.000e+00
: 130 : vars_time1 : 0.000e+00
: 131 : vars_time1 : 0.000e+00
: 132 : vars_time1 : 0.000e+00
: 133 : vars_time1 : 0.000e+00
: 134 : vars_time1 : 0.000e+00
: 135 : vars_time1 : 0.000e+00
: 136 : vars_time1 : 0.000e+00
: 137 : vars_time1 : 0.000e+00
: 138 : vars_time1 : 0.000e+00
: 139 : vars_time1 : 0.000e+00
: 140 : vars_time1 : 0.000e+00
: 141 : vars_time1 : 0.000e+00
: 142 : vars_time1 : 0.000e+00
: 143 : vars_time1 : 0.000e+00
: 144 : vars_time1 : 0.000e+00
: 145 : vars_time1 : 0.000e+00
: 146 : vars_time1 : 0.000e+00
: 147 : vars_time2 : 0.000e+00
: 148 : vars_time2 : 0.000e+00
: 149 : vars_time2 : 0.000e+00
: 150 : vars_time2 : 0.000e+00
: 151 : vars_time2 : 0.000e+00
: 152 : vars_time2 : 0.000e+00
: 153 : vars_time2 : 0.000e+00
: 154 : vars_time2 : 0.000e+00
: 155 : vars_time2 : 0.000e+00
: 156 : vars_time2 : 0.000e+00
: 157 : vars_time2 : 0.000e+00
: 158 : vars_time2 : 0.000e+00
: 159 : vars_time2 : 0.000e+00
: 160 : vars_time2 : 0.000e+00
: 161 : vars_time2 : 0.000e+00
: 162 : vars_time2 : 0.000e+00
: 163 : vars_time2 : 0.000e+00
: 164 : vars_time2 : 0.000e+00
: 165 : vars_time2 : 0.000e+00
: 166 : vars_time2 : 0.000e+00
: 167 : vars_time2 : 0.000e+00
: 168 : vars_time2 : 0.000e+00
: 169 : vars_time2 : 0.000e+00
: 170 : vars_time2 : 0.000e+00
: 171 : vars_time2 : 0.000e+00
: 172 : vars_time2 : 0.000e+00
: 173 : vars_time2 : 0.000e+00
: 174 : vars_time3 : 0.000e+00
: 175 : vars_time3 : 0.000e+00
: 176 : vars_time3 : 0.000e+00
: 177 : vars_time3 : 0.000e+00
: 178 : vars_time3 : 0.000e+00
: 179 : vars_time3 : 0.000e+00
: 180 : vars_time3 : 0.000e+00
: 181 : vars_time3 : 0.000e+00
: 182 : vars_time3 : 0.000e+00
: 183 : vars_time3 : 0.000e+00
: 184 : vars_time3 : 0.000e+00
: 185 : vars_time3 : 0.000e+00
: 186 : vars_time3 : 0.000e+00
: 187 : vars_time3 : 0.000e+00
: 188 : vars_time3 : 0.000e+00
: 189 : vars_time3 : 0.000e+00
: 190 : vars_time3 : 0.000e+00
: 191 : vars_time3 : 0.000e+00
: 192 : vars_time3 : 0.000e+00
: 193 : vars_time3 : 0.000e+00
: 194 : vars_time3 : 0.000e+00
: 195 : vars_time3 : 0.000e+00
: 196 : vars_time3 : 0.000e+00
: 197 : vars_time3 : 0.000e+00
: 198 : vars_time3 : 0.000e+00
: 199 : vars_time4 : 0.000e+00
: 200 : vars_time4 : 0.000e+00
: 201 : vars_time4 : 0.000e+00
: 202 : vars_time4 : 0.000e+00
: 203 : vars_time4 : 0.000e+00
: 204 : vars_time4 : 0.000e+00
: 205 : vars_time4 : 0.000e+00
: 206 : vars_time4 : 0.000e+00
: 207 : vars_time4 : 0.000e+00
: 208 : vars_time4 : 0.000e+00
: 209 : vars_time4 : 0.000e+00
: 210 : vars_time4 : 0.000e+00
: 211 : vars_time4 : 0.000e+00
: 212 : vars_time4 : 0.000e+00
: 213 : vars_time4 : 0.000e+00
: 214 : vars_time4 : 0.000e+00
: 215 : vars_time4 : 0.000e+00
: 216 : vars_time4 : 0.000e+00
: 217 : vars_time4 : 0.000e+00
: 218 : vars_time4 : 0.000e+00
: 219 : vars_time4 : 0.000e+00
: 220 : vars_time4 : 0.000e+00
: 221 : vars_time4 : 0.000e+00
: 222 : vars_time4 : 0.000e+00
: 223 : vars_time4 : 0.000e+00
: 224 : vars_time4 : 0.000e+00
: 225 : vars_time5 : 0.000e+00
: 226 : vars_time5 : 0.000e+00
: 227 : vars_time5 : 0.000e+00
: 228 : vars_time5 : 0.000e+00
: 229 : vars_time5 : 0.000e+00
: 230 : vars_time5 : 0.000e+00
: 231 : vars_time5 : 0.000e+00
: 232 : vars_time5 : 0.000e+00
: 233 : vars_time5 : 0.000e+00
: 234 : vars_time5 : 0.000e+00
: 235 : vars_time5 : 0.000e+00
: 236 : vars_time5 : 0.000e+00
: 237 : vars_time5 : 0.000e+00
: 238 : vars_time5 : 0.000e+00
: 239 : vars_time5 : 0.000e+00
: 240 : vars_time5 : 0.000e+00
: 241 : vars_time5 : 0.000e+00
: 242 : vars_time5 : 0.000e+00
: 243 : vars_time6 : 0.000e+00
: 244 : vars_time6 : 0.000e+00
: 245 : vars_time6 : 0.000e+00
: 246 : vars_time6 : 0.000e+00
: 247 : vars_time6 : 0.000e+00
: 248 : vars_time6 : 0.000e+00
: 249 : vars_time6 : 0.000e+00
: 250 : vars_time6 : 0.000e+00
: 251 : vars_time6 : 0.000e+00
: 252 : vars_time6 : 0.000e+00
: 253 : vars_time6 : 0.000e+00
: 254 : vars_time6 : 0.000e+00
: 255 : vars_time6 : 0.000e+00
: 256 : vars_time6 : 0.000e+00
: 257 : vars_time6 : 0.000e+00
: 258 : vars_time6 : 0.000e+00
: 259 : vars_time6 : 0.000e+00
: 260 : vars_time7 : 0.000e+00
: 261 : vars_time7 : 0.000e+00
: 262 : vars_time7 : 0.000e+00
: 263 : vars_time7 : 0.000e+00
: 264 : vars_time7 : 0.000e+00
: 265 : vars_time7 : 0.000e+00
: 266 : vars_time7 : 0.000e+00
: 267 : vars_time7 : 0.000e+00
: 268 : vars_time7 : 0.000e+00
: 269 : vars_time7 : 0.000e+00
: 270 : vars_time7 : 0.000e+00
: 271 : vars_time7 : 0.000e+00
: 272 : vars_time7 : 0.000e+00
: 273 : vars_time8 : 0.000e+00
: 274 : vars_time8 : 0.000e+00
: 275 : vars_time8 : 0.000e+00
: 276 : vars_time8 : 0.000e+00
: 277 : vars_time8 : 0.000e+00
: 278 : vars_time8 : 0.000e+00
: 279 : vars_time8 : 0.000e+00
: 280 : vars_time8 : 0.000e+00
: 281 : vars_time8 : 0.000e+00
: 282 : vars_time8 : 0.000e+00
: 283 : vars_time8 : 0.000e+00
: 284 : vars_time8 : 0.000e+00
: 285 : vars_time8 : 0.000e+00
: 286 : vars_time8 : 0.000e+00
: 287 : vars_time8 : 0.000e+00
: 288 : vars_time8 : 0.000e+00
: 289 : vars_time9 : 0.000e+00
: 290 : vars_time9 : 0.000e+00
: 291 : vars_time9 : 0.000e+00
: 292 : vars_time9 : 0.000e+00
: 293 : vars_time9 : 0.000e+00
: 294 : vars_time9 : 0.000e+00
: 295 : vars_time9 : 0.000e+00
: 296 : vars_time9 : 0.000e+00
: 297 : vars_time9 : 0.000e+00
: 298 : vars_time9 : 0.000e+00
: 299 : vars_time9 : 0.000e+00
: 300 : vars_time9 : 0.000e+00
: --------------------------------------------
TH1.Print Name = TrainingHistory_TMVA_LSTM_trainingError, Entries= 0, Total sum= 12.0369
TH1.Print Name = TrainingHistory_TMVA_LSTM_valError, Entries= 0, Total sum= 12.6342
TH1.Print Name = TrainingHistory_TMVA_DNN_trainingError, Entries= 0, Total sum= 13.2616
TH1.Print Name = TrainingHistory_TMVA_DNN_valError, Entries= 0, Total sum= 13.3551
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'accuracy', Entries= 0, Total sum= 14.893
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'loss', Entries= 0, Total sum= 10.1672
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'val_accuracy', Entries= 0, Total sum= 14.8281
TH1.Print Name = TrainingHistory_PyKeras_LSTM_'val_loss', Entries= 0, Total sum= 10.2018
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVAClassification_TMVA_LSTM.weights.xml
: Reading weight file: dataset/weights/TMVAClassification_TMVA_DNN.weights.xml
: Reading weight file: dataset/weights/TMVAClassification_PyKeras_LSTM.weights.xml
: Reading weight file: dataset/weights/TMVAClassification_BDTG.weights.xml
nthreads = 4
Factory : Test all methods
Factory : Test method: TMVA_LSTM for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 800
:
TMVA_LSTM : [dataset] : Evaluation of TMVA_LSTM on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.0488 sec
Factory : Test method: TMVA_DNN for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 800
:
TMVA_DNN : [dataset] : Evaluation of TMVA_DNN on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.021 sec
Factory : Test method: PyKeras_LSTM for Classification performance
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Disabled TF eager execution when evaluating model
: Loading Keras Model
: Loaded model from file: trained_model_LSTM.h5
PyKeras_LSTM : [dataset] : Evaluation of PyKeras_LSTM on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.235 sec
Factory : Test method: BDTG for Classification performance
:
BDTG : [dataset] : Evaluation of BDTG on testing sample (800 events)
: Elapsed time for evaluation of 800 events: 0.00573 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: TMVA_LSTM
:
TMVA_LSTM : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
Factory : Evaluate classifier: TMVA_DNN
:
TMVA_DNN : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
Factory : Evaluate classifier: PyKeras_LSTM
:
PyKeras_LSTM : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
Factory : Evaluate classifier: BDTG
:
BDTG : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 300 , it is larger than 200
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset PyKeras_LSTM : 0.863
: dataset BDTG : 0.835
: dataset TMVA_LSTM : 0.758
: dataset TMVA_DNN : 0.688
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset PyKeras_LSTM : 0.181 (0.310) 0.610 (0.668) 0.860 (0.891)
: dataset BDTG : 0.130 (0.325) 0.528 (0.670) 0.818 (0.889)
: dataset TMVA_LSTM : 0.075 (0.165) 0.345 (0.485) 0.705 (0.758)
: dataset TMVA_DNN : 0.039 (0.044) 0.262 (0.242) 0.559 (0.565)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 800 events
:
Dataset:dataset : Created tree 'TrainTree' with 3200 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
/***
# TMVA Classification Example Using a Recurrent Neural Network
This is an example of using a RNN in TMVA.
We perform the classification using a toy data set containing a time series of `ntime` measurements,
each of dimension `ndim`, generated by the provided function `MakeTimeData(nevents, ntime, ndim)`.
**/
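// Example invocation (hypothetical session; adjust the arguments as needed):
//   root -l -q 'TMVA_RNN_Classification.C(2000, 1)'   // 2000 events per class, use an LSTM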
#include<TROOT.h>
#include "TMVA/Factory.h"
#include "TMVA/Config.h"
#include "TMVA/MethodDL.h"
#include "TFile.h"
#include "TTree.h"
/// Helper function to generate the time-dependent data set.
/// For each of the ntime time steps a Gaussian histogram with ndim bins is filled;
/// signal and background differ in the time evolution of the Gaussian mean and width.
///
void MakeTimeData(int n, int ntime, int ndim )
{
// const int ntime = 10;
// const int ndim = 30; // number of dim/time
TString fname = TString::Format("time_data_t%d_d%d.root", ntime, ndim);
std::vector<TH1 *> v1(ntime);
std::vector<TH1 *> v2(ntime);
for (int i = 0; i < ntime; ++i) {
v1[i] = new TH1D(TString::Format("h1_%d", i), "h1", ndim, 0, 10);
v2[i] = new TH1D(TString::Format("h2_%d", i), "h2", ndim, 0, 10);
}
auto f1 = new TF1("f1", "gaus");
auto f2 = new TF1("f2", "gaus");
TFile f(fname, "RECREATE");
TTree sgn("sgn", "sgn");
TTree bkg("bkg", "bkg");
std::vector<std::vector<float>> x1(ntime);
std::vector<std::vector<float>> x2(ntime);
for (int i = 0; i < ntime; ++i) {
x1[i] = std::vector<float>(ndim);
x2[i] = std::vector<float>(ndim);
}
for (auto i = 0; i < ntime; i++) {
bkg.Branch(Form("vars_time%d", i), "std::vector<float>", &x1[i]);
sgn.Branch(Form("vars_time%d", i), "std::vector<float>", &x2[i]);
}
sgn.SetDirectory(&f);
bkg.SetDirectory(&f);
std::vector<double> mean1(ntime);
std::vector<double> mean2(ntime);
std::vector<double> sigma1(ntime);
std::vector<double> sigma2(ntime);
for (int j = 0; j < ntime; ++j) {
mean1[j] = 5. + 0.2 * sin(TMath::Pi() * j / double(ntime));
mean2[j] = 5. + 0.2 * cos(TMath::Pi() * j / double(ntime));
sigma1[j] = 4 + 0.3 * sin(TMath::Pi() * j / double(ntime));
sigma2[j] = 4 + 0.3 * cos(TMath::Pi() * j / double(ntime));
}
for (int i = 0; i < n; ++i) {
if (i % 1000 == 0)
std::cout << "Generating event ... " << i << std::endl;
for (int j = 0; j < ntime; ++j) {
auto h1 = v1[j];
auto h2 = v2[j];
h1->Reset();
h2->Reset();
f1->SetParameters(1, mean1[j], sigma1[j]);
f2->SetParameters(1, mean2[j], sigma2[j]);
h1->FillRandom("f1", 1000);
h2->FillRandom("f2", 1000);
for (int k = 0; k < ndim; ++k) {
// std::cout << j*10+k << " ";
x1[j][k] = h1->GetBinContent(k + 1) + gRandom->Gaus(0, 10);
x2[j][k] = h2->GetBinContent(k + 1) + gRandom->Gaus(0, 10);
}
}
// std::cout << std::endl;
sgn.Fill();
bkg.Fill();
if (n == 1) {
auto c1 = new TCanvas();
c1->Divide(ntime, 2);
for (int j = 0; j < ntime; ++j) {
c1->cd(j + 1);
v1[j]->Draw();
}
for (int j = 0; j < ntime; ++j) {
c1->cd(ntime + j + 1);
v2[j]->Draw();
}
gPad->Update();
}
}
if (n > 1) {
sgn.Write();
bkg.Write();
sgn.Print();
bkg.Print();
f.Close();
}
}
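// The resulting file (e.g. time_data_t10_d30.root) contains two trees, "sgn" and "bkg",
// each with ntime branches vars_time0 ... vars_time<ntime-1>, every branch holding a
// std::vector<float> of ndim values per event.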
/// macro for performing a classification using a Recurrent Neural Network
/// @param nevts = 2000 Number of events used. (increase for better classification results)
/// @param use_type
/// use_type = 0 use Simple RNN network
/// use_type = 1 use LSTM network
/// use_type = 2 use GRU
/// use_type = 3 build 3 different networks with RNN, LSTM and GRU
void TMVA_RNN_Classification(int nevts = 2000, int use_type = 1)
{
const int ninput = 30;
const int ntime = 10;
const int batchSize = 100;
const int maxepochs = 20;
int nTotEvts = nevts; // total events to be generated for signal or background
bool useKeras = true;
bool useTMVA_RNN = true;
bool useTMVA_DNN = true;
bool useTMVA_BDT = false;
std::vector<std::string> rnn_types = {"RNN", "LSTM", "GRU"};
std::vector<bool> use_rnn_type = {1, 1, 1};
if (use_type >=0 && use_type < 3) {
use_rnn_type = {0,0,0};
use_rnn_type[use_type] = 1;
}
bool useGPU = true; // use GPU for TMVA if available
#ifndef R__HAS_TMVAGPU
useGPU = false;
#ifndef R__HAS_TMVACPU
Warning("TMVA_RNN_Classification", "TMVA is not built with GPU or CPU multi-thread support. Cannot use TMVA Deep Learning for RNN");
useTMVA_RNN = false;
#endif
#endif
TString archString = (useGPU) ? "GPU" : "CPU";
bool writeOutputFile = true;
const char *rnn_type = "RNN";
#ifdef R__HAS_PYMVA
#else
useKeras = false;
#endif
int num_threads = 4; // number of threads to use (0 = use all available cores)
gSystem->Setenv("OMP_NUM_THREADS", "1"); // switch off MT in OpenBLAS
// do enable MT running
if (num_threads >= 0) {
ROOT::EnableImplicitMT(num_threads);
}
std::cout << "Running with nthreads = " << ROOT::GetThreadPoolSize() << std::endl;
TString inputFileName = "time_data_t10_d30.root";
bool fileExist = !gSystem->AccessPathName(inputFileName);
// if the file does not exist, create it
if (!fileExist) {
MakeTimeData(nTotEvts,ntime, ninput);
}
auto inputFile = TFile::Open(inputFileName);
if (!inputFile) {
Error("TMVA_RNN_Classification", "Error opening input file %s - exit", inputFileName.Data());
return;
}
std::cout << "--- RNNClassification : Using input file: " << inputFile->GetName() << std::endl;
// Create a ROOT output file where TMVA will store ntuples, histograms, etc.
TString outfileName(TString::Format("data_RNN_%s.root", archString.Data()));
TFile *outputFile = nullptr;
if (writeOutputFile) outputFile = TFile::Open(outfileName, "RECREATE");
/**
## Declare Factory
Create the Factory class. Later you can choose the methods
whose performance you'd like to investigate.
The factory is the major TMVA object you have to interact with. Here is the list of parameters you need to
pass
- The first argument is the base of the name of all the output
weightfiles in the directory weight/ that will be created with the
method parameters
- The second argument is the output file for the training results
- The third argument is a string option defining some general configuration for the TMVA session.
For example all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in
the option string
**/
// Creating the factory object
TMVA::Factory *factory = new TMVA::Factory("TMVAClassification", outputFile,
"!V:!Silent:Color:DrawProgressBar:Transformations=None:!Correlations:"
"AnalysisType=Classification:ModelPersistence");
TMVA::DataLoader *dataloader = new TMVA::DataLoader("dataset");
TTree *signalTree = (TTree *)inputFile->Get("sgn");
TTree *background = (TTree *)inputFile->Get("bkg");
const int nvar = ninput * ntime;
/// add the input variables using the AddVariablesArray function
for (auto i = 0; i < ntime; i++) {
dataloader->AddVariablesArray(Form("vars_time%d", i), ninput);
}
dataloader->AddSignalTree(signalTree, 1.0);
dataloader->AddBackgroundTree(background, 1.0);
// check given input
auto &datainfo = dataloader->GetDataSetInfo();
auto vars = datainfo.GetListOfVariables();
std::cout << "number of variables is " << vars.size() << std::endl;
for (auto &v : vars)
std::cout << v << ",";
std::cout << std::endl;
int nTrainSig = 0.8 * nTotEvts;
int nTrainBkg = 0.8 * nTotEvts;
// build the string options for DataLoader::PrepareTrainingAndTestTree
TString prepareOptions = TString::Format("nTrain_Signal=%d:nTrain_Background=%d:SplitMode=Random:SplitSeed=100:NormMode=NumEvents:!V:!CalcCorrelations", nTrainSig, nTrainBkg);
// Apply additional cuts on the signal and background samples (can be different)
TCut mycuts = ""; // for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
TCut mycutb = "";
dataloader->PrepareTrainingAndTestTree(mycuts, mycutb, prepareOptions);
std::cout << "prepared DATA LOADER " << std::endl;
/**
## Book TMVA recurrent models
Book the different types of recurrent models in TMVA (SimpleRNN, LSTM or GRU)
**/
if (useTMVA_RNN) {
for (int i = 0; i < 3; ++i) {
if (!use_rnn_type[i])
continue;
const char *rnn_type = rnn_types[i].c_str();
/// define the input layout string for the RNN;
/// the input data should be organized as follows:
/// input layout for RNN: time x ndim
TString inputLayoutString = TString::Format("InputLayout=%d|%d", ntime, ninput);
/// Define the RNN layer layout. The format is:
/// LayerType (RNN, LSTM or GRU) | number of units | number of inputs | time steps | remember state (typically no = 0) | return full output sequence (yes = 1)
TString rnnLayout = TString::Format("%s|10|%d|%d|0|1", rnn_type, ninput, ntime);
/// after the RNN, add a reshape layer (needed to flatten the output), a dense layer with 64 units and a final output layer.
/// Note the last layer is linear because, when using cross-entropy, a sigmoid is already applied
TString layoutString = TString("Layout=") + rnnLayout + TString(",RESHAPE|FLAT,DENSE|64|TANH,LINEAR");
/// Define the training strategy. Several training strings can be concatenated with "|"; here only one is used
TString trainingString1 = TString::Format("LearningRate=1e-3,Momentum=0.0,Repetitions=1,"
"ConvergenceSteps=5,BatchSize=%d,TestRepetitions=1,"
"WeightDecay=1e-2,Regularization=None,MaxEpochs=%d,"
"Optimizer=ADAM,DropConfig=0.0+0.+0.+0.",
batchSize,maxepochs);
TString trainingStrategyString("TrainingStrategy=");
trainingStrategyString += trainingString1; // + "|" + trainingString2
/// Define the full RNN option string, adding the final options for the whole network
TString rnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
"WeightInitialization=XAVIERUNIFORM:ValidationSize=0.2:RandomSeed=1234");
rnnOptions.Append(":");
rnnOptions.Append(inputLayoutString);
rnnOptions.Append(":");
rnnOptions.Append(layoutString);
rnnOptions.Append(":");
rnnOptions.Append(trainingStrategyString);
rnnOptions.Append(":");
rnnOptions.Append(TString::Format("Architecture=%s", archString.Data()));
TString rnnName = "TMVA_" + TString(rnn_type);
factory->BookMethod(dataloader, TMVA::Types::kDL, rnnName, rnnOptions);
}
}
/**
## Book TMVA fully connected dense layer models
**/
if (useTMVA_DNN) {
// Method DL with Dense Layer
TString inputLayoutString = TString::Format("InputLayout=1|1|%d", ntime * ninput);
TString layoutString("Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,LINEAR");
// Training strategies.
TString trainingString1("LearningRate=1e-3,Momentum=0.0,Repetitions=1,"
"ConvergenceSteps=10,BatchSize=256,TestRepetitions=1,"
"WeightDecay=1e-4,Regularization=None,MaxEpochs=20,"
"DropConfig=0.0+0.+0.+0.,Optimizer=ADAM");
TString trainingStrategyString("TrainingStrategy=");
trainingStrategyString += trainingString1; // + "|" + trainingString2
// General Options.
TString dnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
"WeightInitialization=XAVIER:RandomSeed=0");
dnnOptions.Append(":");
dnnOptions.Append(inputLayoutString);
dnnOptions.Append(":");
dnnOptions.Append(layoutString);
dnnOptions.Append(":");
dnnOptions.Append(trainingStrategyString);
dnnOptions.Append(":");
dnnOptions.Append(archString);
TString dnnName = "TMVA_DNN";
factory->BookMethod(dataloader, TMVA::Types::kDL, dnnName, dnnOptions);
}
/**
## Book Keras recurrent models
Book the different types of recurrent models in Keras (SimpleRNN, LSTM or GRU)
**/
if (useKeras) {
for (int i = 0; i < 3; i++) {
if (use_rnn_type[i]) {
TString modelName = TString::Format("model_%s.h5", rnn_types[i].c_str());
TString trainedModelName = TString::Format("trained_model_%s.h5", rnn_types[i].c_str());
Info("TMVA_RNN_Classification", "Building recurrent keras model using a %s layer", rnn_types[i].c_str());
// create a python script, which can be executed, that builds the Keras
// recurrent model: a recurrent layer followed by flatten and dense layers
m.AddLine("import tensorflow");
m.AddLine("from tensorflow.keras.models import Sequential");
m.AddLine("from tensorflow.keras.optimizers import Adam");
m.AddLine("from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, SimpleRNN, GRU, LSTM, Reshape, "
"BatchNormalization");
m.AddLine("");
m.AddLine("model = Sequential() ");
m.AddLine("model.add(Reshape((10, 30), input_shape = (10*30, )))");
// add recurrent neural network depending on type / Use option to return the full output
if (rnn_types[i] == "LSTM")
m.AddLine("model.add(LSTM(units=10, return_sequences=True) )");
else if (rnn_types[i] == "GRU")
m.AddLine("model.add(GRU(units=10, return_sequences=True) )");
else
m.AddLine("model.add(SimpleRNN(units=10, return_sequences=True) )");
// m.AddLine("model.add(BatchNormalization())");
m.AddLine("model.add(Flatten())"); // needed if returning the full time output sequence
m.AddLine("model.add(Dense(64, activation = 'tanh')) ");
m.AddLine("model.add(Dense(2, activation = 'sigmoid')) ");
m.AddLine(
"model.compile(loss = 'binary_crossentropy', optimizer = Adam(learning_rate = 0.001), weighted_metrics = ['accuracy'])");
m.AddLine(TString::Format("modelName = '%s'", modelName.Data()));
m.AddLine("model.save(modelName)");
m.AddLine("model.summary()");
m.SaveSource("make_rnn_model.py");
// execute python script to make the model
auto ret = (TString *)gROOT->ProcessLine("TMVA::Python_Executable()");
TString python_exe = (ret) ? *(ret) : "python";
gSystem->Exec(python_exe + " make_rnn_model.py");
if (gSystem->AccessPathName(modelName)) {
Warning("TMVA_RNN_Classification", "Error creating Keras recurrent model file - Skip using Keras");
useKeras = false;
} else {
// book PyKeras method only if Keras model could be created
Info("TMVA_RNN_Classification", "Booking Keras %s model", rnn_types[i].c_str());
factory->BookMethod(dataloader, TMVA::Types::kPyKeras,
TString::Format("PyKeras_%s", rnn_types[i].c_str()),
TString::Format("!H:!V:VarTransform=None:FilenameModel=%s:tf.keras:"
"FilenameTrainedModel=%s:GpuOptions=allow_growth=True:"
"NumEpochs=%d:BatchSize=%d",
modelName.Data(), trainedModelName.Data(), maxepochs, batchSize));
}
}
}
}
// make sure a BDT is booked as a reference method, e.g. when Keras is not used
if (!useKeras || !useTMVA_BDT)
useTMVA_BDT = true;
/**
## Book TMVA BDT
**/
if (useTMVA_BDT) {
factory->BookMethod(dataloader, TMVA::Types::kBDT, "BDTG",
"!H:!V:NTrees=100:MinNodeSize=2.5%:BoostType=Grad:Shrinkage=0.10:UseBaggedBoost:"
"BaggedSampleFraction=0.5:nCuts=20:"
"MaxDepth=2");
}
/// Train all methods
factory->TrainAllMethods();
std::cout << "nthreads = " << ROOT::GetThreadPoolSize() << std::endl;
// ---- Evaluate all MVAs using the set of test events
factory->TestAllMethods();
// ----- Evaluate and compare performance of all configured MVAs
factory->EvaluateAllMethods();
// check method
// plot ROC curve
auto c1 = factory->GetROCCurve(dataloader);
c1->Draw();
if (outputFile) outputFile->Close();
}
Author
Lorenzo Moneta

Definition in file TMVA_RNN_Classification.C.