ROOT Reference Guide
TMVA_CNN_Classification.C File Reference

Detailed Description

TMVA Classification Example Using a Convolutional Neural Network

This example shows how to use a convolutional neural network (CNN) in TMVA. Classification is performed on a toy image data set that is generated on the fly when the example macro is run.
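The toy data set itself is not shown in the log. A minimal stand-in generator, assuming the two classes are 16x16 Gaussian-blob images that differ only in blob width (a hypothetical model for illustration; the real macro defines its own toy generator and writes the `sig_tree`/`bkg_tree` trees), could look like:

```python
import math
import random

def make_image(width, n=16):
    """One toy 'event': an n x n image with a central 2D Gaussian
    intensity profile plus pixel noise (illustrative assumption)."""
    img = []
    c = (n - 1) / 2.0
    for i in range(n):
        for j in range(n):
            r2 = (i - c) ** 2 + (j - c) ** 2
            img.append(math.exp(-r2 / (2 * width ** 2)) + random.gauss(0, 0.05))
    return img

random.seed(0)
sig = [make_image(3.0) for _ in range(5)]  # "Signal": narrower blobs
bkg = [make_image(3.5) for _ in range(5)]  # "Background": wider blobs
print(len(sig[0]))  # 256 pixels per event, matching "vars of size 256" below
```

Each event is then a flat vector of 256 pixel values, which is why the log reports a single array expression `vars` of size 256.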

Running with nthreads = 16
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 5000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 5000 events
Factory : Booking method: BDT
:
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Using variable vars[0] from array expression vars of size 256
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
: Using variable vars[0] from array expression vars of size 256
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 4000
: Signal -- testing events : 1000
: Signal -- training and testing events: 5000
: Background -- training events : 4000
: Background -- testing events : 1000
: Background -- training and testing events: 5000
:
Factory : Booking method: TMVA_DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: Layout: "DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0." [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: InputLayout: "0|0|0" [The Layout of the input]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
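The event counts reported throughout the log follow from simple arithmetic. A quick check of the two 80/20 splits (the per-class counts 5000/4000/1000 come from the dataset summary above; ValidationSize=20% from the option listing):

```python
n_per_class = 5000                     # toy events generated per class
n_train_per_class = 4000               # requested for training
n_test_per_class = n_per_class - n_train_per_class   # 1000 left for testing

# ValidationSize=20%: a fifth of the combined training sample is held out
n_train_total = 2 * n_train_per_class          # signal + background = 8000
n_validation = n_train_total * 20 // 100       # 1600
print(n_train_total - n_validation, n_validation)  # 6400 1600
```

These numbers match the later line "Using 6400 events for training and 1600 for testing" in the deep-learning training output.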
Factory : Booking method: TMVA_CNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|16|16" [The Layout of the input]
: Layout: "CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,MaxEpochs=20,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 16, 16, 1) 0
conv2d (Conv2D) (None, 16, 16, 10) 100
batch_normalization (BatchNormalization) (None, 16, 16, 10) 40
conv2d_1 (Conv2D) (None, 16, 16, 10) 910
max_pooling2d (MaxPooling2D) (None, 15, 15, 10) 0
flatten (Flatten) (None, 2250) 0
dense (Dense) (None, 256) 576256
dense_1 (Dense) (None, 2) 514
=================================================================
Total params: 577,820
Trainable params: 577,800
Non-trainable params: 20
_________________________________________________________________
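The parameter counts in this summary can be reproduced by hand from the layer shapes; a standalone arithmetic check (no Keras needed):

```python
# Recompute the parameter counts shown in the Keras model summary.
conv2d = (3 * 3 * 1 + 1) * 10     # 3x3 kernel, 1 input channel, 10 filters + biases
batch_norm = 4 * 10               # gamma, beta, moving mean, moving variance per channel
conv2d_1 = (3 * 3 * 10 + 1) * 10  # 3x3 kernel, 10 input channels, 10 filters + biases
flat = 15 * 15 * 10               # 2x2 max-pool, stride 1, on 16x16 gives 15x15
dense = (flat + 1) * 256          # 2250 inputs -> 256 units, plus biases
dense_1 = (256 + 1) * 2           # 256 inputs -> 2 output units, plus biases

total = conv2d + batch_norm + conv2d_1 + dense + dense_1
trainable = total - 2 * 10        # the 20 moving statistics are not trainable
print(total, trainable)           # 577820 577800
```

The 20 non-trainable parameters are the batch-normalization moving mean and variance (10 channels each).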
Factory : Booking method: PyKeras
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Loading Keras Model
: Loaded model from file: model_cnn.h5
Factory : Booking method: PyTorch
:
: Using PyTorch - setting special configuration options
: Using PyTorch version 1
: Setup PyTorch Model
: Executing user initialization code from /home/sftnight/build/workspace/root-makedoc-v626/rootspi/rdoc/src/v6-26-00-patches.build/tutorials/tmva/PyTorch_Generate_CNN_Model.py
: Loaded pytorch train function:
: Loaded pytorch optimizer:
: Loaded pytorch loss function:
: Loaded pytorch predict function:
: Load model from file: PyTorchModelCNN.pt
Factory : Train all methods
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 4000 bkg: 4000
: #events: (unweighted) sig: 4000 bkg: 4000
: Training 400 Decision Trees ... patience please
: Elapsed time for training with 8000 events: 5.85 sec
BDT : [dataset] : Evaluation of BDT on training sample (8000 events)
: Elapsed time for evaluation of 8000 events: 0.183 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_BDT.class.C
: TMVA_CNN_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: TMVA_DNN_CPU for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 16
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 8 Input = ( 1, 1, 256 ) Batch size = 100 Loss function = C
Layer 0 DENSE Layer: ( Input = 256 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 1 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 2 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 3 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 4 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 5 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 6 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 7 DENSE Layer: ( Input = 100 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
: Using 6400 events for training and 1600 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = inf
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.741502 0.70369 0.976663 0.0828721 7160.51 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.559157 0.629797 0.975368 0.0812953 7158.26 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.463003 0.53698 0.981586 0.0826189 7119.28 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.403483 0.482178 0.990321 0.0829113 7053.04 0
: 5 | 0.359644 0.591268 0.99239 0.0828643 7036.63 1
: 6 Minimum Test error found - save the configuration
: 6 | 0.33744 0.4334 0.969413 0.0817118 7209.63 0
: 7 | 0.333686 0.467198 0.97384 0.0808129 7166.64 1
: 8 | 0.313341 0.498473 0.970526 0.0808045 7193.26 2
: 9 | 0.296203 1.12699 0.966362 0.0807369 7226.53 3
: 10 | 0.280678 0.655753 0.974019 0.0810363 7166.99 4
: 11 | 0.256592 0.567719 0.979547 0.0808644 7121.53 5
: 12 | 0.250049 0.575484 1.00202 0.0811242 6949.77 6
:
: Elapsed time for training with 8000 events: 11.9 sec
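Training stopped after 12 epochs rather than MaxEpochs=20 because of early stopping. Assuming the "Conv. Steps" column counts consecutive epochs without a new minimum validation error, and that training stops once the counter exceeds ConvergenceSteps=5 (an interpretation of the option name, not something stated in the log), the table above is consistent:

```python
# Epochs at which the table reports "Minimum Test error found"
best_epochs = [1, 2, 3, 4, 6]
last_epoch = 12

# Consecutive epochs since the last improvement at the point training stopped
steps_without_improvement = last_epoch - max(best_epochs)
print(steps_without_improvement)  # 6, i.e. one more than ConvergenceSteps=5
```

The same pattern holds for the CNN training below: its last improvement is at epoch 13 and it stops at epoch 19.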
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_DNN_CPU : [dataset] : Evaluation of TMVA_DNN_CPU on training sample (8000 events)
: Elapsed time for evaluation of 8000 events: 0.402 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: TMVA_CNN_CPU for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 16
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 7 Input = ( 1, 16, 16 ) Batch size = 100 Loss function = C
Layer 0 CONV LAYER: ( W = 16 , H = 16 , D = 10 ) Filter ( W = 3 , H = 3 ) Output = ( 100 , 10 , 10 , 256 ) Activation Function = Relu
Layer 1 BATCH NORM Layer: Input/Output = ( 10 , 256 , 100 ) Norm dim = 10 axis = 1
Layer 2 CONV LAYER: ( W = 16 , H = 16 , D = 10 ) Filter ( W = 3 , H = 3 ) Output = ( 100 , 10 , 10 , 256 ) Activation Function = Relu
Layer 3 POOL Layer: ( W = 15 , H = 15 , D = 10 ) Filter ( W = 2 , H = 2 ) Output = ( 100 , 10 , 10 , 225 )
Layer 4 RESHAPE Layer Input = ( 10 , 15 , 15 ) Output = ( 1 , 100 , 2250 )
Layer 5 DENSE Layer: ( Input = 2250 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 6 DENSE Layer: ( Input = 100 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
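The layer shapes listed above follow the usual output-size formula for convolutions and pooling. A quick check of the `CONV|10|3|3|1|1|1|1` (10 filters, 3x3 kernel, stride 1x1, padding 1x1) and `MAXPOOL|2|2|1|1` (2x2 window, stride 1x1) shapes:

```python
def out_size(n, kernel, stride, pad):
    """Output size along one axis for a convolution or pooling layer."""
    return (n + 2 * pad - kernel) // stride + 1

# CONV with 3x3 kernel, stride 1, padding 1 on a 16x16 image: size is preserved
after_conv = out_size(16, 3, 1, 1)          # 16
# MAXPOOL with 2x2 window, stride 1, no padding
after_pool = out_size(after_conv, 2, 1, 0)  # 15
# RESHAPE|FLAT: 10 feature maps of 15x15 feed the dense layer
flat = 10 * after_pool ** 2                 # 2250
print(after_conv, after_pool, flat)         # 16 15 2250
```

This matches the POOL layer's 15x15 output and the 2250 inputs of the first DENSE layer.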
: Using 6400 events for training and 1600 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = inf
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | inf 0.694583 6.74097 0.513599 1027.72 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.66794 0.672299 6.74513 0.535105 1030.59 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.599537 0.557678 6.68896 0.521041 1037.63 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.513314 0.556397 6.5909 0.513977 1053.16 0
: 5 Minimum Test error found - save the configuration
: 5 | 0.497979 0.496244 6.64835 0.516408 1043.71 0
: 6 | 0.448669 0.511084 6.61446 0.513615 1049.03 1
: 7 Minimum Test error found - save the configuration
: 7 | 0.409384 0.493092 6.67361 0.520654 1040.15 0
: 8 Minimum Test error found - save the configuration
: 8 | 0.386409 0.445584 6.58477 0.516239 1054.62 0
: 9 Minimum Test error found - save the configuration
: 9 | 0.369449 0.444847 6.6813 0.522084 1039.09 0
: 10 Minimum Test error found - save the configuration
: 10 | 0.363185 0.432413 6.77983 0.526145 1023.4 0
: 11 | 0.337638 0.43644 6.86601 0.516499 1007.95 1
: 12 | 0.343887 0.562616 7.50285 0.520038 916.536 2
: 13 Minimum Test error found - save the configuration
: 13 | 0.336477 0.423921 7.89389 0.520016 867.929 0
: 14 | 0.336369 0.472096 7.76016 0.527342 884.856 1
: 15 | 0.321832 0.454919 7.58717 0.512282 904.607 2
: 16 | 0.325337 0.473019 7.52352 0.510723 912.617 3
: 17 | 0.30418 0.434876 7.52553 0.515975 913.04 4
: 18 | 0.275549 0.438255 7.62778 0.516272 899.949 5
: 19 | 0.273049 0.438667 7.65715 0.554248 901.04 6
:
: Elapsed time for training with 8000 events: 135 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_CNN_CPU : [dataset] : Evaluation of TMVA_CNN_CPU on training sample (8000 events)
: Elapsed time for evaluation of 8000 events: 2.62 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: PyKeras for Classification
:
:
: ================================================================
: H e l p f o r M V A m e t h o d [ PyKeras ] :
:
: Keras is a high-level API for the Theano and Tensorflow packages.
: This method wraps the training and predictions steps of the Keras
: Python package for TMVA, so that dataloading, preprocessing and
: evaluation can be done within the TMVA system. To use this Keras
: interface, you have to generate a model with Keras first. Then,
: this model can be loaded and trained in TMVA.
:
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
: Split TMVA training data in 6400 training events and 1600 validation events
: Training Model Summary
custom objects for loading model : {'optimizer': <class 'torch.optim.adam.Adam'>, 'criterion': BCELoss(), 'train_func': <function fit at 0x7f3694715280>, 'predict_func': <function predict at 0x7f3694715310>}
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape (Reshape) (None, 16, 16, 1) 0
conv2d (Conv2D) (None, 16, 16, 10) 100
batch_normalization (BatchNormalization) (None, 16, 16, 10) 40
conv2d_1 (Conv2D) (None, 16, 16, 10) 910
max_pooling2d (MaxPooling2D) (None, 15, 15, 10) 0
flatten (Flatten) (None, 2250) 0
dense (Dense) (None, 256) 576256
dense_1 (Dense) (None, 2) 514
=================================================================
Total params: 577,820
Trainable params: 577,800
Non-trainable params: 20
_________________________________________________________________
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
Epoch 1/20
1/64 [..............................] - ETA: 47s - loss: 0.8134 - accuracy: 0.5300␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
4/64 [>.............................] - ETA: 1s - loss: 2.1604 - accuracy: 0.5050 ␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
8/64 [==>...........................] - ETA: 0s - loss: 1.7434 - accuracy: 0.5063␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
12/64 [====>.........................] - ETA: 0s - loss: 1.4338 - accuracy: 0.5008␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
16/64 [======>.......................] - ETA: 0s - loss: 1.2778 - accuracy: 0.5038␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
20/64 [========>.....................] - ETA: 0s - loss: 1.1644 - accuracy: 0.5110␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
24/64 [==========>...................] - ETA: 0s - loss: 1.0880 - accuracy: 0.5213␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
28/64 [============>.................] - ETA: 0s - loss: 1.0355 - accuracy: 0.5186␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
32/64 [==============>...............] - ETA: 0s - loss: 0.9910 - accuracy: 0.5234␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
36/64 [===============>..............] - ETA: 0s - loss: 0.9584 - accuracy: 0.5233␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
40/64 [=================>............] - ETA: 0s - loss: 0.9324 - accuracy: 0.5217␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
44/64 [===================>..........] - ETA: 0s - loss: 0.9105 - accuracy: 0.5248␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
48/64 [=====================>........] - ETA: 0s - loss: 0.8916 - accuracy: 0.5258␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
52/64 [=======================>......] - ETA: 0s - loss: 0.8760 - accuracy: 0.5281␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
56/64 [=========================>....] - ETA: 0s - loss: 0.8625 - accuracy: 0.5288␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
60/64 [===========================>..] - ETA: 0s - loss: 0.8510 - accuracy: 0.5307␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/64 [==============================] - ETA: 0s - loss: 0.8406 - accuracy: 0.5319
Epoch 1: val_loss improved from inf to 0.70421, saving model to trained_model_cnn.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/64 [==============================] - 3s 30ms/step - loss: 0.8406 - accuracy: 0.5319 - val_loss: 0.7042 - val_accuracy: 0.4975
Epoch 2/20
1/64 [..............................] - ETA: 0s - loss: 0.6823 - accuracy: 0.5400␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
5/64 [=>............................] - ETA: 0s - loss: 0.6681 - accuracy: 0.6160␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
8/64 [==>...........................] - ETA: 0s - loss: 0.6737 - accuracy: 0.5938␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
12/64 [====>.........................] - ETA: 0s - loss: 0.6711 - accuracy: 0.6050␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
16/64 [======>.......................] - ETA: 0s - loss: 0.6679 - accuracy: 0.6181␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
20/64 [========>.....................] - ETA: 0s - loss: 0.6674 - accuracy: 0.6135␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
24/64 [==========>...................] - ETA: 0s - loss: 0.6684 - accuracy: 0.6100␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
28/64 [============>.................] - ETA: 0s - loss: 0.6701 - accuracy: 0.6057␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
32/64 [==============>...............] - ETA: 0s - loss: 0.6699 - accuracy: 0.6019␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
36/64 [===============>..............] - ETA: 0s - loss: 0.6693 - accuracy: 0.5994␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
40/64 [=================>............] - ETA: 0s - loss: 0.6691 - accuracy: 0.5982␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
44/64 [===================>..........] - ETA: 0s - loss: 0.6692 - accuracy: 0.5957␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
48/64 [=====================>........] - ETA: 0s - loss: 0.6676 - accuracy: 0.5971␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
52/64 [=======================>......] - ETA: 0s - loss: 0.6681 - accuracy: 0.5954␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
56/64 [=========================>....] - ETA: 0s - loss: 0.6664 - accuracy: 0.5986␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
60/64 [===========================>..] - ETA: 0s - loss: 0.6659 - accuracy: 0.5965␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/64 [==============================] - ETA: 0s - loss: 0.6636 - accuracy: 0.6005
Epoch 2: val_loss did not improve from 0.70421
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/64 [==============================] - 1s 16ms/step - loss: 0.6636 - accuracy: 0.6005 - val_loss: 0.7409 - val_accuracy: 0.4969
Epoch 3/20
1/64 [..............................] - ETA: 0s - loss: 0.6359 - accuracy: 0.5700␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
5/64 [=>............................] - ETA: 0s - loss: 0.6553 - accuracy: 0.5940␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
9/64 [===>..........................] - ETA: 0s - loss: 0.6561 - accuracy: 0.5844␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
13/64 [=====>........................] - ETA: 0s - loss: 0.6586 - accuracy: 0.5815␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
17/64 [======>.......................] - ETA: 0s - loss: 0.6548 - accuracy: 0.5965␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
21/64 [========>.....................] - ETA: 0s - loss: 0.6507 - accuracy: 0.5995␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
24/64 [==========>...................] - ETA: 0s - loss: 0.6480 - accuracy: 0.6054␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
28/64 [============>.................] - ETA: 0s - loss: 0.6420 - accuracy: 0.6193␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
32/64 [==============>...............] - ETA: 0s - loss: 0.6383 - accuracy: 0.6272␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
35/64 [===============>..............] - ETA: 0s - loss: 0.6357 - accuracy: 0.6320␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
39/64 [=================>............] - ETA: 0s - loss: 0.6356 - accuracy: 0.6356␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
43/64 [===================>..........] - ETA: 0s - loss: 0.6339 - accuracy: 0.6393␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
47/64 [=====================>........] - ETA: 0s - loss: 0.6307 - accuracy: 0.6457␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
51/64 [======================>.......] - ETA: 0s - loss: 0.6297 - accuracy: 0.6486␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
55/64 [========================>.....] - ETA: 0s - loss: 0.6275 - accuracy: 0.6525␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
59/64 [==========================>...] - ETA: 0s - loss: 0.6255 - accuracy: 0.6559␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
63/64 [============================>.] - ETA: 0s - loss: 0.6240 - accuracy: 0.6590
Epoch 3: val_loss improved from 0.70421 to 0.59113, saving model to trained_model_cnn.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/64 [==============================] - 1s 17ms/step - loss: 0.6238 - accuracy: 0.6598 - val_loss: 0.5911 - val_accuracy: 0.7063
Epoch 4/20
1/64 [..............................] - ETA: 0s - loss: 0.5559 - accuracy: 0.7800␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
5/64 [=>............................] - ETA: 0s - loss: 0.5461 - accuracy: 0.7620␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
9/64 [===>..........................] - ETA: 0s - loss: 0.5556 - accuracy: 0.7344␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
13/64 [=====>........................] - ETA: 0s - loss: 0.5726 - accuracy: 0.7123␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
17/64 [======>.......................] - ETA: 0s - loss: 0.5736 - accuracy: 0.7082␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
21/64 [========>.....................] - ETA: 0s - loss: 0.5683 - accuracy: 0.7152␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
26/64 [===========>..................] - ETA: 0s - loss: 0.5633 - accuracy: 0.7177␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
31/64 [=============>................] - ETA: 0s - loss: 0.5611 - accuracy: 0.7165␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
36/64 [===============>..............] - ETA: 0s - loss: 0.5579 - accuracy: 0.7228␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
40/64 [=================>............] - ETA: 0s - loss: 0.5541 - accuracy: 0.7253␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
44/64 [===================>..........] - ETA: 0s - loss: 0.5503 - accuracy: 0.7270␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
48/64 [=====================>........] - ETA: 0s - loss: 0.5463 - accuracy: 0.7315␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
52/64 [=======================>......] - ETA: 0s - loss: 0.5441 - accuracy: 0.7344␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
57/64 [=========================>....] - ETA: 0s - loss: 0.5473 - accuracy: 0.7302␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
62/64 [============================>.] - ETA: 0s - loss: 0.5457 - accuracy: 0.7303
Epoch 4: val_loss improved from 0.59113 to 0.55636, saving model to trained_model_cnn.h5
␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
64/64 [==============================] - 1s 15ms/step - loss: 0.5463 - accuracy: 0.7287 - val_loss: 0.5564 - val_accuracy: 0.7212
Epoch 5/20
1/64 [..............................] - ETA: 0s - loss: 0.4792 - accuracy: 0.7900␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
5/64 [=>............................] - ETA: 0s - loss: 0.4840 - accuracy: 0.7620␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈␈
Epoch 5: val_loss improved from 0.55636 to 0.50403, saving model to trained_model_cnn.h5
64/64 [==============================] - 1s 15ms/step - loss: 0.4703 - accuracy: 0.7789 - val_loss: 0.5040 - val_accuracy: 0.7631
Epoch 6/20
Epoch 6: val_loss improved from 0.50403 to 0.43741, saving model to trained_model_cnn.h5
64/64 [==============================] - 1s 15ms/step - loss: 0.4157 - accuracy: 0.8128 - val_loss: 0.4374 - val_accuracy: 0.7987
Epoch 7/20
Epoch 7: val_loss did not improve from 0.43741
64/64 [==============================] - 1s 13ms/step - loss: 0.3887 - accuracy: 0.8295 - val_loss: 0.4517 - val_accuracy: 0.7781
Epoch 8/20
Epoch 8: val_loss improved from 0.43741 to 0.42332, saving model to trained_model_cnn.h5
64/64 [==============================] - 1s 14ms/step - loss: 0.3733 - accuracy: 0.8373 - val_loss: 0.4233 - val_accuracy: 0.7981
Epoch 9/20
Epoch 9: val_loss did not improve from 0.42332
64/64 [==============================] - 1s 14ms/step - loss: 0.3506 - accuracy: 0.8431 - val_loss: 0.5461 - val_accuracy: 0.7506
Epoch 10/20
Epoch 10: val_loss improved from 0.42332 to 0.42027, saving model to trained_model_cnn.h5
64/64 [==============================] - 1s 14ms/step - loss: 0.3472 - accuracy: 0.8527 - val_loss: 0.4203 - val_accuracy: 0.8019
Epoch 11/20
Epoch 11: val_loss did not improve from 0.42027
64/64 [==============================] - 1s 14ms/step - loss: 0.3401 - accuracy: 0.8512 - val_loss: 0.4671 - val_accuracy: 0.7856
Epoch 12/20
Epoch 12: val_loss did not improve from 0.42027
64/64 [==============================] - 1s 15ms/step - loss: 0.3676 - accuracy: 0.8367 - val_loss: 0.4404 - val_accuracy: 0.7937
Epoch 13/20
Epoch 13: val_loss did not improve from 0.42027
64/64 [==============================] - 1s 15ms/step - loss: 0.3305 - accuracy: 0.8573 - val_loss: 0.4318 - val_accuracy: 0.8094
Epoch 14/20
Epoch 14: val_loss did not improve from 0.42027
64/64 [==============================] - 1s 16ms/step - loss: 0.3068 - accuracy: 0.8686 - val_loss: 0.4362 - val_accuracy: 0.8006
Epoch 15/20
Epoch 15: val_loss did not improve from 0.42027
64/64 [==============================] - 1s 15ms/step - loss: 0.3116 - accuracy: 0.8670 - val_loss: 0.4247 - val_accuracy: 0.8000
Epoch 16/20
Epoch 16: val_loss did not improve from 0.42027
64/64 [==============================] - 1s 14ms/step - loss: 0.3090 - accuracy: 0.8677 - val_loss: 0.4222 - val_accuracy: 0.8138
Epoch 17/20
Epoch 17: val_loss improved from 0.42027 to 0.41981, saving model to trained_model_cnn.h5
64/64 [==============================] - 1s 14ms/step - loss: 0.2808 - accuracy: 0.8836 - val_loss: 0.4198 - val_accuracy: 0.8069
Epoch 18/20
Epoch 18: val_loss did not improve from 0.41981
64/64 [==============================] - 1s 14ms/step - loss: 0.2802 - accuracy: 0.8839 - val_loss: 0.4290 - val_accuracy: 0.8031
Epoch 19/20
Epoch 19: val_loss did not improve from 0.41981
64/64 [==============================] - 1s 13ms/step - loss: 0.2993 - accuracy: 0.8747 - val_loss: 0.4422 - val_accuracy: 0.7969
Epoch 20/20
Epoch 20: val_loss did not improve from 0.41981
64/64 [==============================] - 1s 14ms/step - loss: 0.2800 - accuracy: 0.8819 - val_loss: 0.4603 - val_accuracy: 0.7969
: Getting training history for item:0 name = 'loss'
: Getting training history for item:1 name = 'accuracy'
: Getting training history for item:2 name = 'val_loss'
: Getting training history for item:3 name = 'val_accuracy'
: Elapsed time for training with 8000 events: 22.9 sec
PyKeras : [dataset] : Evaluation of PyKeras on training sample (8000 events)
250/250 [==============================] - 1s 3ms/step
: Elapsed time for evaluation of 8000 events: 1.1 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_PyKeras.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_PyKeras.class.C
Factory : Training finished
:
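The "Epoch 17: val_loss improved ... saving model to trained_model_cnn.h5" lines in the training log above are produced by a Keras `ModelCheckpoint` callback with `save_best_only=True`. A minimal sketch of how such a model and callback could be wired up (the layer widths here are illustrative assumptions, not the tutorial's exact architecture):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Reshape, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

# The 16x16 toy images arrive as flat vectors of 256 input variables
model = Sequential([
    Input(shape=(256,)),
    Reshape((16, 16, 1)),            # restore image shape for the convolutional layers
    Conv2D(10, kernel_size=3, padding="same", activation="relu"),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),  # signal/background probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Writes trained_model_cnn.h5 only when val_loss improves, which is what
# produces the "val_loss improved ... saving model" lines in the log
checkpoint = ModelCheckpoint("trained_model_cnn.h5", monitor="val_loss",
                             save_best_only=True, verbose=1)

probs = model.predict(np.random.rand(8, 256).astype("float32"), verbose=0)
```

During training the callback would be passed to the fit call, e.g. `model.fit(x, y, validation_split=0.2, callbacks=[checkpoint])`.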
Factory : Train method: PyTorch for Classification
:
:
: ================================================================
: H e l p   f o r   M V A   m e t h o d   [ PyTorch ] :
:
: PyTorch is a scientific computing package supporting
: automatic differentiation. This method wraps the training
: and prediction steps of the PyTorch Python package for
: TMVA, so that data loading, preprocessing and evaluation
: can be done within the TMVA system. To use this PyTorch
: interface, you need to generate a model with PyTorch first.
: Then, this model can be loaded and trained in TMVA.
:
:
: <Suppress this message by specifying "!H" in the booking option>
: ================================================================
:
: Split TMVA training data in 6400 training events and 1600 validation events
: Print Training Model Architecture
: Option SaveBestOnly: Only model weights with smallest validation loss will be stored
RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Reshape)
(1): RecursiveScriptModule(original_name=Conv2d)
(2): RecursiveScriptModule(original_name=ReLU)
(3): RecursiveScriptModule(original_name=BatchNorm2d)
(4): RecursiveScriptModule(original_name=Conv2d)
(5): RecursiveScriptModule(original_name=ReLU)
(6): RecursiveScriptModule(original_name=MaxPool2d)
(7): RecursiveScriptModule(original_name=Flatten)
(8): RecursiveScriptModule(original_name=Linear)
(9): RecursiveScriptModule(original_name=ReLU)
(10): RecursiveScriptModule(original_name=Linear)
(11): RecursiveScriptModule(original_name=Sigmoid)
)
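The `RecursiveScriptModule` printout above is a TorchScript dump of an `nn.Sequential` model. A sketch of a model matching that layer list (the custom `Reshape` module and the exact channel/width choices are assumptions; the tutorial generates its model in `PyTorch_Generate_CNN_Model.py`):

```python
import torch
import torch.nn as nn

class Reshape(nn.Module):
    """Turn the flat 256-variable input into a 1-channel 16x16 image."""
    def forward(self, x):
        return x.view(-1, 1, 16, 16)

# Mirrors the printed module list: Reshape, Conv2d, ReLU, BatchNorm2d,
# Conv2d, ReLU, MaxPool2d, Flatten, Linear, ReLU, Linear, Sigmoid
model = nn.Sequential(
    Reshape(),
    nn.Conv2d(1, 10, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.BatchNorm2d(10),
    nn.Conv2d(10, 10, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),   # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(10 * 8 * 8, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
)

# TMVA's PyTorch interface loads a serialized TorchScript model from file,
# which is why the log shows RecursiveScriptModule rather than plain modules
scripted = torch.jit.script(model)
# scripted.save("PyTorchModelCNN.pt")  # filename is illustrative
```

Scripting the model is what allows TMVA to reload it later from a `.pt` file, as seen further down where `PyTorchTrainedModelCNN.pt` is loaded for testing.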
[1, 4] train loss: 1.512
[1, 8] train loss: 0.751
[1, 12] train loss: 0.698
[1, 16] train loss: 0.696
[1, 20] train loss: 0.690
[1, 24] train loss: 0.695
[1, 28] train loss: 0.695
[1, 32] train loss: 0.693
[1, 36] train loss: 0.690
[1, 40] train loss: 0.689
[1, 44] train loss: 0.701
[1, 48] train loss: 0.692
[1, 52] train loss: 0.689
[1, 56] train loss: 0.692
[1, 60] train loss: 0.685
[1, 64] train loss: 0.699
[1] val loss: 0.689
[2, 4] train loss: 0.694
[2, 8] train loss: 0.693
[2, 12] train loss: 0.686
[2, 16] train loss: 0.688
[2, 20] train loss: 0.688
[2, 24] train loss: 0.687
[2, 28] train loss: 0.682
[2, 32] train loss: 0.675
[2, 36] train loss: 0.658
[2, 40] train loss: 0.683
[2, 44] train loss: 0.678
[2, 48] train loss: 0.690
[2, 52] train loss: 0.653
[2, 56] train loss: 0.627
[2, 60] train loss: 0.609
[2, 64] train loss: 0.622
[2] val loss: 0.620
[3, 4] train loss: 0.617
[3, 8] train loss: 0.609
[3, 12] train loss: 0.556
[3, 16] train loss: 0.549
[3, 20] train loss: 0.510
[3, 24] train loss: 0.489
[3, 28] train loss: 0.506
[3, 32] train loss: 0.458
[3, 36] train loss: 0.452
[3, 40] train loss: 0.479
[3, 44] train loss: 0.475
[3, 48] train loss: 0.449
[3, 52] train loss: 0.484
[3, 56] train loss: 0.421
[3, 60] train loss: 0.393
[3, 64] train loss: 0.428
[3] val loss: 0.511
[4, 4] train loss: 0.508
[4, 8] train loss: 0.461
[4, 12] train loss: 0.424
[4, 16] train loss: 0.441
[4, 20] train loss: 0.450
[4, 24] train loss: 0.465
[4, 28] train loss: 0.459
[4, 32] train loss: 0.451
[4, 36] train loss: 0.482
[4, 40] train loss: 0.451
[4, 44] train loss: 0.509
[4, 48] train loss: 0.544
[4, 52] train loss: 0.593
[4, 56] train loss: 0.551
[4, 60] train loss: 0.558
[4, 64] train loss: 0.517
[4] val loss: 0.478
[5, 4] train loss: 0.507
[5, 8] train loss: 0.462
[5, 12] train loss: 0.424
[5, 16] train loss: 0.421
[5, 20] train loss: 0.461
[5, 24] train loss: 0.436
[5, 28] train loss: 0.415
[5, 32] train loss: 0.416
[5, 36] train loss: 0.411
[5, 40] train loss: 0.426
[5, 44] train loss: 0.444
[5, 48] train loss: 0.452
[5, 52] train loss: 0.522
[5, 56] train loss: 0.434
[5, 60] train loss: 0.453
[5, 64] train loss: 0.459
[5] val loss: 0.577
[6, 4] train loss: 0.530
[6, 8] train loss: 0.471
[6, 12] train loss: 0.402
[6, 16] train loss: 0.416
[6, 20] train loss: 0.440
[6, 24] train loss: 0.411
[6, 28] train loss: 0.382
[6, 32] train loss: 0.398
[6, 36] train loss: 0.367
[6, 40] train loss: 0.385
[6, 44] train loss: 0.406
[6, 48] train loss: 0.422
[6, 52] train loss: 0.497
[6, 56] train loss: 0.400
[6, 60] train loss: 0.407
[6, 64] train loss: 0.409
[6] val loss: 0.478
[7, 4] train loss: 0.505
[7, 8] train loss: 0.453
[7, 12] train loss: 0.384
[7, 16] train loss: 0.404
[7, 20] train loss: 0.430
[7, 24] train loss: 0.404
[7, 28] train loss: 0.366
[7, 32] train loss: 0.380
[7, 36] train loss: 0.358
[7, 40] train loss: 0.376
[7, 44] train loss: 0.394
[7, 48] train loss: 0.414
[7, 52] train loss: 0.477
[7, 56] train loss: 0.395
[7, 60] train loss: 0.413
[7, 64] train loss: 0.413
[7] val loss: 0.505
[8, 4] train loss: 0.516
[8, 8] train loss: 0.454
[8, 12] train loss: 0.373
[8, 16] train loss: 0.397
[8, 20] train loss: 0.423
[8, 24] train loss: 0.399
[8, 28] train loss: 0.357
[8, 32] train loss: 0.361
[8, 36] train loss: 0.350
[8, 40] train loss: 0.366
[8, 44] train loss: 0.390
[8, 48] train loss: 0.411
[8, 52] train loss: 0.474
[8, 56] train loss: 0.384
[8, 60] train loss: 0.389
[8, 64] train loss: 0.395
[8] val loss: 0.495
[9, 4] train loss: 0.497
[9, 8] train loss: 0.424
[9, 12] train loss: 0.365
[9, 16] train loss: 0.379
[9, 20] train loss: 0.419
[9, 24] train loss: 0.396
[9, 28] train loss: 0.338
[9, 32] train loss: 0.348
[9, 36] train loss: 0.349
[9, 40] train loss: 0.350
[9, 44] train loss: 0.379
[9, 48] train loss: 0.405
[9, 52] train loss: 0.460
[9, 56] train loss: 0.375
[9, 60] train loss: 0.379
[9, 64] train loss: 0.392
[9] val loss: 0.491
[10, 4] train loss: 0.491
[10, 8] train loss: 0.420
[10, 12] train loss: 0.362
[10, 16] train loss: 0.375
[10, 20] train loss: 0.408
[10, 24] train loss: 0.394
[10, 28] train loss: 0.337
[10, 32] train loss: 0.339
[10, 36] train loss: 0.340
[10, 40] train loss: 0.348
[10, 44] train loss: 0.375
[10, 48] train loss: 0.401
[10, 52] train loss: 0.462
[10, 56] train loss: 0.379
[10, 60] train loss: 0.385
[10, 64] train loss: 0.397
[10] val loss: 0.528
[11, 4] train loss: 0.505
[11, 8] train loss: 0.436
[11, 12] train loss: 0.372
[11, 16] train loss: 0.382
[11, 20] train loss: 0.411
[11, 24] train loss: 0.398
[11, 28] train loss: 0.341
[11, 32] train loss: 0.336
[11, 36] train loss: 0.335
[11, 40] train loss: 0.340
[11, 44] train loss: 0.361
[11, 48] train loss: 0.392
[11, 52] train loss: 0.442
[11, 56] train loss: 0.357
[11, 60] train loss: 0.360
[11, 64] train loss: 0.377
[11] val loss: 0.517
[12, 4] train loss: 0.483
[12, 8] train loss: 0.413
[12, 12] train loss: 0.355
[12, 16] train loss: 0.368
[12, 20] train loss: 0.407
[12, 24] train loss: 0.397
[12, 28] train loss: 0.333
[12, 32] train loss: 0.327
[12, 36] train loss: 0.332
[12, 40] train loss: 0.335
[12, 44] train loss: 0.351
[12, 48] train loss: 0.376
[12, 52] train loss: 0.431
[12, 56] train loss: 0.344
[12, 60] train loss: 0.363
[12, 64] train loss: 0.375
[12] val loss: 0.505
[13, 4] train loss: 0.463
[13, 8] train loss: 0.390
[13, 12] train loss: 0.357
[13, 16] train loss: 0.349
[13, 20] train loss: 0.399
[13, 24] train loss: 0.399
[13, 28] train loss: 0.333
[13, 32] train loss: 0.326
[13, 36] train loss: 0.325
[13, 40] train loss: 0.330
[13, 44] train loss: 0.349
[13, 48] train loss: 0.381
[13, 52] train loss: 0.427
[13, 56] train loss: 0.351
[13, 60] train loss: 0.371
[13, 64] train loss: 0.382
[13] val loss: 0.544
[14, 4] train loss: 0.469
[14, 8] train loss: 0.410
[14, 12] train loss: 0.383
[14, 16] train loss: 0.364
[14, 20] train loss: 0.394
[14, 24] train loss: 0.395
[14, 28] train loss: 0.334
[14, 32] train loss: 0.331
[14, 36] train loss: 0.321
[14, 40] train loss: 0.332
[14, 44] train loss: 0.335
[14, 48] train loss: 0.373
[14, 52] train loss: 0.416
[14, 56] train loss: 0.324
[14, 60] train loss: 0.354
[14, 64] train loss: 0.355
[14] val loss: 0.527
[15, 4] train loss: 0.472
[15, 8] train loss: 0.378
[15, 12] train loss: 0.365
[15, 16] train loss: 0.359
[15, 20] train loss: 0.371
[15, 24] train loss: 0.379
[15, 28] train loss: 0.321
[15, 32] train loss: 0.310
[15, 36] train loss: 0.314
[15, 40] train loss: 0.323
[15, 44] train loss: 0.335
[15, 48] train loss: 0.370
[15, 52] train loss: 0.418
[15, 56] train loss: 0.324
[15, 60] train loss: 0.365
[15, 64] train loss: 0.359
[15] val loss: 0.522
[16, 4] train loss: 0.462
[16, 8] train loss: 0.380
[16, 12] train loss: 0.366
[16, 16] train loss: 0.338
[16, 20] train loss: 0.374
[16, 24] train loss: 0.391
[16, 28] train loss: 0.327
[16, 32] train loss: 0.322
[16, 36] train loss: 0.302
[16, 40] train loss: 0.315
[16, 44] train loss: 0.328
[16, 48] train loss: 0.370
[16, 52] train loss: 0.407
[16, 56] train loss: 0.314
[16, 60] train loss: 0.341
[16, 64] train loss: 0.346
[16] val loss: 0.494
[17, 4] train loss: 0.437
[17, 8] train loss: 0.360
[17, 12] train loss: 0.351
[17, 16] train loss: 0.347
[17, 20] train loss: 0.368
[17, 24] train loss: 0.382
[17, 28] train loss: 0.319
[17, 32] train loss: 0.318
[17, 36] train loss: 0.303
[17, 40] train loss: 0.310
[17, 44] train loss: 0.328
[17, 48] train loss: 0.361
[17, 52] train loss: 0.399
[17, 56] train loss: 0.320
[17, 60] train loss: 0.361
[17, 64] train loss: 0.362
[17] val loss: 0.508
: Elapsed time for training with 8000 events: 363 sec
PyTorch : [dataset] : Evaluation of PyTorch on training sample (8000 events)
: Elapsed time for evaluation of 8000 events: 5.66 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_PyTorch.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_PyTorch.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
BDT : Ranking result (top variable is best ranked)
: --------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------
: 1 : vars : 1.092e-02
: 2 : vars : 1.048e-02
: 3 : vars : 1.039e-02
: 4 : vars : 1.038e-02
: 5 : vars : 1.031e-02
: 6 : vars : 1.029e-02
: 7 : vars : 1.029e-02
: 8 : vars : 9.905e-03
: 9 : vars : 9.386e-03
: 10 : vars : 9.374e-03
: 11 : vars : 9.043e-03
: 12 : vars : 8.945e-03
: 13 : vars : 8.781e-03
: 14 : vars : 8.734e-03
: 15 : vars : 8.399e-03
: 16 : vars : 8.394e-03
: 17 : vars : 8.298e-03
: 18 : vars : 8.208e-03
: 19 : vars : 8.082e-03
: 20 : vars : 8.047e-03
: 21 : vars : 8.009e-03
: 22 : vars : 7.984e-03
: 23 : vars : 7.786e-03
: 24 : vars : 7.762e-03
: 25 : vars : 7.649e-03
: 26 : vars : 7.630e-03
: 27 : vars : 7.584e-03
: 28 : vars : 7.575e-03
: 29 : vars : 7.501e-03
: 30 : vars : 7.384e-03
: 31 : vars : 7.361e-03
: 32 : vars : 7.326e-03
: 33 : vars : 7.263e-03
: 34 : vars : 7.228e-03
: 35 : vars : 7.154e-03
: 36 : vars : 7.102e-03
: 37 : vars : 7.049e-03
: 38 : vars : 6.914e-03
: 39 : vars : 6.840e-03
: 40 : vars : 6.832e-03
: 41 : vars : 6.788e-03
: 42 : vars : 6.776e-03
: 43 : vars : 6.693e-03
: 44 : vars : 6.475e-03
: 45 : vars : 6.459e-03
: 46 : vars : 6.390e-03
: 47 : vars : 6.390e-03
: 48 : vars : 6.310e-03
: 49 : vars : 6.258e-03
: 50 : vars : 6.183e-03
: 51 : vars : 6.158e-03
: 52 : vars : 6.151e-03
: 53 : vars : 6.137e-03
: 54 : vars : 6.128e-03
: 55 : vars : 6.109e-03
: 56 : vars : 6.070e-03
: 57 : vars : 6.065e-03
: 58 : vars : 5.950e-03
: 59 : vars : 5.925e-03
: 60 : vars : 5.875e-03
: 61 : vars : 5.845e-03
: 62 : vars : 5.830e-03
: 63 : vars : 5.817e-03
: 64 : vars : 5.771e-03
: 65 : vars : 5.745e-03
: 66 : vars : 5.710e-03
: 67 : vars : 5.705e-03
: 68 : vars : 5.661e-03
: 69 : vars : 5.575e-03
: 70 : vars : 5.539e-03
: 71 : vars : 5.466e-03
: 72 : vars : 5.444e-03
: 73 : vars : 5.439e-03
: 74 : vars : 5.401e-03
: 75 : vars : 5.372e-03
: 76 : vars : 5.291e-03
: 77 : vars : 5.260e-03
: 78 : vars : 5.229e-03
: 79 : vars : 5.215e-03
: 80 : vars : 5.073e-03
: 81 : vars : 5.067e-03
: 82 : vars : 5.040e-03
: 83 : vars : 5.039e-03
: 84 : vars : 4.972e-03
: 85 : vars : 4.958e-03
: 86 : vars : 4.957e-03
: 87 : vars : 4.916e-03
: 88 : vars : 4.911e-03
: 89 : vars : 4.891e-03
: 90 : vars : 4.771e-03
: 91 : vars : 4.742e-03
: 92 : vars : 4.734e-03
: 93 : vars : 4.725e-03
: 94 : vars : 4.713e-03
: 95 : vars : 4.710e-03
: 96 : vars : 4.662e-03
: 97 : vars : 4.561e-03
: 98 : vars : 4.560e-03
: 99 : vars : 4.503e-03
: 100 : vars : 4.482e-03
: 101 : vars : 4.480e-03
: 102 : vars : 4.476e-03
: 103 : vars : 4.448e-03
: 104 : vars : 4.425e-03
: 105 : vars : 4.410e-03
: 106 : vars : 4.405e-03
: 107 : vars : 4.372e-03
: 108 : vars : 4.364e-03
: 109 : vars : 4.356e-03
: 110 : vars : 4.354e-03
: 111 : vars : 4.303e-03
: 112 : vars : 4.299e-03
: 113 : vars : 4.245e-03
: 114 : vars : 4.146e-03
: 115 : vars : 4.137e-03
: 116 : vars : 4.135e-03
: 117 : vars : 4.134e-03
: 118 : vars : 4.101e-03
: 119 : vars : 4.011e-03
: 120 : vars : 4.003e-03
: 121 : vars : 3.999e-03
: 122 : vars : 3.927e-03
: 123 : vars : 3.914e-03
: 124 : vars : 3.902e-03
: 125 : vars : 3.857e-03
: 126 : vars : 3.849e-03
: 127 : vars : 3.849e-03
: 128 : vars : 3.830e-03
: 129 : vars : 3.827e-03
: 130 : vars : 3.823e-03
: 131 : vars : 3.819e-03
: 132 : vars : 3.765e-03
: 133 : vars : 3.758e-03
: 134 : vars : 3.754e-03
: 135 : vars : 3.705e-03
: 136 : vars : 3.685e-03
: 137 : vars : 3.586e-03
: 138 : vars : 3.586e-03
: 139 : vars : 3.577e-03
: 140 : vars : 3.539e-03
: 141 : vars : 3.517e-03
: 142 : vars : 3.515e-03
: 143 : vars : 3.498e-03
: 144 : vars : 3.475e-03
: 145 : vars : 3.396e-03
: 146 : vars : 3.308e-03
: 147 : vars : 3.286e-03
: 148 : vars : 3.280e-03
: 149 : vars : 3.173e-03
: 150 : vars : 3.102e-03
: 151 : vars : 3.098e-03
: 152 : vars : 3.072e-03
: 153 : vars : 2.998e-03
: 154 : vars : 2.953e-03
: 155 : vars : 2.914e-03
: 156 : vars : 2.896e-03
: 157 : vars : 2.883e-03
: 158 : vars : 2.864e-03
: 159 : vars : 2.823e-03
: 160 : vars : 2.784e-03
: 161 : vars : 2.763e-03
: 162 : vars : 2.756e-03
: 163 : vars : 2.712e-03
: 164 : vars : 2.712e-03
: 165 : vars : 2.679e-03
: 166 : vars : 2.675e-03
: 167 : vars : 2.605e-03
: 168 : vars : 2.599e-03
: 169 : vars : 2.590e-03
: 170 : vars : 2.579e-03
: 171 : vars : 2.544e-03
: 172 : vars : 2.499e-03
: 173 : vars : 2.493e-03
: 174 : vars : 2.490e-03
: 175 : vars : 2.421e-03
: 176 : vars : 2.408e-03
: 177 : vars : 2.402e-03
: 178 : vars : 2.396e-03
: 179 : vars : 2.325e-03
: 180 : vars : 2.265e-03
: 181 : vars : 2.226e-03
: 182 : vars : 2.222e-03
: 183 : vars : 2.215e-03
: 184 : vars : 2.198e-03
: 185 : vars : 2.178e-03
: 186 : vars : 2.175e-03
: 187 : vars : 2.169e-03
: 188 : vars : 2.167e-03
: 189 : vars : 2.137e-03
: 190 : vars : 2.063e-03
: 191 : vars : 1.997e-03
: 192 : vars : 1.938e-03
: 193 : vars : 1.866e-03
: 194 : vars : 1.835e-03
: 195 : vars : 1.807e-03
: 196 : vars : 1.804e-03
: 197 : vars : 1.783e-03
: 198 : vars : 1.743e-03
: 199 : vars : 1.701e-03
: 200 : vars : 1.647e-03
: 201 : vars : 1.625e-03
: 202 : vars : 1.617e-03
: 203 : vars : 1.590e-03
: 204 : vars : 1.576e-03
: 205 : vars : 1.575e-03
: 206 : vars : 1.553e-03
: 207 : vars : 1.513e-03
: 208 : vars : 1.452e-03
: 209 : vars : 1.355e-03
: 210 : vars : 1.218e-03
: 211 : vars : 1.136e-03
: 212 : vars : 9.748e-04
: 213 : vars : 9.593e-04
: 214 : vars : 8.988e-04
: 215 : vars : 0.000e+00
: 216 : vars : 0.000e+00
: 217 : vars : 0.000e+00
: 218 : vars : 0.000e+00
: 219 : vars : 0.000e+00
: 220 : vars : 0.000e+00
: 221 : vars : 0.000e+00
: 222 : vars : 0.000e+00
: 223 : vars : 0.000e+00
: 224 : vars : 0.000e+00
: 225 : vars : 0.000e+00
: 226 : vars : 0.000e+00
: 227 : vars : 0.000e+00
: 228 : vars : 0.000e+00
: 229 : vars : 0.000e+00
: 230 : vars : 0.000e+00
: 231 : vars : 0.000e+00
: 232 : vars : 0.000e+00
: 233 : vars : 0.000e+00
: 234 : vars : 0.000e+00
: 235 : vars : 0.000e+00
: 236 : vars : 0.000e+00
: 237 : vars : 0.000e+00
: 238 : vars : 0.000e+00
: 239 : vars : 0.000e+00
: 240 : vars : 0.000e+00
: 241 : vars : 0.000e+00
: 242 : vars : 0.000e+00
: 243 : vars : 0.000e+00
: 244 : vars : 0.000e+00
: 245 : vars : 0.000e+00
: 246 : vars : 0.000e+00
: 247 : vars : 0.000e+00
: 248 : vars : 0.000e+00
: 249 : vars : 0.000e+00
: 250 : vars : 0.000e+00
: 251 : vars : 0.000e+00
: 252 : vars : 0.000e+00
: 253 : vars : 0.000e+00
: 254 : vars : 0.000e+00
: 255 : vars : 0.000e+00
: 256 : vars : 0.000e+00
: --------------------------------------
: No variable ranking supplied by classifier: TMVA_DNN_CPU
: No variable ranking supplied by classifier: TMVA_CNN_CPU
: No variable ranking supplied by classifier: PyKeras
: No variable ranking supplied by classifier: PyTorch
TH1.Print Name = TrainingHistory_TMVA_DNN_CPU_trainingError, Entries= 0, Total sum= 4.59478
TH1.Print Name = TrainingHistory_TMVA_DNN_CPU_valError, Entries= 0, Total sum= 7.26893
TH1.Print Name = TrainingHistory_TMVA_CNN_CPU_trainingError, Entries= 0, Total sum= inf
TH1.Print Name = TrainingHistory_TMVA_CNN_CPU_valError, Entries= 0, Total sum= 9.43903
TH1.Print Name = TrainingHistory_PyKeras_'accuracy', Entries= 0, Total sum= 16.148
TH1.Print Name = TrainingHistory_PyKeras_'loss', Entries= 0, Total sum= 8.12598
TH1.Print Name = TrainingHistory_PyKeras_'val_accuracy', Entries= 0, Total sum= 15.1194
TH1.Print Name = TrainingHistory_PyKeras_'val_loss', Entries= 0, Total sum= 9.74918
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_PyKeras.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_PyTorch.weights.xml
Factory : Test all methods
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0417 sec
Factory : Test method: TMVA_DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TMVA_DNN_CPU : [dataset] : Evaluation of TMVA_DNN_CPU on testing sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.0891 sec
Factory : Test method: TMVA_CNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
TMVA_CNN_CPU : [dataset] : Evaluation of TMVA_CNN_CPU on testing sample (2000 events)
: Elapsed time for evaluation of 2000 events: 0.622 sec
Factory : Test method: PyKeras for Classification performance
:
: Setting up tf.keras
: Using TensorFlow version 2
: Use Keras version from TensorFlow : tf.keras
: Applying GPU option: gpu_options.allow_growth=True
: Loading Keras Model
: Loaded model from file: trained_model_cnn.h5
PyKeras : [dataset] : Evaluation of PyKeras on testing sample (2000 events)
[18, 4] train loss: 0.444
[18, 8] train loss: 0.358
[18, 12] train loss: 0.352
[18, 16] train loss: 0.338
[18, 20] train loss: 0.375
[18, 24] train loss: 0.389
[18, 28] train loss: 0.327
[18, 32] train loss: 0.327
[18, 36] train loss: 0.300
[18, 40] train loss: 0.329
[18, 44] train loss: 0.335
[18, 48] train loss: 0.370
[18, 52] train loss: 0.408
[18, 56] train loss: 0.338
[18, 60] train loss: 0.365
[18, 64] train loss: 0.366
[18] val loss: 0.462
[19, 4] train loss: 0.397
[19, 8] train loss: 0.369
[19, 12] train loss: 0.377
[19, 16] train loss: 0.358
[19, 20] train loss: 0.440
[19, 24] train loss: 0.418
[19, 28] train loss: 0.352
[19, 32] train loss: 0.333
[19, 36] train loss: 0.327
[19, 40] train loss: 0.355
[19, 44] train loss: 0.356
[19, 48] train loss: 0.406
[19, 52] train loss: 0.420
[19, 56] train loss: 0.339
[19, 60] train loss: 0.322
[19, 64] train loss: 0.309
[19] val loss: 0.453
[20, 4] train loss: 0.338
[20, 8] train loss: 0.332
[20, 12] train loss: 0.310
[20, 16] train loss: 0.322
[20, 20] train loss: 0.395
[20, 24] train loss: 0.403
[20, 28] train loss: 0.322
[20, 32] train loss: 0.314
[20, 36] train loss: 0.307
[20, 40] train loss: 0.325
[20, 44] train loss: 0.317
[20, 48] train loss: 0.364
[20, 52] train loss: 0.386
[20, 56] train loss: 0.308
[20, 60] train loss: 0.300
[20, 64] train loss: 0.302
[20] val loss: 0.448
Finished Training on 20 Epochs!
63/63 [==============================] - 0s 3ms/step
: Elapsed time for evaluation of 2000 events: 0.429 sec
Factory : Test method: PyTorch for Classification performance
:
: Setup PyTorch Model
: Executing user initialization code from /home/sftnight/build/workspace/root-makedoc-v626/rootspi/rdoc/src/v6-26-00-patches.build/tutorials/tmva/PyTorch_Generate_CNN_Model.py
: Loaded pytorch train function:
: Loaded pytorch optimizer:
: Loaded pytorch loss function:
: Loaded pytorch predict function:
: Load model from file: PyTorchTrainedModelCNN.pt
PyTorch : [dataset] : Evaluation of PyTorch on testing sample (2000 events)
: Elapsed time for evaluation of 2000 events: 1.39 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, which is larger than 200
Factory : Evaluate classifier: TMVA_DNN_CPU
:
TMVA_DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, which is larger than 200
Factory : Evaluate classifier: TMVA_CNN_CPU
:
TMVA_CNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, which is larger than 200
Factory : Evaluate classifier: PyKeras
:
PyKeras : [dataset] : Loop over test events and fill histograms with classifier response...
:
custom objects for loading model : {'optimizer': <class 'torch.optim.adam.Adam'>, 'criterion': BCELoss(), 'train_func': <function fit at 0x7f35b0367ca0>, 'predict_func': <function predict at 0x7f35b080b430>}
250/250 [==============================] - 1s 3ms/step
: Dataset[dataset] : variable plots are not produces ! The number of variables is 256 , it is larger than 200
Factory : Evaluate classifier: PyTorch
:
PyTorch : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produces ! The number of variables is 256 , it is larger than 200
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset PyKeras : 0.917
: dataset TMVA_CNN_CPU : 0.909
: dataset PyTorch : 0.902
: dataset TMVA_DNN_CPU : 0.886
: dataset BDT : 0.844
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset PyKeras : 0.355 (0.437) 0.742 (0.827) 0.934 (0.951)
: dataset TMVA_CNN_CPU : 0.305 (0.388) 0.737 (0.802) 0.911 (0.930)
: dataset PyTorch : 0.195 (0.362) 0.711 (0.803) 0.910 (0.931)
: dataset TMVA_DNN_CPU : 0.205 (0.340) 0.640 (0.745) 0.893 (0.925)
: dataset BDT : 0.195 (0.307) 0.555 (0.657) 0.807 (0.892)
: -------------------------------------------------------------------------------------------------------------------
:
Dataset:dataset : Created tree 'TestTree' with 2000 events
:
Dataset:dataset : Created tree 'TrainTree' with 8000 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
/***
# TMVA Classification Example Using a Convolutional Neural Network
**/
/// Helper function to create the input image data:
/// we create signal and background 2D histograms from 2D Gaussians
/// whose location (means in X and Y) varies from event to event.
/// Signal and background differ only in the Gaussian width:
/// the background width is a few percent larger than the signal width.
///
///
void MakeImagesTree(int n, int nh, int nw)
{
// image size (nh x nw)
const int ntot = nh * nw;
const TString fileOutName = TString::Format("images_data_%dx%d.root", nh, nw);
const int nRndmEvts = 10000; // number of events used to fill each image
double delta_sigma = 0.1; // ~3% difference in the sigma
double pixelNoise = 5;
double sX1 = 3;
double sY1 = 3;
double sX2 = sX1 + delta_sigma;
double sY2 = sY1 - delta_sigma;
auto h1 = new TH2D("h1", "h1", nh, 0, 10, nw, 0, 10);
auto h2 = new TH2D("h2", "h2", nh, 0, 10, nw, 0, 10);
auto f1 = new TF2("f1", "xygaus");
auto f2 = new TF2("f2", "xygaus");
TTree sgn("sig_tree", "signal_tree");
TTree bkg("bkg_tree", "background_tree");
TFile f(fileOutName, "RECREATE");
std::vector<float> x1(ntot);
std::vector<float> x2(ntot);
// create signal and background trees with a single branch
// an std::vector<float> of size nh x nw containing the image data
std::vector<float> *px1 = &x1;
std::vector<float> *px2 = &x2;
bkg.Branch("vars", "std::vector<float>", &px1);
sgn.Branch("vars", "std::vector<float>", &px2);
// std::cout << "create tree " << std::endl;
sgn.SetDirectory(&f);
bkg.SetDirectory(&f);
f1->SetParameters(1, 5, sX1, 5, sY1);
f2->SetParameters(1, 5, sX2, 5, sY2);
std::cout << "Filling ROOT tree " << std::endl;
for (int i = 0; i < n; ++i) {
if (i % 1000 == 0)
std::cout << "Generating image event ... " << i << std::endl;
h1->Reset();
h2->Reset();
// generate random means in range [3,7] to be not too much on the border
f1->SetParameter(1, gRandom->Uniform(3, 7));
f1->SetParameter(3, gRandom->Uniform(3, 7));
f2->SetParameter(1, gRandom->Uniform(3, 7));
f2->SetParameter(3, gRandom->Uniform(3, 7));
h1->FillRandom("f1", nRndmEvts);
h2->FillRandom("f2", nRndmEvts);
for (int k = 0; k < nh; ++k) {
for (int l = 0; l < nw; ++l) {
int m = k * nw + l;
// add some noise in each bin
x1[m] = h1->GetBinContent(k + 1, l + 1) + gRandom->Gaus(0, pixelNoise);
x2[m] = h2->GetBinContent(k + 1, l + 1) + gRandom->Gaus(0, pixelNoise);
}
}
sgn.Fill();
bkg.Fill();
}
sgn.Write();
bkg.Write();
Info("MakeImagesTree", "Signal and background tree with images data written to the file %s", f.GetName());
sgn.Print();
bkg.Print();
f.Close();
}
void TMVA_CNN_Classification(std::vector<bool> opt = {1, 1, 1, 1, 1})
{
bool useTMVACNN = (opt.size() > 0) ? opt[0] : false;
bool useKerasCNN = (opt.size() > 1) ? opt[1] : false;
bool useTMVADNN = (opt.size() > 2) ? opt[2] : false;
bool useTMVABDT = (opt.size() > 3) ? opt[3] : false;
bool usePyTorchCNN = (opt.size() > 4) ? opt[4] : false;
#ifndef R__HAS_TMVACPU
#ifndef R__HAS_TMVAGPU
Warning("TMVA_CNN_Classification",
"TMVA is not built with GPU or CPU multi-thread support. Cannot use TMVA Deep Learning for CNN");
useTMVACNN = false;
#endif
#endif
bool writeOutputFile = true;
int num_threads = 0; // use the default number of threads
// enable implicit multi-threading
if (num_threads >= 0) {
ROOT::EnableImplicitMT(num_threads);
if (num_threads > 0) gSystem->Setenv("OMP_NUM_THREADS", TString::Format("%d",num_threads));
}
else
gSystem->Setenv("OMP_NUM_THREADS", "1");
std::cout << "Running with nthreads = " << ROOT::GetThreadPoolSize() << std::endl;
#ifdef R__HAS_PYMVA
gSystem->Setenv("KERAS_BACKEND", "tensorflow");
// for using Keras
#else
useKerasCNN = false;
usePyTorchCNN = false;
#endif
TFile *outputFile = nullptr;
if (writeOutputFile)
outputFile = TFile::Open("TMVA_CNN_ClassificationOutput.root", "RECREATE");
/***
## Create TMVA Factory
Create the Factory class. Later you can choose the methods
whose performance you'd like to investigate.
The factory is the main TMVA object you interact with. Here is the list of parameters you need to pass:
- The first argument is the base name of all the output
weight files in the directory weight/ that will be created with the
method parameters.
- The second argument is the output file for the training results.
- The third argument is a string option defining some general configuration for the TMVA session.
For example, all TMVA output can be suppressed by removing the "!" (not) in front of the "Silent" argument in the
option string.
- Note that we disable any pre-transformation of the input variables and we avoid computing correlations between
input variables.
***/
TMVA::Factory factory(
"TMVA_CNN_Classification", outputFile,
"!V:ROC:!Silent:Color:AnalysisType=Classification:Transformations=None:!Correlations");
/***
## Declare DataLoader(s)
The next step is to declare the DataLoader class that deals with input variables.
Define the input variables that shall be used for the MVA training;
note that you may also use variable expressions, which can be parsed by TTree::Draw( "expression" ).
In this case the input data consist of 16x16-pixel images, stored in a single branch
holding an std::vector<float>; each pixel becomes one input variable.
**/
TMVA::DataLoader *loader = new TMVA::DataLoader("dataset");
/***
## Setup Dataset(s)
Define input data file and signal and background trees
**/
int imgSize = 16 * 16;
TString inputFileName = "images_data_16x16.root";
bool fileExist = !gSystem->AccessPathName(inputFileName);
// if the file does not exist, create it
if (!fileExist) {
MakeImagesTree(5000, 16, 16);
}
// TString inputFileName = "tmva_class_example.root";
auto inputFile = TFile::Open(inputFileName);
if (!inputFile) {
Error("TMVA_CNN_Classification", "Error opening input file %s - exit", inputFileName.Data());
return;
}
// --- Register the training and test trees
TTree *signalTree = (TTree *)inputFile->Get("sig_tree");
TTree *backgroundTree = (TTree *)inputFile->Get("bkg_tree");
int nEventsSig = signalTree->GetEntries();
int nEventsBkg = backgroundTree->GetEntries();
// global event weights per tree (see below for setting event-wise weights)
Double_t signalWeight = 1.0;
Double_t backgroundWeight = 1.0;
// You can add an arbitrary number of signal or background trees
loader->AddSignalTree(signalTree, signalWeight);
loader->AddBackgroundTree(backgroundTree, backgroundWeight);
/// add event variables (image)
/// use the new method (available since ROOT 6.20) to add a variable array for all image data
loader->AddVariablesArray("vars", imgSize);
// Set individual event weights (the variables must exist in the original TTree)
// for signal : loader->SetSignalWeightExpression("weight1*weight2");
// for background: loader->SetBackgroundWeightExpression("weight1*weight2");
// loader->SetBackgroundWeightExpression( "weight" );
// Apply additional cuts on the signal and background samples (can be different)
TCut mycuts = ""; // for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
TCut mycutb = ""; // for example: TCut mycutb = "abs(var1)<0.5";
// Tell the factory how to use the training and testing events
//
// If no numbers of events are given, half of the events in the tree are used
// for training, and the other half for testing:
// loader->PrepareTrainingAndTestTree( mycut, "SplitMode=random:!V" );
// It is possible also to specify the number of training and testing events,
// note we disable the computation of the correlation matrix of the input variables
int nTrainSig = 0.8 * nEventsSig;
int nTrainBkg = 0.8 * nEventsBkg;
// build the string options for DataLoader::PrepareTrainingAndTestTree
TString prepareOptions = TString::Format(
"nTrain_Signal=%d:nTrain_Background=%d:SplitMode=Random:SplitSeed=100:NormMode=NumEvents:!V:!CalcCorrelations",
nTrainSig, nTrainBkg);
loader->PrepareTrainingAndTestTree(mycuts, mycutb, prepareOptions);
/***
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 10000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 10000 events
**/
// signalTree->Print();
/****
# Booking Methods
Here we book the TMVA methods. We book a Boosted Decision Tree method (BDT)
**/
// Boosted Decision Trees
if (useTMVABDT) {
factory.BookMethod(loader, TMVA::Types::kBDT, "BDT",
"!V:NTrees=400:MinNodeSize=2.5%:MaxDepth=2:BoostType=AdaBoost:AdaBoostBeta=0.5:"
"UseBaggedBoost:BaggedSampleFraction=0.5:SeparationType=GiniIndex:nCuts=20");
}
/**
#### Booking Deep Neural Network
Here we book the DNN of TMVA. See the example TMVA_Higgs_Classification.C for a detailed description of the
options
**/
if (useTMVADNN) {
TString layoutString(
"Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR");
// Training strategies
// one can concatenate several training strings with different parameters (e.g. learning rates or regularization
// parameters); the training strings must be concatenated with the `|` delimiter
TString trainingString1("LearningRate=1e-3,Momentum=0.9,Repetitions=1,"
"ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,"
"MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
"Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.");
TString trainingStrategyString("TrainingStrategy=");
trainingStrategyString += trainingString1; // + "|" + trainingString2 + ....
// Build now the full DNN Option string
TString dnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
"WeightInitialization=XAVIER");
dnnOptions.Append(":");
dnnOptions.Append(layoutString);
dnnOptions.Append(":");
dnnOptions.Append(trainingStrategyString);
TString dnnMethodName = "TMVA_DNN_CPU";
// use GPU if available
#ifdef R__HAS_TMVAGPU
dnnOptions += ":Architecture=GPU";
dnnMethodName = "TMVA_DNN_GPU";
#elif defined(R__HAS_TMVACPU)
dnnOptions += ":Architecture=CPU";
#endif
factory.BookMethod(loader, TMVA::Types::kDL, dnnMethodName, dnnOptions);
}
/***
### Book Convolutional Neural Network in TMVA
For building a CNN one needs to define
- Input Layout : number of channels (in this case = 1) | image height | image width
- Batch Layout : batch size | number of channels | image size = (height*width)
Then one adds Convolutional layers and MaxPool layers.
- For a Convolutional layer the option string has to be:
- CONV | number of units | filter height | filter width | stride height | stride width | padding height | padding
width | activation function
- note in this case we are using a 3x3 filter with padding=1 and stride=1, so the output dimension of the
conv layer is equal to the input
- note we use a batch normalization layer after the first convolutional layer; this seems to help
convergence significantly
- For the MaxPool layer:
- MAXPOOL | pool height | pool width | stride height | stride width
The RESHAPE layer is needed to flatten the output before the Dense layer.
Note that running the CNN requires CPU or GPU support.
***/
if (useTMVACNN) {
TString inputLayoutString("InputLayout=1|16|16");
// Batch Layout
TString layoutString("Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,"
"RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR");
// Training strategies.
TString trainingString1("LearningRate=1e-3,Momentum=0.9,Repetitions=1,"
"ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,"
"MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
"Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0");
TString trainingStrategyString("TrainingStrategy=");
trainingStrategyString +=
trainingString1; // + "|" + trainingString2 + "|" + trainingString3; for concatenating more training strings
// Build full CNN Options.
TString cnnOptions("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
"WeightInitialization=XAVIER");
cnnOptions.Append(":");
cnnOptions.Append(inputLayoutString);
cnnOptions.Append(":");
cnnOptions.Append(layoutString);
cnnOptions.Append(":");
cnnOptions.Append(trainingStrategyString);
//// New DL (CNN)
TString cnnMethodName = "TMVA_CNN_CPU";
// use GPU if available
#ifdef R__HAS_TMVAGPU
cnnOptions += ":Architecture=GPU";
cnnMethodName = "TMVA_CNN_GPU";
#else
cnnOptions += ":Architecture=CPU";
cnnMethodName = "TMVA_CNN_CPU";
#endif
factory.BookMethod(loader, TMVA::Types::kDL, cnnMethodName, cnnOptions);
}
/**
### Book Convolutional Neural Network in Keras using a generated model
**/
if (useKerasCNN) {
Info("TMVA_CNN_Classification", "Building convolutional keras model");
// create a python script which can be executed
// create 2 Conv2D layers + MaxPool + Dense
TMacro m;
m.AddLine("import tensorflow");
m.AddLine("from tensorflow.keras.models import Sequential");
m.AddLine("from tensorflow.keras.optimizers import Adam");
m.AddLine(
"from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Reshape, BatchNormalization");
m.AddLine("");
m.AddLine("model = Sequential() ");
m.AddLine("model.add(Reshape((16, 16, 1), input_shape = (256, )))");
m.AddLine("model.add(Conv2D(10, kernel_size = (3, 3), kernel_initializer = 'glorot_normal',activation = "
"'relu', padding = 'same'))");
m.AddLine("model.add(BatchNormalization())");
m.AddLine("model.add(Conv2D(10, kernel_size = (3, 3), kernel_initializer = 'glorot_normal',activation = "
"'relu', padding = 'same'))");
// m.AddLine("model.add(BatchNormalization())");
m.AddLine("model.add(MaxPooling2D(pool_size = (2, 2), strides = (1,1))) ");
m.AddLine("model.add(Flatten())");
m.AddLine("model.add(Dense(256, activation = 'relu')) ");
m.AddLine("model.add(Dense(2, activation = 'sigmoid')) ");
m.AddLine("model.compile(loss = 'binary_crossentropy', optimizer = Adam(lr = 0.001), metrics = ['accuracy'])");
m.AddLine("model.save('model_cnn.h5')");
m.AddLine("model.summary()");
m.SaveSource("make_cnn_model.py");
// execute
gSystem->Exec(TMVA::Python_Executable() + " make_cnn_model.py");
if (gSystem->AccessPathName("model_cnn.h5")) {
Warning("TMVA_CNN_Classification", "Error creating Keras model file - skip using Keras");
} else {
// book PyKeras method only if Keras model could be created
Info("TMVA_CNN_Classification", "Booking tf.Keras CNN model");
factory.BookMethod(
loader, TMVA::Types::kPyKeras, "PyKeras",
"H:!V:VarTransform=None:FilenameModel=model_cnn.h5:tf.keras:"
"FilenameTrainedModel=trained_model_cnn.h5:NumEpochs=20:BatchSize=100:"
"GpuOptions=allow_growth=True"); // needed for RTX NVidia card and to avoid TF allocates all GPU memory
}
}
if (usePyTorchCNN) {
Info("TMVA_CNN_Classification", "Using Convolutional PyTorch Model");
TString pyTorchFileName = gROOT->GetTutorialDir() + TString("/tmva/PyTorch_Generate_CNN_Model.py");
// check that PyTorch can be imported and that the file defining the model (used later when booking the method) exists
if (gSystem->Exec(TMVA::Python_Executable() + " -c 'import torch'") || gSystem->AccessPathName(pyTorchFileName) ) {
Warning("TMVA_CNN_Classification", "PyTorch is not installed or the model-building file does not exist - skip using PyTorch");
}
else {
// book PyTorch method only if PyTorch model could be created
Info("TMVA_CNN_Classification", "Booking PyTorch CNN model");
TString methodOpt = "H:!V:VarTransform=None:FilenameModel=PyTorchModelCNN.pt:"
"FilenameTrainedModel=PyTorchTrainedModelCNN.pt:NumEpochs=20:BatchSize=100";
methodOpt += TString(":UserCode=") + pyTorchFileName;
factory.BookMethod(loader, TMVA::Types::kPyTorch, "PyTorch", methodOpt);
}
}
//// ## Train Methods
factory.TrainAllMethods();
/// ## Test and Evaluate Methods
factory.TestAllMethods();
factory.EvaluateAllMethods();
/// ## Plot ROC Curve
auto c1 = factory.GetROCCurve(loader);
c1->Draw();
// close the output file to save the results
outputFile->Close();
}
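As a quick sanity check of the CNN layout claims above (a 3x3 filter with stride 1 and padding 1 leaves a 16x16 image unchanged, while MAXPOOL|2|2|1|1 shrinks it to 15x15), the standard convolution output-size formula can be evaluated in plain C++; `convOutSize` is an illustrative helper, not part of TMVA:

```cpp
#include <cassert>

// Standard formula for the spatial output size of a convolution or
// pooling window: out = (W - F + 2*P) / S + 1, where W = input size,
// F = filter (or pool) size, S = stride, P = padding.
constexpr int convOutSize(int W, int F, int S, int P)
{
   return (W - F + 2 * P) / S + 1;
}
```

For the 16x16 images used here, `convOutSize(16, 3, 1, 1)` evaluates to 16, so the CONV layers preserve the spatial dimensions, while `convOutSize(16, 2, 1, 0)` evaluates to 15, matching the MAXPOOL|2|2|1|1 layer.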
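The image generation in MakeImagesTree above can also be sketched without ROOT: the essential ingredients are a 2D Gaussian intensity on a [0,10]x[0,10] grid, optional per-pixel Gaussian noise, and the row-major flattening m = k*nw + l used when filling the vars branch. The helper below is a hypothetical, ROOT-free illustration (the name and the analytic fill are my own; the macro instead fills TH2D histograms by random sampling with FillRandom):

```cpp
#include <cmath>
#include <random>
#include <vector>

// Fill a flattened nh x nw image with a 2D Gaussian of widths (sx, sy)
// centred at (mx, my), plus optional Gaussian pixel noise. Mirrors the
// structure of MakeImagesTree; names here are illustrative.
std::vector<float> makeToyImage(int nh, int nw, double mx, double my,
                                double sx, double sy, double noise,
                                std::mt19937 &rng)
{
   std::normal_distribution<double> gaus(0.0, noise > 0 ? noise : 1.0);
   std::vector<float> img(nh * nw);
   for (int k = 0; k < nh; ++k) {
      for (int l = 0; l < nw; ++l) {
         // pixel (bin) centres on a [0,10] x [0,10] grid, as in the macro
         const double x = (k + 0.5) * 10.0 / nh;
         const double y = (l + 0.5) * 10.0 / nw;
         double val = std::exp(-0.5 * ((x - mx) * (x - mx) / (sx * sx) +
                                       (y - my) * (y - my) / (sy * sy)));
         if (noise > 0)
            val += gaus(rng); // per-pixel Gaussian noise
         img[k * nw + l] = static_cast<float>(val); // row-major flattening
      }
   }
   return img;
}
```

With a centred Gaussian the brightest pixels cluster around the middle of the grid, just as in the histograms the macro writes to the signal and background trees.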
Author
Lorenzo Moneta

Definition in file TMVA_CNN_Classification.C.