Adadelta optimizer class.

This class implements the Adadelta optimization method, which adapts per-parameter step sizes using decaying averages of squared gradients and squared parameter updates.
Definition at line 44 of file Adadelta.h.
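For orientation, the quantities held by this class (fRho, fEpsilon, and the fPastSquared* accumulators documented below) correspond to the terms of the standard Adadelta update rule, which can be sketched as:

```latex
\begin{align*}
E[g^2]_t &= \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2 \\
\Delta\theta_t &= -\,\frac{\sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\; g_t \\
E[\Delta\theta^2]_t &= \rho\, E[\Delta\theta^2]_{t-1} + (1-\rho)\, \Delta\theta_t^2 \\
\theta_{t+1} &= \theta_t + \eta\, \Delta\theta_t
\end{align*}
```

where \(g_t\) is the current gradient, \(\theta\) a parameter, \(\rho\) the decay constant (fRho), \(\epsilon\) the smoothing term (fEpsilon), and \(\eta\) the learning rate (default 1.0 here).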
Public Types

  using Matrix_t = typename Architecture_t::Matrix_t
  using Scalar_t = typename Architecture_t::Scalar_t

Public Types inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >

  using Matrix_t = typename Architecture_t::Matrix_t
  using Scalar_t = typename Architecture_t::Scalar_t
Public Member Functions

  TAdadelta (DeepNet_t &deepNet, Scalar_t learningRate=1.0, Scalar_t rho=0.95, Scalar_t epsilon=1e-8)
      Constructor.
  ~TAdadelta ()=default
      Destructor.
  Scalar_t GetEpsilon () const
  std::vector< std::vector< Matrix_t > > & GetPastSquaredBiasGradients ()
  std::vector< Matrix_t > & GetPastSquaredBiasGradientsAt (size_t i)
  std::vector< std::vector< Matrix_t > > & GetPastSquaredBiasUpdates ()
  std::vector< Matrix_t > & GetPastSquaredBiasUpdatesAt (size_t i)
  std::vector< std::vector< Matrix_t > > & GetPastSquaredWeightGradients ()
  std::vector< Matrix_t > & GetPastSquaredWeightGradientsAt (size_t i)
  std::vector< std::vector< Matrix_t > > & GetPastSquaredWeightUpdates ()
  std::vector< Matrix_t > & GetPastSquaredWeightUpdatesAt (size_t i)
  Scalar_t GetRho () const
      Getters.
Public Member Functions inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >

  VOptimizer (Scalar_t learningRate, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > &deepNet)
      Constructor.
  virtual ~VOptimizer ()=default
      Virtual destructor.
  size_t GetGlobalStep () const
  VGeneralLayer< Architecture_t > * GetLayerAt (size_t i)
  std::vector< VGeneralLayer< Architecture_t > * > & GetLayers ()
  Scalar_t GetLearningRate () const
      Getters.
  void IncrementGlobalStep ()
      Increments the global step.
  void SetLearningRate (Scalar_t learningRate)
      Setters.
  void Step ()
      Performs one step of optimization.
Protected Member Functions

  void UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients)
      Update the biases, given the current bias gradients.
  void UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients)
      Update the weights, given the current weight gradients.

Protected Member Functions inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >

  virtual void UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients)=0
      Update the biases, given the current bias gradients.
  virtual void UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients)=0
      Update the weights, given the current weight gradients.
Protected Attributes

  Scalar_t fEpsilon
      The smoothing term used to avoid division by zero.
  std::vector< std::vector< Matrix_t > > fPastSquaredBiasGradients
      The running accumulation of the squared past bias gradients associated with the deep net.
  std::vector< std::vector< Matrix_t > > fPastSquaredBiasUpdates
      The running accumulation of the squared past bias updates associated with the deep net.
  std::vector< std::vector< Matrix_t > > fPastSquaredWeightGradients
      The running accumulation of the squared past weight gradients associated with the deep net.
  std::vector< std::vector< Matrix_t > > fPastSquaredWeightUpdates
      The running accumulation of the squared past weight updates associated with the deep net.
  Scalar_t fRho
      The rho decay constant used by the optimizer.
  std::vector< std::vector< Matrix_t > > fWorkBiasTensor1
      Working tensor used to keep a temporary copy of the biases or bias gradients.
  std::vector< std::vector< Matrix_t > > fWorkBiasTensor2
      Working tensor used to keep a temporary copy of the biases or bias gradients.
  std::vector< std::vector< Matrix_t > > fWorkWeightTensor1
      Working tensor used to keep a temporary copy of the weights or weight gradients.
  std::vector< std::vector< Matrix_t > > fWorkWeightTensor2
      Working tensor used to keep a temporary copy of the weights or weight gradients.
Protected Attributes inherited from TMVA::DNN::VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >

  TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > & fDeepNet
      The reference to the deep net.
  size_t fGlobalStep
      The current global step count during training.
  Scalar_t fLearningRate
      The learning rate used for training.
#include <TMVA/DNN/Adadelta.h>
Member Typedef Documentation

using TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::Matrix_t = typename Architecture_t::Matrix_t

Definition at line 46 of file Adadelta.h.

using TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::Scalar_t = typename Architecture_t::Scalar_t

Definition at line 47 of file Adadelta.h.
Constructor & Destructor Documentation

TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::TAdadelta (DeepNet_t &deepNet, Scalar_t learningRate = 1.0, Scalar_t rho = 0.95, Scalar_t epsilon = 1e-8)

Constructor.

Definition at line 101 of file Adadelta.h.

TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::~TAdadelta () = default

Destructor.
Member Function Documentation

Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetEpsilon () const  [inline]

Definition at line 81 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradients ()  [inline]

Definition at line 86 of file Adadelta.h.

std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradientsAt (size_t i)  [inline]

Definition at line 87 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasUpdates ()  [inline]

Definition at line 92 of file Adadelta.h.

std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasUpdatesAt (size_t i)  [inline]

Definition at line 93 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradients ()  [inline]

Definition at line 83 of file Adadelta.h.

std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradientsAt (size_t i)  [inline]

Definition at line 84 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightUpdates ()  [inline]

Definition at line 89 of file Adadelta.h.

std::vector< Matrix_t > & TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightUpdatesAt (size_t i)  [inline]

Definition at line 90 of file Adadelta.h.

Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::GetRho () const  [inline]

Getters.

Definition at line 80 of file Adadelta.h.
void TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients)  [protected], [virtual]

Update the biases, given the current bias gradients.

Definition at line 205 of file Adadelta.h.

void TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients)  [protected], [virtual]

Update the weights, given the current weight gradients.

Definition at line 146 of file Adadelta.h.
Member Data Documentation

Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fEpsilon  [protected]

The smoothing term used to avoid division by zero.

Definition at line 51 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredBiasGradients  [protected]

The running accumulation of the squared past bias gradients associated with the deep net.

Definition at line 54 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredBiasUpdates  [protected]

The running accumulation of the squared past bias updates associated with the deep net.

Definition at line 59 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredWeightGradients  [protected]

The running accumulation of the squared past weight gradients associated with the deep net.

Definition at line 52 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredWeightUpdates  [protected]

The running accumulation of the squared past weight updates associated with the deep net.

Definition at line 57 of file Adadelta.h.

Scalar_t TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fRho  [protected]

The rho decay constant used by the optimizer.

Definition at line 50 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor1  [protected]

Working tensor used to keep a temporary copy of the biases or bias gradients.

Definition at line 62 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor2  [protected]

Working tensor used to keep a temporary copy of the biases or bias gradients.

Definition at line 64 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor1  [protected]

Working tensor used to keep a temporary copy of the weights or weight gradients.

Definition at line 61 of file Adadelta.h.

std::vector< std::vector< Matrix_t > > TMVA::DNN::TAdadelta< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor2  [protected]

Working tensor used to keep a temporary copy of the weights or weight gradients.

Definition at line 63 of file Adadelta.h.