Adam Optimizer class.
This class implements the Adam (Adaptive Moment Estimation) optimizer. It keeps decaying averages of the first and second moments of the past weight and bias gradients of the deep net and uses them to adapt the update applied to every parameter.
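For reference, the standard Adam update rule that the members below refer to is, for a single parameter theta with gradient g_t at step t (how the bias correction is folded into the update, e.g. into an effective learning rate, is an implementation detail of Adam.h):

```latex
m_t = \beta_1\, m_{t-1} + (1-\beta_1)\, g_t
v_t = \beta_2\, v_{t-1} + (1-\beta_2)\, g_t^2
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```

Here alpha is the learning rate, beta1 and beta2 the decay rates of the two moment estimates, and epsilon the smoothing term that avoids division by zero.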
Public Types

| using | Matrix_t = typename Architecture_t::Matrix_t |
| using | Scalar_t = typename Architecture_t::Scalar_t |

Public Types inherited from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >

| using | Matrix_t = typename Architecture_t::Matrix_t |
| using | Scalar_t = typename Architecture_t::Scalar_t |
Public Member Functions

| | TAdam (DeepNet_t &deepNet, Scalar_t learningRate=0.001, Scalar_t beta1=0.9, Scalar_t beta2=0.999, Scalar_t epsilon=1e-7) |
| | Constructor (see the usage sketch after the member lists below). |
| | ~TAdam ()=default |
| | Destructor. |
| Scalar_t | GetBeta1 () const |
| | Getters. |
| Scalar_t | GetBeta2 () const |
| Scalar_t | GetEpsilon () const |
| std::vector< std::vector< Matrix_t > > & | GetFirstMomentBiases () |
| std::vector< Matrix_t > & | GetFirstMomentBiasesAt (size_t i) |
| std::vector< std::vector< Matrix_t > > & | GetFirstMomentWeights () |
| std::vector< Matrix_t > & | GetFirstMomentWeightsAt (size_t i) |
| std::vector< std::vector< Matrix_t > > & | GetSecondMomentBiases () |
| std::vector< Matrix_t > & | GetSecondMomentBiasesAt (size_t i) |
| std::vector< std::vector< Matrix_t > > & | GetSecondMomentWeights () |
| std::vector< Matrix_t > & | GetSecondMomentWeightsAt (size_t i) |
Public Member Functions inherited from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >

| | VOptimizer (Scalar_t learningRate, DeepNet_t &deepNet) |
| | Constructor. |
| virtual | ~VOptimizer ()=default |
| | Virtual Destructor. |
| size_t | GetGlobalStep () const |
| Layer_t * | GetLayerAt (size_t i) |
| std::vector< Layer_t * > & | GetLayers () |
| Scalar_t | GetLearningRate () const |
| | Getters. |
| void | IncrementGlobalStep () |
| | Increments the global step. |
| void | SetLearningRate (size_t learningRate) |
| | Setters. |
| void | Step () |
| | Performs one step of optimization. |
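A minimal usage sketch of the interface above, assuming an already built and initialized CPU-based deep net. The layer setup, data handling and the forward/backward pass that fills the gradients are elided; the hyperparameter values are simply the constructor defaults.

```cpp
#include "TMVA/DNN/Adam.h"
#include "TMVA/DNN/DeepNet.h"
#include "TMVA/DNN/Architectures/Cpu.h"

using Architecture_t = TMVA::DNN::TCpu<Double_t>;
using Layer_t        = TMVA::DNN::VGeneralLayer<Architecture_t>;
using DeepNet_t      = TMVA::DNN::TDeepNet<Architecture_t, Layer_t>;

void TrainWithAdam(DeepNet_t &deepNet, size_t nIterations)
{
   // Construct the optimizer for the given net; the arguments are the defaults
   // listed in the constructor row above.
   TMVA::DNN::TAdam<Architecture_t, Layer_t, DeepNet_t> adam(
      deepNet, /*learningRate=*/0.001, /*beta1=*/0.9, /*beta2=*/0.999, /*epsilon=*/1e-7);

   for (size_t i = 0; i < nIterations; ++i) {
      // ... forward and backward pass on the current batch, filling the
      //     weight and bias gradients of every layer ...

      adam.IncrementGlobalStep(); // advance the global step counter
      adam.Step();                // apply one Adam update to all weights and biases
   }

   // The hyperparameters can be read back through the getters.
   Double_t lr = adam.GetLearningRate();
   Double_t b1 = adam.GetBeta1();
   (void)lr; (void)b1;
}
```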
Protected Member Functions

| void | UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients) override |
| | Update the biases, given the current bias gradients. |
| void | UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients) override |
| | Update the weights, given the current weight gradients. |
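For illustration only, the sketch below shows the element-wise update that UpdateWeights and UpdateBiases perform for one parameter matrix of one layer, written with plain std::vector<double> instead of Matrix_t and without the bias-correction bookkeeping that the real implementation derives from the global step:

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch, not the TMVA implementation: one Adam update of a flat
// parameter vector, given its gradient and the two running moment estimates.
void AdamUpdate(std::vector<double> &theta,          // weights or biases of one matrix
                const std::vector<double> &gradient, // current gradients
                std::vector<double> &firstMoment,    // decaying average of past gradients
                std::vector<double> &secondMoment,   // decaying average of past squared gradients
                double learningRate, double beta1, double beta2, double epsilon)
{
   for (size_t j = 0; j < theta.size(); ++j) {
      firstMoment[j]  = beta1 * firstMoment[j] + (1.0 - beta1) * gradient[j];
      secondMoment[j] = beta2 * secondMoment[j] + (1.0 - beta2) * gradient[j] * gradient[j];
      theta[j] -= learningRate * firstMoment[j] / (std::sqrt(secondMoment[j]) + epsilon);
   }
}
```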
Protected Attributes

| Scalar_t | fBeta1 |
| | The Beta1 constant used by the optimizer. |
| Scalar_t | fBeta2 |
| | The Beta2 constant used by the optimizer. |
| Scalar_t | fEpsilon |
| | The smoothing term used to avoid division by zero. |
| std::vector< std::vector< Matrix_t > > | fFirstMomentBiases |
| | The decaying average of the first moment of the past bias gradients associated with the deep net. |
| std::vector< std::vector< Matrix_t > > | fFirstMomentWeights |
| | The decaying average of the first moment of the past weight gradients associated with the deep net. |
| std::vector< std::vector< Matrix_t > > | fSecondMomentBiases |
| | The decaying average of the second moment of the past bias gradients associated with the deep net. |
| std::vector< std::vector< Matrix_t > > | fSecondMomentWeights |
| | The decaying average of the second moment of the past weight gradients associated with the deep net. |
Protected Attributes inherited from TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >

| DeepNet_t & | fDeepNet |
| | The reference to the deep net. |
| size_t | fGlobalStep |
| | The current global step count during training. |
| Scalar_t | fLearningRate |
| | The learning rate used for training. |
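The moment containers mirror the layout of the net's parameters: the outer index is the layer, the inner index the weight (or bias) matrix within that layer, which is what the ...At(size_t i) getters slice on. A small sketch, assuming an existing TAdam instance named adam as in the usage example above:

```cpp
// Inspect the accumulated first moment of the weights of layer 0.
auto &firstMomentsLayer0 = adam.GetFirstMomentWeightsAt(0); // std::vector<Matrix_t>&
size_t nWeightMatrices   = firstMomentsLayer0.size();       // one entry per weight matrix of that layer
```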
#include <TMVA/DNN/Adam.h>
Member Function Documentation

~TAdam() = default
Destructor.

UpdateBiases() [override, protected, virtual]
Update the biases, given the current bias gradients.
Implements TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >.

UpdateWeights() [override, protected, virtual]
Update the weights, given the current weight gradients.
Implements TMVA::DNN::VOptimizer< Architecture_t, Layer_t, DeepNet_t >.