ROOT Reference Guide
TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t > Class Template Reference

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
class TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >

Adagrad optimizer class.

This class implements the Adagrad optimizer, which adapts the effective learning rate of each parameter by scaling the update with the inverse square root of the accumulated sum of that parameter's past squared gradients.

Definition at line 45 of file Adagrad.h.
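For reference, the textbook Adagrad update that this optimizer follows can be written as below. The notation is the standard one from the literature, not taken from the header; here \(\eta\) corresponds to the learningRate parameter (default 0.01) and \(\epsilon\) to the epsilon parameter (default 1e-8):

\[
G_t = G_{t-1} + g_t \odot g_t
\]
\[
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon} \odot g_t
\]

where \(g_t\) is the current gradient, \(G_t\) the element-wise sum of past squared gradients (stored in fPastSquaredWeightGradients and fPastSquaredBiasGradients), and \(\odot\) denotes element-wise multiplication.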

Public Types

using Matrix_t = typename Architecture_t::Matrix_t
using Scalar_t = typename Architecture_t::Scalar_t

Public Member Functions

 TAdagrad (DeepNet_t &deepNet, Scalar_t learningRate=0.01, Scalar_t epsilon=1e-8)
 Constructor.
 ~TAdagrad ()=default
 Destructor.
Scalar_t GetEpsilon () const
 Getters.
std::vector< std::vector< Matrix_t > > & GetPastSquaredBiasGradients ()
std::vector< Matrix_t > & GetPastSquaredBiasGradientsAt (size_t i)
std::vector< std::vector< Matrix_t > > & GetPastSquaredWeightGradients ()
std::vector< Matrix_t > & GetPastSquaredWeightGradientsAt (size_t i)

Protected Member Functions

void UpdateBiases (size_t layerIndex, std::vector< Matrix_t > &biases, const std::vector< Matrix_t > &biasGradients) override
 Update the biases, given the current bias gradients.
void UpdateWeights (size_t layerIndex, std::vector< Matrix_t > &weights, const std::vector< Matrix_t > &weightGradients) override
 Update the weights, given the current weight gradients.

Protected Attributes

Scalar_t fEpsilon
 The smoothing term used to avoid division by zero.
std::vector< std::vector< Matrix_t > > fPastSquaredBiasGradients
 The sum of the square of the past bias gradients associated with the deep net.
std::vector< std::vector< Matrix_t > > fPastSquaredWeightGradients
 The sum of the square of the past weight gradients associated with the deep net.
std::vector< std::vector< Matrix_t > > fWorkBiasTensor
 Working tensor used to keep a temporary copy of the biases or bias gradients.
std::vector< std::vector< Matrix_t > > fWorkWeightTensor
 Working tensor used to keep a temporary copy of the weights or weight gradients.

#include <TMVA/DNN/Adagrad.h>

Inheritance diagram for TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >:
VOptimizer< Architecture_t, VGeneralLayer< Architecture_t >, TDeepNet< Architecture_t, VGeneralLayer< Architecture_t > > >

Member Typedef Documentation

◆ Matrix_t

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
using TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::Matrix_t = typename Architecture_t::Matrix_t

Definition at line 47 of file Adagrad.h.

◆ Scalar_t

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
using TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::Scalar_t = typename Architecture_t::Scalar_t

Definition at line 48 of file Adagrad.h.

Constructor & Destructor Documentation

◆ TAdagrad()

template<typename Architecture_t, typename Layer_t, typename DeepNet_t>
TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::TAdagrad ( DeepNet_t & deepNet,
Scalar_t learningRate = 0.01,
Scalar_t epsilon = 1e-8 )

Constructor.

Definition at line 90 of file Adagrad.h.

◆ ~TAdagrad()

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::~TAdagrad ( )
default

Destructor.

Member Function Documentation

◆ GetEpsilon()

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::GetEpsilon ( ) const
inline

Getters.

Definition at line 76 of file Adagrad.h.

◆ GetPastSquaredBiasGradients()

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradients ( )
inline

Definition at line 81 of file Adagrad.h.

◆ GetPastSquaredBiasGradientsAt()

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredBiasGradientsAt ( size_t i)
inline

Definition at line 82 of file Adagrad.h.

◆ GetPastSquaredWeightGradients()

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< std::vector< Matrix_t > > & TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradients ( )
inline

Definition at line 78 of file Adagrad.h.

◆ GetPastSquaredWeightGradientsAt()

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector< Matrix_t > & TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::GetPastSquaredWeightGradientsAt ( size_t i)
inline

Definition at line 79 of file Adagrad.h.

◆ UpdateBiases()

template<typename Architecture_t, typename Layer_t, typename DeepNet_t>
auto TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::UpdateBiases ( size_t layerIndex,
std::vector< Matrix_t > & biases,
const std::vector< Matrix_t > & biasGradients )
override protected

Update the biases, given the current bias gradients.

Definition at line 158 of file Adagrad.h.

◆ UpdateWeights()

template<typename Architecture_t, typename Layer_t, typename DeepNet_t>
auto TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::UpdateWeights ( size_t layerIndex,
std::vector< Matrix_t > & weights,
const std::vector< Matrix_t > & weightGradients )
override protected

Update the weights, given the current weight gradients.

Definition at line 126 of file Adagrad.h.
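To illustrate what an Adagrad weight update performs per element, here is a minimal, self-contained sketch. The matrix types and architecture backend of the actual TMVA implementation are replaced with a plain std::vector&lt;double&gt;; the function and parameter names below are illustrative, not part of the TMVA API.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative Adagrad step on a flat parameter vector.
// `pastSquaredGradients` plays the role of fPastSquaredWeightGradients,
// `epsilon` the role of fEpsilon. This is a sketch, not the TMVA code.
void AdagradStep(std::vector<double> &weights,
                 const std::vector<double> &gradients,
                 std::vector<double> &pastSquaredGradients,
                 double learningRate = 0.01, double epsilon = 1e-8)
{
   for (std::size_t i = 0; i < weights.size(); ++i) {
      // Accumulate the square of the current gradient.
      pastSquaredGradients[i] += gradients[i] * gradients[i];
      // Scale the step by the inverse root of the accumulated history,
      // shifted by epsilon to avoid division by zero.
      weights[i] -= learningRate * gradients[i] /
                    (std::sqrt(pastSquaredGradients[i]) + epsilon);
   }
}
```

Because the squared-gradient history only grows, the effective step size for each parameter shrinks monotonically over training, which is the characteristic behavior of Adagrad.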

Member Data Documentation

◆ fEpsilon

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
Scalar_t TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::fEpsilon
protected

The smoothing term used to avoid division by zero.

Definition at line 51 of file Adagrad.h.

◆ fPastSquaredBiasGradients

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredBiasGradients
protected

The sum of the square of the past bias gradients associated with the deep net.

Definition at line 56 of file Adagrad.h.

◆ fPastSquaredWeightGradients

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::fPastSquaredWeightGradients
protected

The sum of the square of the past weight gradients associated with the deep net.

Definition at line 54 of file Adagrad.h.

◆ fWorkBiasTensor

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::fWorkBiasTensor
protected

Working tensor used to keep a temporary copy of the biases or bias gradients.

Definition at line 60 of file Adagrad.h.

◆ fWorkWeightTensor

template<typename Architecture_t, typename Layer_t = VGeneralLayer<Architecture_t>, typename DeepNet_t = TDeepNet<Architecture_t, Layer_t>>
std::vector<std::vector<Matrix_t> > TMVA::DNN::TAdagrad< Architecture_t, Layer_t, DeepNet_t >::fWorkWeightTensor
protected

Working tensor used to keep a temporary copy of the weights or weight gradients.

Definition at line 58 of file Adagrad.h.


The documentation for this class was generated from the following file:
Adagrad.h