Steepest Gradient Descent algorithm (SGD)
Implements a steepest gradient descent minimization algorithm.
Definition at line 334 of file NeuralNet.h.
Public Member Functions

- Steepest (double learningRate=1e-4, double momentum=0.5, size_t repetitions=10)
  c'tor
- template<typename Function, typename Weights, typename PassThrough>
  double operator() (Function &fitnessFunction, Weights &weights, PassThrough &passThrough)
  operator to call the steepest gradient descent algorithm

Public Attributes

- double m_alpha
  internal parameter (learningRate)
- double m_beta
  internal parameter (momentum)
- std::vector<double> m_localGradients
  local gradients, kept for reuse within a thread
- std::vector<double> m_localWeights
  local weights, kept for reuse within a thread
- std::vector<double> m_prevGradients
  remembers the gradients of the previous step
- size_t m_repetitions
#include <TMVA/NeuralNet.h>
TMVA::DNN::Steepest::Steepest (double learningRate = 1e-4, double momentum = 0.5, size_t repetitions = 10)    [inline]
C'tor.

Parameters:
- learningRate: denotes the learning rate for the SGD algorithm
- momentum: fraction of the velocity which is taken over from the last step
- repetitions: re-compute the gradients every "repetitions" steps
Definition at line 349 of file NeuralNet.h.
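A minimal construction sketch, relying only on the constructor signature documented above; the values shown are simply the documented defaults:

    #include <TMVA/NeuralNet.h>

    // Steepest gradient descent minimizer: learning rate 1e-4, momentum 0.5,
    // gradients re-computed every 10 steps (the documented defaults).
    TMVA::DNN::Steepest minimizer (1e-4, 0.5, 10);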
template<typename Function, typename Weights, typename PassThrough>
double TMVA::DNN::Steepest::operator() (Function &fitnessFunction, Weights &weights, PassThrough &passThrough)
Operator to call the steepest gradient descent algorithm; it is the entry point that starts the minimization procedure and contains the implementation of the algorithm.

Parameters:
- fitnessFunction: (templated) function which has to be provided; this function is minimized
- weights: (templated) reference to a container of weights; the result of the minimization procedure is returned via this reference (the container needs to support std::begin and std::end)
- passThrough: (templated) object which can hold any data the fitness function needs; this object is not touched by the minimizer and is simply handed to the fitness function when it is called

Can be used with multithreading (i.e. "HogWild!" style); see the call in trainCycle.
Definition at line 271 of file NeuralNet.icc.
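For illustration, a minimal self-contained sketch of a gradient-descent-with-momentum loop in the spirit of this operator. This is an assumption-laden sketch, not the TMVA code: the names (steepestSketch, prevStep), the fitness calling convention fitness(weights, gradients), and the order of the momentum bookkeeping are illustrative only; the authoritative implementation is in NeuralNet.icc.

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Conceptual sketch: steepest gradient descent with a momentum term.
    // The fitness function evaluates the error for the current weights and
    // fills the gradient vector; it is called once per repetition here.
    double steepestSketch (const std::function<double (std::vector<double> &,
                                                       std::vector<double> &)> &fitness,
                           std::vector<double> &weights,
                           double alpha, double beta, std::size_t repetitions)
    {
       std::vector<double> prevStep (weights.size (), 0.0);
       std::vector<double> gradients (weights.size (), 0.0);
       double error = 0.0;
       for (std::size_t rep = 0; rep < repetitions; ++rep)
       {
          std::fill (gradients.begin (), gradients.end (), 0.0);
          error = fitness (weights, gradients); // evaluate error, fill gradients
          for (std::size_t i = 0; i < weights.size (); ++i)
          {
             // keep a beta-fraction of the previous step (momentum) and
             // take a new step of size alpha along the negative gradient
             double step = beta * prevStep[i] - alpha * gradients[i];
             weights[i] += step;
             prevStep[i] = step;
          }
       }
       return error;
    }

The HogWild!-style multithreaded use mentioned above would run several such loops concurrently on shared weights without locking; that aspect is not reproduced in this sketch.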
double TMVA::DNN::Steepest::m_alpha
internal parameter (learningRate)
Definition at line 372 of file NeuralNet.h.
double TMVA::DNN::Steepest::m_beta
internal parameter (momentum)
Definition at line 373 of file NeuralNet.h.
std::vector<double> TMVA::DNN::Steepest::m_localGradients

Local gradients, kept for reuse within a thread.
Definition at line 377 of file NeuralNet.h.
std::vector<double> TMVA::DNN::Steepest::m_localWeights

Local weights, kept for reuse within a thread.
Definition at line 376 of file NeuralNet.h.
std::vector<double> TMVA::DNN::Steepest::m_prevGradients

Remembers the gradients of the previous step.
Definition at line 374 of file NeuralNet.h.
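Together with m_alpha and m_beta this presumably realizes the usual momentum update, roughly step_t = beta * step_(t-1) - alpha * gradient_t, with m_prevGradients carrying the previous step's gradient information between iterations; the exact bookkeeping is in Steepest::operator() in NeuralNet.icc.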
size_t TMVA::DNN::Steepest::m_repetitions
Definition at line 338 of file NeuralNet.h.