Generic layer class.
This generic layer class represents a layer of a neural network with a given width n and activation function f. The activation of the layer is computed as \(\mathbf{u} = \mathbf{W}\mathbf{x} + \boldsymbol{\theta}\) followed by \(\mathbf{v} = f(\mathbf{u})\), with the activation function f applied element-wise.
In addition to the weight and bias matrices, each layer allocates memory for its activations and the corresponding first partial derivatives of the activation function, as well as for the gradients of the weights and biases.
The layer provides member functions for the forward propagation of activations through the given layer.
Public Types | |
using | Matrix_t = typename Architecture_t::Matrix_t |
using | Scalar_t = typename Architecture_t::Scalar_t |
using | Tensor_t = typename Architecture_t::Tensor_t |
Public Member Functions | |
TLayer (const TLayer &) | |
TLayer (size_t BatchSize, size_t InputWidth, size_t Width, EActivationFunction f, Scalar_t dropoutProbability) | |
void | Backward (Matrix_t &gradients_backward, const Matrix_t &activations_backward, ERegularization r, Scalar_t weightDecay) |
Compute weight, bias and activation gradients. | |
void | Forward (Matrix_t &input, bool applyDropout=false) |
Compute activation of the layer for the given input. | |
EActivationFunction | GetActivationFunction () const |
Matrix_t & | GetActivationGradients () |
const Matrix_t & | GetActivationGradients () const |
size_t | GetBatchSize () const |
Matrix_t & | GetBiases () |
const Matrix_t & | GetBiases () const |
Matrix_t & | GetBiasGradients () |
const Matrix_t & | GetBiasGradients () const |
Scalar_t | GetDropoutProbability () const |
size_t | GetInputWidth () const |
Matrix_t & | GetOutput () |
const Matrix_t & | GetOutput () const |
Matrix_t & | GetWeightGradients () |
const Matrix_t & | GetWeightGradients () const |
Matrix_t & | GetWeights () |
const Matrix_t & | GetWeights () const |
size_t | GetWidth () const |
void | Initialize (EInitialization m) |
Initialize the weights according to the given initialization method. | |
void | Print () const |
void | SetDropoutProbability (Scalar_t p) |
Private Attributes | |
Matrix_t | fActivationGradients |
Gradients w.r.t. the activations of this layer. | |
size_t | fBatchSize |
Batch size used for training and evaluation. | |
Matrix_t | fBiases |
The bias values of this layer. | |
Matrix_t | fBiasGradients |
Gradients w.r.t. the bias values of this layer. | |
Matrix_t | fDerivatives |
First derivatives of the activation function for this layer. | |
Scalar_t | fDropoutProbability |
Probability that an input is active. | |
EActivationFunction | fF |
Activation function of the layer. | |
size_t | fInputWidth |
Number of neurons of the previous layer. | |
Matrix_t | fOutput |
Activations of this layer. | |
Matrix_t | fWeightGradients |
Gradients w.r.t. the weights of this layer. | |
Matrix_t | fWeights |
The weights of this layer. | |
size_t | fWidth |
Number of neurons of this layer. | |
#include <TMVA/DNN/Layer.h>
using TMVA::DNN::TLayer< Architecture_t >::Matrix_t = typename Architecture_t::Matrix_t |
using TMVA::DNN::TLayer< Architecture_t >::Scalar_t = typename Architecture_t::Scalar_t |
using TMVA::DNN::TLayer< Architecture_t >::Tensor_t = typename Architecture_t::Tensor_t |
TMVA::DNN::TLayer< Architecture_t >::TLayer ( size_t BatchSize, size_t InputWidth, size_t Width, EActivationFunction f, Scalar_t dropoutProbability )
TMVA::DNN::TLayer< Architecture_t >::TLayer ( const TLayer< Architecture_t > & layer )
Compute activation of the layer for the given input.
The input must be in matrix form with the different rows corresponding to different events in the batch. Computes activations as well as the first partial derivative of the activation function at those activations.
auto TMVA::DNN::TLayer< Architecture_t >::Initialize ( EInitialization m )
void TMVA::DNN::TLayer< Architecture_t >::Print ( ) const