template <typename AReal>
class TMVA::DNN::TReference<AReal>

The reference architecture class.

Class template that contains the reference implementation of the low-level interface for the DNN implementation. The reference implementation uses the TMatrixT class template to represent matrices.

Template Parameters
- AReal: the floating-point type used to represent scalars.

Definition at line 37 of file Reference.h.
Low-level functions required for the forward propagation of activations through the network.

static void MultiplyTranspose (TMatrixT< Scalar_t > &output, const TMatrixT< Scalar_t > &input, const TMatrixT< Scalar_t > &weights)
    Matrix-multiply input with the transpose of weights and write the result into output.

static void AddRowWise (TMatrixT< Scalar_t > &output, const TMatrixT< Scalar_t > &biases)
    Add the vector biases row-wise to the matrix output.
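Taken together, these two routines implement the affine transformation of a layer: multiply the input batch by the transpose of the weight matrix, then add the bias row to every row of the result. A minimal sketch of that semantics on plain nested vectors (the actual class operates on TMatrixT; the helper names here are illustrative, not part of the interface):

```cpp
#include <vector>
#include <cstddef>

using Matrix = std::vector<std::vector<double>>;

// output = input * transpose(weights); input is (batch x nIn),
// weights is (nOut x nIn), so output is (batch x nOut).
Matrix multiplyTranspose(const Matrix &input, const Matrix &weights) {
    Matrix output(input.size(), std::vector<double>(weights.size(), 0.0));
    for (std::size_t i = 0; i < input.size(); ++i)
        for (std::size_t j = 0; j < weights.size(); ++j)
            for (std::size_t k = 0; k < input[i].size(); ++k)
                output[i][j] += input[i][k] * weights[j][k];
    return output;
}

// Add the bias vector to every row of output, as AddRowWise does.
void addRowWise(Matrix &output, const std::vector<double> &biases) {
    for (auto &row : output)
        for (std::size_t j = 0; j < row.size(); ++j)
            row[j] += biases[j];
}
```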
Low-level functions required for the backward propagation of gradients through the network.

static void Backward (TMatrixT< Scalar_t > &activationGradientsBackward, TMatrixT< Scalar_t > &weightGradients, TMatrixT< Scalar_t > &biasGradients, TMatrixT< Scalar_t > &df, const TMatrixT< Scalar_t > &activationGradients, const TMatrixT< Scalar_t > &weights, const TMatrixT< Scalar_t > &activationBackward)
    Perform the complete backward propagation step.

static void ScaleAdd (TMatrixT< Scalar_t > &A, const TMatrixT< Scalar_t > &B, Scalar_t beta=1.0)
    Add the elements in matrix B, scaled by beta, to the elements in matrix A.

static void Copy (TMatrixT< Scalar_t > &A, const TMatrixT< Scalar_t > &B)
    Copy the elements of matrix B into matrix A.
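ScaleAdd is the element-wise accumulation primitive used, for example, to add gradient contributions: A += beta * B. A minimal sketch of that semantics on plain nested vectors (the function name and Matrix alias are illustrative; the actual routine operates on TMatrixT):

```cpp
#include <vector>
#include <cstddef>

using Matrix = std::vector<std::vector<double>>;

// A += beta * B, element-wise, mirroring the semantics of ScaleAdd.
void scaleAdd(Matrix &A, const Matrix &B, double beta = 1.0) {
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < A[i].size(); ++j)
            A[i][j] += beta * B[i][j];
}
```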
For each activation function, the low-level interface contains two routines: one that applies the activation function to a matrix in place, and one that evaluates the derivative of the activation function at the elements of a given matrix and writes the results into the result matrix.

static void Identity (TMatrixT< AReal > &B)
static void IdentityDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
static void Relu (TMatrixT< AReal > &B)
static void ReluDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
static void Sigmoid (TMatrixT< AReal > &B)
static void SigmoidDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
static void Tanh (TMatrixT< AReal > &B)
static void TanhDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
static void SymmetricRelu (TMatrixT< AReal > &B)
static void SymmetricReluDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
static void SoftSign (TMatrixT< AReal > &B)
static void SoftSignDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
static void Gauss (TMatrixT< AReal > &B)
static void GaussDerivative (TMatrixT< AReal > &B, const TMatrixT< AReal > &A)
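The pair pattern is the same for every activation. Using ReLU as the example, a sketch of the two-routine convention on plain nested vectors (illustrative helpers, assuming the standard definitions f(x) = max(0, x) and f'(x) = 1 for x > 0, else 0):

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

using Matrix = std::vector<std::vector<double>>;

// In-place application, following the pattern of Relu(B).
void relu(Matrix &B) {
    for (auto &row : B)
        for (auto &x : row)
            x = std::max(x, 0.0);
}

// Evaluate f' at the elements of A and write the results into B,
// following the pattern of ReluDerivative(B, A).
void reluDerivative(Matrix &B, const Matrix &A) {
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < A[i].size(); ++j)
            B[i][j] = A[i][j] > 0.0 ? 1.0 : 0.0;
}
```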
Loss functions compute a scalar value, given the output of the network for a given training input and the expected network prediction Y, that quantifies the quality of the prediction.

For each loss function, a routine that computes the gradients (suffixed by Gradients) must also be provided as the starting point of the backpropagation algorithm.

static AReal MeanSquaredError (const TMatrixT< AReal > &Y, const TMatrixT< AReal > &output)
static void MeanSquaredErrorGradients (TMatrixT< AReal > &dY, const TMatrixT< AReal > &Y, const TMatrixT< AReal > &output)
static AReal CrossEntropy (const TMatrixT< AReal > &Y, const TMatrixT< AReal > &output)
    The sigmoid transformation is implicitly applied, so output should hold the linear activations of the last layer in the net.
static void CrossEntropyGradients (TMatrixT< AReal > &dY, const TMatrixT< AReal > &Y, const TMatrixT< AReal > &output)
static AReal SoftmaxCrossEntropy (const TMatrixT< AReal > &Y, const TMatrixT< AReal > &output)
    The softmax transformation is implicitly applied, so output should hold the linear activations of the last layer in the net.
static void SoftmaxCrossEntropyGradients (TMatrixT< AReal > &dY, const TMatrixT< AReal > &Y, const TMatrixT< AReal > &output)
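A sketch of the loss/gradient pairing for mean squared error on plain nested vectors. The normalization by the total number of matrix elements is an assumption of this sketch (conventions vary between normalizing by batch size and by total element count); the actual routines operate on TMatrixT:

```cpp
#include <vector>
#include <cstddef>

using Matrix = std::vector<std::vector<double>>;

// Mean squared error: (1/n) * sum over all elements of (output - Y)^2,
// where n is the total number of elements (an assumed convention).
double meanSquaredError(const Matrix &Y, const Matrix &output) {
    double sum = 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < Y.size(); ++i)
        for (std::size_t j = 0; j < Y[i].size(); ++j) {
            double d = output[i][j] - Y[i][j];
            sum += d * d;
            ++n;
        }
    return sum / static_cast<double>(n);
}

// Gradient of the loss above with respect to the network output,
// written into dY: 2 * (output - Y) / n.  This is the starting
// point of backpropagation.
void meanSquaredErrorGradients(Matrix &dY, const Matrix &Y, const Matrix &output) {
    std::size_t n = 0;
    for (const auto &row : Y) n += row.size();
    for (std::size_t i = 0; i < Y.size(); ++i)
        for (std::size_t j = 0; j < Y[i].size(); ++j)
            dY[i][j] = 2.0 * (output[i][j] - Y[i][j]) / static_cast<double>(n);
}
```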
Output functions transform the activations output of the output layer in the network into a valid prediction YHat for the desired usage of the network, e.g. the identity function for regression or the sigmoid transformation for two-class classification.

static void Sigmoid (TMatrixT< AReal > &YHat, const TMatrixT< AReal > &)
static void Softmax (TMatrixT< AReal > &YHat, const TMatrixT< AReal > &)
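For multi-class classification, the softmax output function maps each row of linear activations to a probability distribution. A sketch of that row-wise transformation on plain nested vectors (the max-subtraction for numerical stability is a standard trick assumed here, not documented behavior of the reference implementation):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

using Matrix = std::vector<std::vector<double>>;

// Row-wise softmax: YHat[i][j] = exp(A[i][j]) / sum_k exp(A[i][k]).
void softmax(Matrix &YHat, const Matrix &A) {
    YHat = A;
    for (auto &row : YHat) {
        double maxv = row[0];
        for (double x : row) maxv = std::max(maxv, x);  // numerical stability
        double sum = 0.0;
        for (double &x : row) { x = std::exp(x - maxv); sum += x; }
        for (double &x : row) x /= sum;  // each row now sums to 1
    }
}
```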
For each regularization type, two functions are required: one named <Type>Regularization that evaluates the corresponding regularization functional for a given weight matrix, and one named Add<Type>RegularizationGradients that adds the regularization component of the gradients to the provided matrix.

static AReal L1Regularization (const TMatrixT< AReal > &W)
static void AddL1RegularizationGradients (TMatrixT< AReal > &A, const TMatrixT< AReal > &W, AReal weightDecay)
static AReal L2Regularization (const TMatrixT< AReal > &W)
static void AddL2RegularizationGradients (TMatrixT< AReal > &A, const TMatrixT< AReal > &W, AReal weightDecay)
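A sketch of the L2 pair on plain nested vectors. The gradient factor 2 * weightDecay * W is the derivative of weightDecay * sum(w^2) and is an assumption of this sketch; conventions differ on whether the factor of 2 is absorbed into weightDecay:

```cpp
#include <vector>
#include <cstddef>

using Matrix = std::vector<std::vector<double>>;

// L2 functional: sum of squared weights, as L2Regularization evaluates.
double l2Regularization(const Matrix &W) {
    double sum = 0.0;
    for (const auto &row : W)
        for (double w : row) sum += w * w;
    return sum;
}

// Add the L2 gradient component 2 * weightDecay * W element-wise to A,
// following the pattern of AddL2RegularizationGradients.
void addL2RegularizationGradients(Matrix &A, const Matrix &W, double weightDecay) {
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < W[i].size(); ++j)
            A[i][j] += 2.0 * weightDecay * W[i][j];
}
```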
For each initialization method, one function in the low-level interface is provided. The naming scheme is Initialize<Type> for a given initialization method Type.

static void InitializeGauss (TMatrixT< AReal > &A)
static void InitializeUniform (TMatrixT< AReal > &A)
static void InitializeIdentity (TMatrixT< AReal > &A)
static void InitializeZero (TMatrixT< AReal > &A)
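The Gauss and Uniform variants fill the matrix with random draws; the two deterministic variants are straightforward. A sketch of the deterministic ones on plain nested vectors (illustrative helper names):

```cpp
#include <vector>
#include <cstddef>

using Matrix = std::vector<std::vector<double>>;

// Counterpart of InitializeZero: set every element to zero.
void initializeZero(Matrix &A) {
    for (auto &row : A)
        for (auto &x : row) x = 0.0;
}

// Counterpart of InitializeIdentity: ones on the diagonal, zeros elsewhere.
void initializeIdentity(Matrix &A) {
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < A[i].size(); ++j)
            A[i][j] = (i == j) ? 1.0 : 0.0;
}
```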
static void Dropout (TMatrixT< AReal > &A, AReal dropoutProbability)
    Apply dropout with activation probability p to the given matrix A and scale the result by the reciprocal of p.
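This is the "inverted dropout" convention: each element is kept with probability p and the kept elements are scaled by 1/p, so the expected activation is unchanged. A sketch on plain nested vectors; the explicit generator parameter is an addition of this sketch for reproducibility, not part of the actual interface:

```cpp
#include <vector>
#include <random>

using Matrix = std::vector<std::vector<double>>;

// Keep each element with probability p, scale kept elements by 1/p,
// and zero the rest, mirroring the semantics of Dropout.
void dropout(Matrix &A, double p, std::mt19937 &gen) {
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (auto &row : A)
        for (auto &x : row)
            x = (dist(gen) < p) ? x / p : 0.0;
}
```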