Implementation of the squared loss function.
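As a rough illustration, here is a minimal Rust sketch of a squared loss for a linear model, assuming the prediction is the raw dot product of weights and features and the common 0.5 * (p - y)^2 form of the loss; the function names are illustrative, not the library's API.

```rust
/// Dot product of a weight vector and a feature vector (illustrative helper).
fn dot(w: &[f64], x: &[f64]) -> f64 {
    w.iter().zip(x).map(|(wi, xi)| wi * xi).sum()
}

/// Squared loss between a prediction and the true target: 0.5 * (p - y)^2.
fn squared_loss(prediction: f64, truth: f64) -> f64 {
    0.5 * (prediction - truth).powi(2)
}

/// Gradient of the squared loss with respect to the weights: (p - y) * x.
fn squared_loss_gradient(prediction: f64, truth: f64, x: &[f64]) -> Vec<f64> {
    x.iter().map(|xi| (prediction - truth) * xi).collect()
}

fn main() {
    let (w, x, y) = (vec![0.5, -1.0], vec![2.0, 1.0], 1.0);
    let p = dot(&w, &x);                                       // 0.0
    println!("loss = {}", squared_loss(p, y));                 // 0.5
    println!("grad = {:?}", squared_loss_gradient(p, y, &x));  // [-2.0, -1.0]
}
```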
The L1Regularizer gradient returns the sign of the weight as the gradient for the regularization.
The L2Regularizer gradient returns the weight as the gradient for the regularization.
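For illustration, a minimal Rust sketch of these two gradients, assuming the weights are stored as a slice of f64 and scaled by a regularization strength `lambda`; the function names are hypothetical.

```rust
/// Subgradient of lambda * |w| for each weight: lambda * sign(w), taking 0 at w == 0.
fn l1_gradient(weights: &[f64], lambda: f64) -> Vec<f64> {
    weights
        .iter()
        .map(|&w| if w == 0.0 { 0.0 } else { lambda * w.signum() })
        .collect()
}

/// Gradient of (lambda / 2) * w^2 for each weight: lambda * w.
fn l2_gradient(weights: &[f64], lambda: f64) -> Vec<f64> {
    weights.iter().map(|w| lambda * w).collect()
}

fn main() {
    let w = [0.5, -2.0, 0.0];
    println!("L1: {:?}", l1_gradient(&w, 0.1)); // [0.1, -0.1, 0.0]
    println!("L2: {:?}", l2_gradient(&w, 0.1)); // [0.05, -0.2, 0.0]
}
```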
A Model trait defines the operations required of any learning model.
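A hypothetical Rust sketch of what such a trait could look like, with `predict` and `update` as assumed method names rather than the library's actual signatures:

```rust
trait Model {
    /// Predict an output for a single feature vector.
    fn predict(&self, features: &[f64]) -> f64;
    /// Apply one gradient step with the given learning rate.
    fn update(&mut self, gradient: &[f64], learning_rate: f64);
}

/// A toy linear model implementing the trait, for illustration only.
struct Linear {
    weights: Vec<f64>,
}

impl Model for Linear {
    fn predict(&self, features: &[f64]) -> f64 {
        self.weights.iter().zip(features).map(|(w, x)| w * x).sum()
    }
    fn update(&mut self, gradient: &[f64], learning_rate: f64) {
        for (w, g) in self.weights.iter_mut().zip(gradient) {
            *w -= learning_rate * g;
        }
    }
}

fn main() {
    let mut model = Linear { weights: vec![0.0, 0.0] };
    model.update(&[-2.0, -4.0], 0.25);          // weights become [0.5, 1.0]
    println!("{}", model.predict(&[1.0, 1.0])); // 1.5
}
```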
Implementation of the logistic loss function.
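A minimal Rust sketch of the logistic loss, assuming labels in {-1, +1} and a raw linear score z = w · x; the names and the label convention are assumptions.

```rust
/// Logistic loss: ln(1 + exp(-y * z)).
fn logistic_loss(score: f64, truth: f64) -> f64 {
    (-truth * score).exp().ln_1p()
}

/// Gradient of the logistic loss with respect to the score: -y / (1 + exp(y * z)).
fn logistic_loss_gradient(score: f64, truth: f64) -> f64 {
    -truth / (1.0 + (truth * score).exp())
}

fn main() {
    println!("{}", logistic_loss(0.0, 1.0));          // ln(2), about 0.693
    println!("{}", logistic_loss_gradient(0.0, 1.0)); // -0.5
}
```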
A Loss trait defines the operations needed to compute the loss function, the prediction function, and the gradient for use in a LinearModel.
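A hypothetical Rust sketch of such a Loss trait, exposing the three operations named above and using the squared loss as one possible implementor; the exact signatures are assumptions made for illustration.

```rust
trait Loss {
    /// Loss between a raw linear score and the true target.
    fn loss(&self, score: f64, truth: f64) -> f64;
    /// Turn a raw linear score into a prediction (e.g. identity or sigmoid).
    fn prediction(&self, score: f64) -> f64;
    /// Gradient of the loss with respect to the raw linear score.
    fn gradient(&self, score: f64, truth: f64) -> f64;
}

/// Squared loss as an example implementor.
struct SquaredLoss;

impl Loss for SquaredLoss {
    fn loss(&self, score: f64, truth: f64) -> f64 {
        0.5 * (score - truth).powi(2)
    }
    fn prediction(&self, score: f64) -> f64 {
        score // regression: the raw score is the prediction
    }
    fn gradient(&self, score: f64, truth: f64) -> f64 {
        score - truth
    }
}

fn main() {
    let l = SquaredLoss;
    println!("{} {}", l.loss(2.0, 1.0), l.gradient(2.0, 1.0)); // 0.5 1
}
```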
Implementation of the perceptron loss function.
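A minimal Rust sketch of the perceptron loss, again assuming labels in {-1, +1} and a raw linear score z = w · x; the names are illustrative.

```rust
/// Perceptron loss: max(0, -y * z).
fn perceptron_loss(score: f64, truth: f64) -> f64 {
    (-truth * score).max(0.0)
}

/// Gradient with respect to the score: -y on a misclassified (or zero-margin)
/// example, 0 otherwise.
fn perceptron_loss_gradient(score: f64, truth: f64) -> f64 {
    if truth * score <= 0.0 { -truth } else { 0.0 }
}

fn main() {
    println!("{}", perceptron_loss(-2.0, 1.0));          // 2
    println!("{}", perceptron_loss_gradient(-2.0, 1.0)); // -1
    println!("{}", perceptron_loss_gradient(3.0, 1.0));  // 0
}
```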
A Regularizer trait defines the gradient operation needed to train regularized models.
The ZeroRegularizer simply returns 0 as the gradient, applying no regularization.
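A hypothetical Rust sketch tying the Regularizer trait described above to the ZeroRegularizer; a per-weight gradient method is an assumption made for illustration, not the library's actual signature.

```rust
trait Regularizer {
    /// Regularization term contributed to the gradient for a single weight.
    fn gradient(&self, weight: f64) -> f64;
}

/// No regularization: the contributed gradient is always zero.
struct ZeroRegularizer;

impl Regularizer for ZeroRegularizer {
    fn gradient(&self, _weight: f64) -> f64 {
        0.0
    }
}

fn main() {
    let reg = ZeroRegularizer;
    assert_eq!(reg.gradient(3.5), 0.0); // unregularized: always 0
}
```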