In this post I am going to give a brief overview of a few of the common optimization techniques used in training a neural network, from simple classification problems to deep learning. As we know, the critical part of a classification algorithm is optimizing the loss (objective) function in order to learn the correct parameters of the model. The type of the objective function (convex, non-convex, constrained, unconstrained, etc.), along with the method used to optimize it, decides how accurately and efficiently we compute the parameters. In the context of neural networks, the loss function could be a squared error loss or the logistic loss (cross entropy).
Following is the cross-entropy loss function for a classification problem with m output nodes (an 'm'-class problem):

$$L(w) = -\sum_{q=1}^{Q} \sum_{j=1}^{m} y_{qj} \log \hat{y}_{qj}$$

where the outer sum is over all Q training instances. Here $y_{qj}$ are the actual outputs (0 or 1; out of the 'm' output nodes only one node will have an actual output of 1, the rest will have an actual output of 0), $\hat{y}_{qj}$ are the corresponding predicted probabilities from the 'm' output nodes, and $w_{ij}$ are the weights of the neural network, connecting the i-th node in the previous layer to the j-th node in the next layer. We need to minimize the above loss function in order to compute the optimal weights.
The predicted probabilities $\hat{y}_j$ are the probabilities for predicting each of the 'm' class labels, given by the softmax function (in multi-class problems), i.e.

$$\hat{y}_j = \frac{e^{z_j}}{\sum_{k=1}^{m} e^{z_k}}$$

where $z_j$ are the inputs to the output layer.
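As a quick illustration, here is a minimal NumPy sketch of the softmax; the function name is mine, not from any particular library:

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # the result is unchanged because softmax is shift-invariant.
    e = np.exp(np.asarray(z, dtype=float) - np.max(z))
    return e / e.sum()

probs = softmax([2.0, 1.0, 0.1])  # predicted probabilities for m = 3 classes
```

The probabilities always sum to 1, and larger inputs to the output layer get larger probabilities.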
In order to solve the optimization problem, the most common class of techniques deployed is the line search method. In a line search method, in order to minimize a function f(x), we iteratively approach the actual solution $x^*$: starting with some random solution $x_0$, in each iteration t we find a direction $p_t$ and a step size $\alpha_t$ to take along that direction, which takes us closer to the actual solution, i.e.

$$x_{t+1} = x_t + \alpha_t p_t$$

The step size $\alpha_t$ is chosen in such a way that $f(x_t + \alpha_t p_t)$ is minimized.
Continue to repeat the above procedure by choosing a new direction to optimize in each iteration and then computing the step size for the chosen direction. The step size can be fixed for all iterations or can be computed dynamically in each iteration.
Repeat the above procedure until convergence is reached, i.e.

$$|f(x_{t+1}) - f(x_t)| < \epsilon$$

for some error bound $\epsilon$.
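To make the iteration concrete, here is a minimal sketch of this scheme with the direction fixed to the negative gradient and a constant step size, applied to the toy function f(x) = (x - 3)^2; the function and all names are illustrative assumptions:

```python
def line_search_minimize(f, grad, x0, step=0.1, eps=1e-10, max_iter=10000):
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)       # direction p_t = -f'(x_t), constant step
        if abs(f(x_new) - f(x)) < eps:   # stop when |f(x_{t+1}) - f(x_t)| < eps
            return x_new
        x = x_new
    return x

x_star = line_search_minimize(lambda x: (x - 3.0) ** 2,
                              lambda x: 2.0 * (x - 3.0), x0=0.0)
```

For this convex toy function the iterates converge to the minimum at x = 3.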
The algorithms we are going to discuss in this post are all variants of the line search algorithm; they differ only in how the search direction and the step sizes are chosen. The algorithms are Gradient Descent and its variants (SGD, Mini-Batch SGD, Momentum, AdaGrad, Adam, etc.), the Conjugate Gradient algorithm and the L-BFGS optimization algorithm.
Gradient Descent

Given a function f(x), its gradient f'(x) gives the direction of greatest increase of the function; thus the negative of the gradient gives the direction in which the function f(x) decreases the most. The gradient descent algorithm uses this information to update the weight parameters of the neural network. The step size, or the learning rate, in gradient descent is usually kept at a constant value $\eta$:

$$w_{t+1} = w_t - \eta \nabla_w L(w_t)$$

where w is the vector of weights and $\nabla_w L$ is the vector of gradients of the loss w.r.t. each weight. Here $w_{t+1}$ is the updated value of the weights in the current iteration and $w_t$ is the value from the previous iteration. The direction of the greatest decrease in the loss function L w.r.t. the weights w is given by $-\nabla_w L$.
Note that if the shape of the loss L near the current solution is an upward-opening parabola, as shown at the point E in the diagram below, then as the weight increases (moving right along the weights axis) the loss decreases until we hit the minimum. Thus at the current solution $w_t$ the gradient of the loss is negative, and so the weight increases in the next iteration, lowering the loss.
For the weights in the last layer, the value of the gradient is obtained to be (as derived in these notes):

$$\frac{\partial L}{\partial w_{ij}} = \sum_{q=1}^{Q} (\hat{y}_{qj} - y_{qj})\, h_{qi}$$

where $w_{ij}$ is the weight connecting the i-th node in the last hidden layer to the j-th node in the output layer and $h_{qi}$ is the output from the i-th hidden node for instance q. Thus the update equation for the weights becomes:

$$w_{ij} \leftarrow w_{ij} - \eta \sum_{q=1}^{Q} (\hat{y}_{qj} - y_{qj})\, h_{qi}$$

where the summation $\sum_{q=1}^{Q}$ is over all training instances.
The weights in the earlier layers are updated with the backpropagation algorithm (which we will not go into in detail in this post). Note that to update each weight, we first need to compute the gradient over all Q examples before any weight gets updated. This can be slow.
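Putting the pieces together, here is a sketch of one full-batch update of the output-layer weights using the gradient above; the array shapes and function names are my own assumptions:

```python
import numpy as np

def cross_entropy(W, H, Y):
    # H: (Q, d) hidden-layer outputs, W: (d, m) weights, Y: (Q, m) one-hot labels
    Z = H @ W
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    Yhat = E / E.sum(axis=1, keepdims=True)      # softmax probabilities
    return -np.sum(Y * np.log(Yhat + 1e-12))

def batch_gd_step(W, H, Y, lr=0.5):
    Z = H @ W
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    Yhat = E / E.sum(axis=1, keepdims=True)
    grad = H.T @ (Yhat - Y)     # dL/dw_ij = sum over q of (yhat_qj - y_qj) * h_qi
    return W - lr * grad        # one update after seeing all Q instances

W0 = np.zeros((2, 2))
H = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.eye(2)
W1 = batch_gd_step(W0, H, Y)
```

A single full-batch step lowers the cross-entropy loss on this toy two-instance problem.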
In the Stochastic Gradient Descent (SGD) variant, instead of computing the gradients over all the examples, we randomly sample a single example and do the gradient update with it.

SGD can easily be run in an online manner, based on single examples only, and is thus really helpful for real-time machine learning.
With only a single example there can be a lot of fluctuation in the weight updates: some weights might converge to a local minimum while some may not converge at all. Batch gradient descent works very well for convex functions or functions with very few local minima, whereas for a function with many local minima, the zig-zag search of SGD might actually help us reach the global minimum faster.
The first contour plot below represents learning with batch gradient descent, whereas the one below it represents learning with stochastic gradient descent.
In yet another variant, instead of considering only a single example as in SGD, we consider a mini-batch of examples randomly sampled from the instances:

$$w \leftarrow w - \eta \sum_{q \in B} \nabla_w L_q(w)$$

where $B$ indicates that the summation is over a subset of examples randomly sampled from the set Q. With mini-batch stochastic gradient descent, convergence is much more stable and fast. Moreover, mini-batch SGD can use efficient matrix and vector libraries for faster computation compared to per-example SGD. There is a nice comparison of mini-batch SGD vs. SGD.
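A minimal sketch of the mini-batch sampling loop, on a toy scalar problem (minimizing the mean squared distance to a set of points, whose optimum is their mean); all names are illustrative:

```python
import random

def minibatch_sgd(grad_one, data, w0, lr=0.1, batch_size=2, epochs=100, seed=0):
    """Mini-batch SGD on a scalar parameter w; grad_one(w, x) is the
    per-example gradient. Batches are formed by shuffling each epoch."""
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        idx = list(range(len(data)))
        rng.shuffle(idx)                     # random sampling without replacement
        for start in range(0, len(idx), batch_size):
            batch = [data[i] for i in idx[start:start + batch_size]]
            g = sum(grad_one(w, x) for x in batch)   # gradient summed over the batch
            w -= lr * g
    return w

# toy problem: minimize sum of (w - x)^2 / 2 over the points; optimum is the mean 2.5
points = [1.0, 2.0, 3.0, 4.0]
w_hat = minibatch_sgd(lambda w, x: (w - x), points, w0=0.0, lr=0.05)
```

With a fixed learning rate the iterate keeps fluctuating inside a small band around the optimum rather than settling exactly on it, which previews the drawback discussed next.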
A major drawback of the SGD approach is that the learning rate (or step size) is fixed. With a high learning rate the algorithm may not converge at all, whereas with a small one the algorithm takes a long time to converge, if it converges at all.
SGD with Momentum
The idea of utilizing momentum to speed up SGD is similar to a ball rolling down a hill. As the ball rolls down the hill, it gains further momentum to roll down in the same direction at a greater speed.
In the equation for SGD, we add another term: the update from the previous iteration, scaled by $\gamma$, the momentum parameter:

$$v_t = \gamma v_{t-1} + \eta \nabla_w L(w_t), \qquad w_{t+1} = w_t - v_t$$

Usually the momentum parameter is kept somewhere around 0.9.
If in the previous iteration the weight increased, i.e. $w_t > w_{t-1}$, which means we are going 'downhill' (lowering the loss), and in the current iteration the gradient is again negative, i.e. $\nabla_w L(w_t) < 0$, then the increase in the weight is higher compared to that without momentum (the effective learning rate has increased), i.e. we are moving 'downhill' at a greater speed.
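The momentum update can be sketched as follows on the toy loss L(w) = w^2, whose gradient is 2w; the names are illustrative:

```python
def momentum_step(w, v, grad, lr=0.1, gamma=0.9):
    v = gamma * v + lr * grad   # v_t = gamma * v_{t-1} + eta * gradient
    return w - v, v             # w_{t+1} = w_t - v_t

# descend L(w) = w**2 from w = 5.0
w, v = 5.0, 0.0
for _ in range(300):
    w, v = momentum_step(w, v, grad=2.0 * w)
```

The velocity term lets successive updates in the same direction reinforce each other, so the iterate approaches the minimum at w = 0 faster than plain SGD with the same learning rate.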
The momentum based approach can be improved further using the Nesterov Accelerated Gradient (NAG) Descent technique. This technique is similar to momentum but differs in how the update step is computed:

$$v_t = \gamma v_{t-1} + \eta \nabla_w L(w_t - \gamma v_{t-1}), \qquad w_{t+1} = w_t - v_t$$
Intuitively, what NAG does is similar to momentum (the blue arrows): it uses the previous update to make the current update slower or faster. The method looks at the gradient at the point we would reach from the current point if we took the same step as in the previous iteration. If the gradient at this look-ahead point is still negative, we take a larger stride in the same direction (implying we are good to keep moving that way), depending on "how far we are good to go" based on the magnitude of the gradient $\nabla_w L(w_t - \gamma v_{t-1})$; if it is positive, we either need to slow down the learning rate or take a step back from the current position and "re-consider" our steps (implying possible "danger").
The brown arrow represents the initial momentum step taken by NAG, $\gamma v_{t-1}$, the red arrow represents the gradient correction step, $\eta \nabla_w L(w_t - \gamma v_{t-1})$, and the green arrows represent the resulting "good to go" steps.
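NAG differs from momentum only in where the gradient is evaluated: at the look-ahead point w - gamma*v rather than at w. A sketch on the same toy loss L(w) = w^2, with illustrative names:

```python
def nag_step(w, v, grad_fn, lr=0.1, gamma=0.9):
    lookahead = w - gamma * v               # point reached by repeating the last step
    v = gamma * v + lr * grad_fn(lookahead) # gradient evaluated at the look-ahead
    return w - v, v

w, v = 5.0, 0.0
for _ in range(200):
    w, v = nag_step(w, v, grad_fn=lambda u: 2.0 * u)   # gradient of L(w) = w**2
```

Evaluating the gradient ahead of the current point damps the overshooting that plain momentum exhibits near a minimum.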
Adaptive Gradient (AdaGrad)
Consider the input layer of the neural network, where there is a weight for each input node for the network. Given a classification problem, where the feature matrix is sparse (common for text classification) i.e. most instances do not contain most of the features, some features might be frequent across instances whereas some features might be too infrequent.
During the weight updates with backpropagation, the weights for the frequent features will be updated frequently whereas the weights for the infrequent features will be updated much less frequently. Due to this, some important but infrequent features may not converge towards their correct weights. To overcome this, the learning rate is dynamically increased for the infrequent weights, whereas it is decreased for the frequent ones.
In the equation for gradient descent, the constant learning rate is replaced by a per-weight learning rate:

$$w_{t+1,i} = w_{t,i} - \frac{\eta}{\sqrt{G_{t,i}} + \epsilon}\, g_{t,i}$$

where $g_{t,i}$ is the gradient of the loss w.r.t. weight i at iteration t, and $G_{t,i}$ is the sum of squares of the gradients for that weight from iteration 1 to (t-1), i.e.

$$G_{t,i} = \sum_{\tau=1}^{t-1} g_{\tau,i}^2$$

and $\epsilon$ is a small positive constant that prevents division by zero.
$G_{t,i}$ will be lower for infrequent weights, so the effective learning rate will be higher for these weights, and vice versa. This largely removes the drawback of a constant learning rate. The problem with AdaGrad is that for frequent but important weights, $G_{t,i}$ will be high, so the steps taken in the current iteration will be very small; with each iteration the step size shrinks further, and these weights may not converge at all.
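A sketch of the AdaGrad update for a single weight, assuming G accumulates the squared gradients for that weight as described; all names are illustrative:

```python
import math

def adagrad_step(w, G, grad, lr=0.5, eps=1e-8):
    G = G + grad * grad                        # accumulate squared gradients
    w = w - lr * grad / (math.sqrt(G) + eps)   # per-weight effective learning rate
    return w, G

# descend L(w) = w**2 (gradient 2w) from w = 5.0
w, G = 5.0, 0.0
for _ in range(500):
    w, G = adagrad_step(w, G, grad=2.0 * w)
```

Because G only ever grows, the effective step keeps shrinking, which is exactly the weakness addressed by AdaDelta below.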
To overcome this issue, we use a variant called AdaDelta. Instead of accumulating the sum of the squares of all past gradients, AdaDelta keeps a decayed (exponentially weighted) root mean square of the previous gradients. And instead of having to start with some constant learning rate $\eta$, AdaDelta replaces it with the root mean square of the previous step sizes. The AdaDelta algorithm looks like:

$$E[g^2]_t = \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2$$

$$\Delta w_t = -\frac{\sqrt{E[\Delta w^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\, g_t$$

$$E[\Delta w^2]_t = \rho\, E[\Delta w^2]_{t-1} + (1-\rho)\, \Delta w_t^2$$

$$w_{t+1} = w_t + \Delta w_t$$
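The AdaDelta update (decayed RMS of gradients in the denominator, decayed RMS of past steps in the numerator) can be sketched directly for a single weight; note how the very first step is tiny, on the order of sqrt(epsilon), because both running averages start at zero. Names are illustrative:

```python
def adadelta_step(w, Eg2, Edw2, grad, rho=0.95, eps=1e-6):
    Eg2 = rho * Eg2 + (1 - rho) * grad * grad                 # decayed E[g^2]
    dw = -((Edw2 + eps) ** 0.5 / (Eg2 + eps) ** 0.5) * grad   # RMS-ratio step
    Edw2 = rho * Edw2 + (1 - rho) * dw * dw                   # decayed E[dw^2]
    return w + dw, Eg2, Edw2

# one update from w = 5.0 with gradient 10 and zero-initialized accumulators
w1, Eg2, Edw2 = adadelta_step(5.0, 0.0, 0.0, grad=10.0)
```

No global learning rate appears anywhere; the step size is determined entirely by the two running averages.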
There are a few other variants of the gradient descent approach that solve the problem of SGD with adaptive learning rates (especially Adam). There is a nice overview of all these methods in the following paper: "An overview of gradient descent optimization algorithms".
Conjugate Gradient Algorithm
Instead of keeping a constant learning rate in the line search gradient descent algorithm, we can optimize the learning rate in each iteration to minimize the loss. If we do this, we can show that each direction followed by the gradient descent algorithm is orthogonal to the previous one. If $\alpha_t$ exactly minimizes $f(x_t + \alpha p_t)$, then

$$\frac{d}{d\alpha} f(x_t + \alpha p_t) = 0$$

which, using the chain rule, implies $\nabla f(x_{t+1})^T p_t = 0$, or, since $p_t = -\nabla f(x_t)$,

$$\nabla f(x_{t+1})^T \nabla f(x_t) = 0$$

i.e. the gradients of two consecutive updates are orthogonal. This means that the path followed towards the minimum has lots of zig-zags and is not optimal, as we end up traversing the same directions multiple times.
In the conjugate gradient algorithm, each direction taken in the line search is A-conjugate to all the previous directions, which prevents searching any one direction multiple times once we have found it unhelpful. Thus it converges a lot faster than the gradient descent algorithm. Two vectors u and v are said to be A-conjugate w.r.t. each other if, for an SPD (Symmetric Positive Definite) matrix A, we have

$$u^T A v = 0$$
In conjugate gradient, the neural network weight updates are given as:

$$w_{t+1} = w_t + \alpha_t p_t$$

where the line search directions are given by $p_t$ and the step sizes by $\alpha_t$ (compare this with the line search direction and step size in gradient descent). Additionally, each direction $p_t$ is conjugate to the previous directions w.r.t. a symmetric positive definite matrix.
We start with some initial guess for the weights, $w_0$. Then compute the gradient $\nabla_w L(w_0)$. The initial direction is set to the negative of the gradient, i.e. $p_0 = -\nabla_w L(w_0)$.

The optimum value of the step size $\alpha_t$ is computed by (approximately) minimizing the loss w.r.t. $\alpha$, i.e.

$$\alpha_t = \arg\min_{\alpha} L(w_t + \alpha p_t)$$

The updated weight is then computed as:

$$w_{t+1} = w_t + \alpha_t p_t$$

The direction for the next update is chosen by setting:

$$p_{t+1} = r_{t+1} + \beta_t p_t$$

where $r_{t+1} = -\nabla_w L(w_{t+1})$.
Intuitively, this means that instead of choosing the new direction to be the negative gradient as in gradient descent, in conjugate gradient we choose a direction somewhere between the negative gradient and the previous search direction.
The value of $\beta_t$ can be computed using, for example, the Fletcher–Reeves formula:

$$\beta_t = \frac{r_{t+1}^T r_{t+1}}{r_t^T r_t}$$
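For a quadratic objective L(x) = 0.5 x^T A x - b^T x, the exact line-search step size and the Fletcher–Reeves coefficient have closed forms, and the method reaches the exact minimum in at most n steps. A sketch with illustrative names:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize 0.5*x^T A x - b^T x for symmetric positive definite A.
    Successive directions are A-conjugate, so convergence in <= n steps."""
    x = x0.astype(float)
    r = b - A @ x          # negative gradient (residual) at x
    p = r.copy()           # first direction: steepest descent
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line-search step size
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        p = r_new + beta * p              # new direction: residual + beta * old
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = conjugate_gradient(A, b, np.zeros(2))
```

Minimizing this quadratic is equivalent to solving the linear system A x = b, so the result can be checked directly.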
Limited Memory BFGS
Both gradient descent and conjugate gradient are first order methods, i.e. they compute the new search directions based on the first order gradient of the loss function w.r.t. the weights. Newton's method class of optimization computes the new search directions using the second order derivatives (the Hessian) of the loss function. For example, if a function f is twice differentiable, then using the Taylor series expansion:

$$f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x + \frac{1}{2} f''(x)\,\Delta x^2$$

For a multi-dimensional input vector X,

$$f(X + \Delta X) \approx f(X) + \nabla f(X)^T \Delta X + \frac{1}{2}\,\Delta X^T H\,\Delta X$$

i.e. we are approximating the objective function around a point X using just the first order and second order derivatives of the function f. Here $\nabla f(X)$ is the gradient vector and H is the Hessian matrix, where

$$H_{ij} = \frac{\partial^2 f}{\partial x_i\, \partial x_j}$$
In order to minimize the objective, we compute the derivative of this approximation w.r.t. the change $\Delta X$ and set it to 0, i.e.

$$\nabla f(X) + H\,\Delta X = 0$$

Solving for the optimal step needed to minimize the objective f, we get:

$$\Delta X = -H^{-1} \nabla f(X)$$
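Since the step above exactly minimizes the quadratic model, Newton's method lands on the minimum of a truly quadratic objective in a single step. A sketch, with illustrative names:

```python
import numpy as np

def newton_step(x, grad_fn, hess_fn):
    # solve H * delta = -g rather than explicitly inverting the Hessian
    return x - np.linalg.solve(hess_fn(x), grad_fn(x))

# quadratic objective f(x) = 0.5 x^T A x - b^T x: gradient A x - b, Hessian A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x1 = newton_step(np.zeros(2), grad_fn=lambda x: A @ x - b, hess_fn=lambda x: A)
```

One step from the origin solves A x = b exactly; for non-quadratic losses the step is only locally accurate, which is why a line search over the step size is still used.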
In the case of a neural network, the weight update translates into:

$$w_{t+1} = w_t - \alpha_t H_t^{-1} \nabla_w L(w_t)$$

where $\alpha_t$ is the step size for the line search as earlier and $p_t = -H_t^{-1} \nabla_w L(w_t)$ is the new direction for the line search; $H_t^{-1}$ is the inverse of the Hessian matrix and $\nabla_w L(w_t)$ is the gradient vector, both evaluated at the current iterate. The step sizes can be computed by backtracking line search or using the Wolfe conditions.
Computing the Hessian matrix in each iteration can be a very expensive operation, given that there could potentially be billions of weights to update in a very deep neural network architecture, and the Hessian would be of order $N \times N$ for N weights. In the quasi-Newton approach, instead of computing the exact Hessian in each iteration, we approximate each Hessian from the previous one.
Without getting into the mathematical details, to compute the inverse Hessian $H_{t+1}^{-1}$ from $H_t^{-1}$, we need to solve the below constrained optimization problem:

$$\min_{H^{-1}} \left\| H^{-1} - H_t^{-1} \right\| \quad \text{subject to} \quad H^{-1} y_t = s_t, \; H^{-1} \text{ symmetric}$$

where $s_t = w_{t+1} - w_t$ and $y_t = \nabla_w L(w_{t+1}) - \nabla_w L(w_t)$. Solving for the optimum value, we obtain the update for $H_{t+1}^{-1}$:

$$H_{t+1}^{-1} = \left(I - \rho_t s_t y_t^T\right) H_t^{-1} \left(I - \rho_t y_t s_t^T\right) + \rho_t s_t s_t^T, \qquad \rho_t = \frac{1}{y_t^T s_t}$$
The above method is known as the BFGS algorithm. We can choose the initial inverse Hessian to be the identity matrix I. Given the initial inverse Hessian, we can compute the inverse Hessian at the t-th time step using the above recursive relation, but we need to store all the values of $s$ and $y$ from time steps 1 to t in order to reconstruct $H_t^{-1}$ from $H_0^{-1}$.
Storing the vectors $s$ and $y$ from time step 1 to t can take up a considerable amount of memory. To overcome this, we store only the last 'm' updates of $s$ and $y$ and reconstruct the current inverse Hessian implicitly from them. This modification is known as L-BFGS, or Limited-Memory BFGS.
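In practice L-BFGS never forms the inverse Hessian explicitly; the search direction $-H^{-1} g$ is computed from the stored (s, y) pairs with the classic two-loop recursion. A sketch, where the function name and the history layout are my own assumptions:

```python
import numpy as np

def lbfgs_direction(g, history):
    # history: list of (s_k, y_k) pairs, oldest first, at most m entries
    q = g.astype(float).copy()
    alphas = []
    for s, y in reversed(history):            # first loop: newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho))
        q -= a * y
    if history:                               # initial scaling H0 = gamma * I
        s, y = history[-1]
        q *= (s @ y) / (y @ y)
    for (a, rho), (s, y) in zip(reversed(alphas), history):  # second loop: oldest first
        beta = rho * (y @ q)
        q += (a - beta) * s
    return -q                                 # the quasi-Newton direction -H_inv @ g

# with a single stored pair (s, y = 2*s), consistent with Hessian H = 2*I,
# the recursion recovers the exact Newton direction -g/2
s = np.array([1.0, 0.0])
d = lbfgs_direction(np.array([3.0, 4.0]), [(s, 2.0 * s)])
```

Each direction costs only O(m n) vector operations, versus O(n^2) for a dense inverse-Hessian product, which is what makes L-BFGS practical for large weight vectors.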
A very nice comparison between SGD, CG and L-BFGS can be found in the following paper; the authors discuss which method to use under what conditions.
Categories: MACHINE LEARNING