Coordinate descent is a derivative-free optimization algorithm. To find a local minimum of a function, one does line search along one coordinate direction at the current point in each iteration. One uses different coordinate directions cyclically throughout the procedure.
Description
Coordinate descent is based on the idea that the minimization of a multivariable function $F(\mathbf{x})$ can be achieved by minimizing it along one direction at a time, i.e., by solving univariate (or at least much simpler) optimization problems in a loop. In the simplest case of cyclic coordinate descent, one cyclically iterates through the directions, one at a time, minimizing the objective function with respect to each coordinate direction. That is, starting from initial variable values $\mathbf{x}^0 = (x_1^0, \ldots, x_n^0)$, round $k+1$ defines $\mathbf{x}^{k+1}$ from $\mathbf{x}^k$ by iteratively solving the single-variable optimization problems

$$x_i^{k+1} = \underset{y \in \mathbb{R}}{\arg\min}\; F\!\left(x_1^{k+1}, \ldots, x_{i-1}^{k+1},\, y,\, x_{i+1}^{k}, \ldots, x_n^{k}\right)$$

for each variable $x_i$ of $\mathbf{x}$, for $i$ from $1$ to $n$.

Thus, one begins with an initial guess $\mathbf{x}^0$ for a local minimum of $F$ and obtains a sequence $\mathbf{x}^0, \mathbf{x}^1, \mathbf{x}^2, \ldots$ iteratively.

By doing line search in each iteration, one automatically has

$$F(\mathbf{x}^0) \ge F(\mathbf{x}^1) \ge F(\mathbf{x}^2) \ge \cdots$$
It can be shown that this sequence has convergence properties similar to those of steepest descent. No improvement after one cycle of line search along the coordinate directions implies that a stationary point has been reached.
This process is illustrated by the sketch below.
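As a minimal illustration (not a reference implementation), the following Python sketch applies cyclic coordinate descent with an exact one-dimensional line search to a simple quadratic objective. The function name `cyclic_coordinate_descent`, the test objective, and the use of `scipy.optimize.minimize_scalar` for the inner line search are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_coordinate_descent(F, x0, n_cycles=100, tol=1e-8):
    """Minimize F by an exact line search along one coordinate at a time."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_cycles):
        x_old = x.copy()
        for i in range(x.size):
            # 1-D problem: minimize F along coordinate i, all others held fixed.
            def along_i(y, i=i):
                z = x.copy()
                z[i] = y
                return F(z)
            x[i] = minimize_scalar(along_i).x
        # Stop once a full cycle no longer changes the iterate noticeably.
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Example: a smooth quadratic objective with a cross term coupling the coordinates.
F = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2 + 0.5 * x[0] * x[1]
print(cyclic_coordinate_descent(F, x0=[0.0, 0.0]))
```

Each pass of the inner loop solves one single-variable problem exactly, so the objective value cannot increase from one iterate to the next, matching the monotonicity property above.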
Differentiable case
In the case of a continuously differentiable function $F$, a coordinate descent algorithm can be sketched as follows: starting from an initial parameter vector $\mathbf{x}$, repeat until convergence is reached or for some fixed number of iterations: choose an index $i$ from $1$ to $n$, choose a step size $\alpha$, and update $x_i \leftarrow x_i - \alpha \, \frac{\partial F}{\partial x_i}(\mathbf{x})$.
The step size $\alpha$ can be chosen in various ways, e.g., by solving for the exact minimizer of $f(x_i) = F(\mathbf{x})$ (i.e., $F$ with all variables but $x_i$ fixed), or by traditional line-search criteria.
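A minimal Python sketch of this differentiable variant, assuming the full gradient of $F$ is available as a callable and using a fixed step size for simplicity (the function and parameter names below are illustrative assumptions):

```python
import numpy as np

def coordinate_gradient_descent(grad_F, x0, alpha=0.1, n_iters=1000):
    """One coordinate per iteration: x_i <- x_i - alpha * dF/dx_i(x).

    grad_F(x) is assumed to return the full gradient vector of F at x;
    only its i-th component is used in each update.
    """
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    for k in range(n_iters):
        i = k % n                      # cycle through the coordinates
        x[i] -= alpha * grad_F(x)[i]   # partial-derivative step along coordinate i
    return x

# Example: F(x) = x_0^2 + 10*x_1^2, whose gradient is (2*x_0, 20*x_1).
grad_F = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(coordinate_gradient_descent(grad_F, x0=[3.0, -4.0], alpha=0.05))
```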
Limitations
Coordinate descent has two problems. One of them is dealing with a non-smooth multivariable function: a coordinate descent iteration may get stuck at a non-stationary point if the level curves of the function are not smooth. Suppose, for example, that the algorithm is at the point (-2, -2); there are then two axis-aligned directions it can consider for taking a step. If every step along these two directions increases the objective function's value (assuming a minimization problem), the algorithm will not take any step, even though taking both steps together would bring it closer to the optimum. A concrete instance of this behaviour is sketched below.
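The following Python snippet illustrates this behaviour on one such non-smooth function. The specific objective $f(x_1, x_2) = |x_1 + x_2| + 3|x_1 - x_2|$ is chosen here only as an illustrative example of non-smooth level curves, not as the unique case where this happens.

```python
import numpy as np

# A non-smooth objective whose minimum is at the origin.
f = lambda x1, x2: abs(x1 + x2) + 3.0 * abs(x1 - x2)

x = np.array([-2.0, -2.0])   # current iterate; f(-2, -2) = 4
print("f at current point:", f(*x))

# Try small steps along each axis-aligned direction.
for i in range(2):
    for t in (+0.1, -0.1):
        trial = x.copy()
        trial[i] += t
        print(f"step {t:+.1f} along coordinate {i}: f = {f(*trial):.2f}")

# Every single-coordinate step increases f, so coordinate descent stalls here,
# yet moving along the diagonal towards the origin decreases f:
print("diagonal step:", f(-1.9, -1.9))
```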
The other problem is difficulty in parallelization. Since coordinate descent by nature cycles through the directions and minimizes the objective function with respect to one coordinate direction at a time, it is not an obvious candidate for massive parallelism. Recent research has shown, however, that massive parallelism is applicable to coordinate descent by relaxing the requirement that each individual coordinate update improve the objective along its direction, for example by updating several coordinates at once from the same (possibly stale) iterate; a rough sketch of this idea follows.
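The sketch below is only a rough illustration of the idea, not any particular published method: a random subset of coordinates is updated simultaneously, with every partial derivative evaluated at the same stale iterate. The function names, the random-subset selection rule, and the least-squares test problem are illustrative assumptions.

```python
import numpy as np

def parallel_coordinate_step(grad_F, x, alpha, n_workers, rng):
    """Update a random subset of coordinates at once, all from the same stale x.

    In a real parallel implementation each selected coordinate would be
    handled by a separate worker; here the updates are simply applied together.
    """
    g = grad_F(x)                                    # gradient at the stale iterate
    idx = rng.choice(x.size, size=n_workers, replace=False)
    x_new = x.copy()
    x_new[idx] -= alpha * g[idx]                     # simultaneous coordinate updates
    return x_new

# Example: F(x) = 0.5 * ||A x - b||^2, whose gradient is A^T (A x - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad_F = lambda x: A.T @ (A @ x - b)

x = np.zeros(10)
for _ in range(500):
    x = parallel_coordinate_step(grad_F, x, alpha=0.01, n_workers=4, rng=rng)
print("residual norm:", np.linalg.norm(A @ x - b))
```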
Applications
Coordinate descent algorithms are popular with practitioners owing to their simplicity, but the same property has led optimization researchers to largely ignore them in favor of more interesting (complicated) methods. An early application of coordinate descent optimization was in the area of computed tomography, where it was found to converge rapidly and was subsequently used for clinical multi-slice helical scan CT reconstruction. Interest in coordinate descent has also increased with the advent of large-scale problems in machine learning, where it has been shown to be competitive with other methods when applied to problems such as training linear support vector machines (see LIBLINEAR) and non-negative matrix factorization. Coordinate descent methods are also attractive for problems where computing gradients is infeasible, perhaps because the data required to do so are distributed across computer networks.