Descent direction

In optimization, a descent direction is a vector $p \in \mathbb{R}^n$ that, in the sense below, moves us closer towards a local minimum $x^*$ of our objective function $f : \mathbb{R}^n \to \mathbb{R}$.

Suppose we are computing $x^*$ by an iterative method, such as line search. We define a descent direction $p_k \in \mathbb{R}^n$ at the $k$th iterate to be any $p_k$ such that $\langle p_k, \nabla f(x_k) \rangle < 0$, where $\langle \cdot , \cdot \rangle$ denotes the inner product. The motivation for such an approach is that small steps along $p_k$ guarantee that $f$ is reduced: by Taylor's theorem, $f(x_k + \alpha p_k) = f(x_k) + \alpha \langle \nabla f(x_k), p_k \rangle + o(\alpha)$, so the first-order term is negative for sufficiently small $\alpha > 0$.
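The descent condition is straightforward to check numerically. A minimal sketch in Python follows, where the quadratic objective $f(x) = x_1^2 + 2 x_2^2$ and the candidate directions are illustrative assumptions, not part of the original text:

    import numpy as np

    def is_descent_direction(p, grad):
        # p is a descent direction at x_k iff <p, grad f(x_k)> < 0
        return np.dot(p, grad) < 0.0

    # Illustrative objective f(x) = x_1^2 + 2 x_2^2 with gradient (2 x_1, 4 x_2).
    def grad_f(x):
        return np.array([2.0 * x[0], 4.0 * x[1]])

    x_k = np.array([1.0, 1.0])
    g = grad_f(x_k)

    print(is_descent_direction(-g, g))                     # True: the negative gradient
    print(is_descent_direction(g, g))                      # False: f increases along the gradient
    print(is_descent_direction(np.array([-1.0, 0.0]), g))  # True: <p, g> = -2 < 0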

Using this definition, the negative of a non-zero gradient is always a descent direction, as $\langle -\nabla f(x_k), \nabla f(x_k) \rangle = -\langle \nabla f(x_k), \nabla f(x_k) \rangle = -\|\nabla f(x_k)\|^2 < 0$.

Numerous methods exist for computing descent directions, each with differing merits. For example, one could use gradient descent or the conjugate gradient method.
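To show how a descent direction drives such an iterative method, here is a minimal gradient descent sketch; the fixed step size, tolerance, and test objective are illustrative assumptions:

    import numpy as np

    def gradient_descent(grad_f, x0, step=0.1, tol=1e-8, max_iter=10_000):
        # Iterate x_{k+1} = x_k + step * p_k with p_k = -grad f(x_k).
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:  # (near-)zero gradient: no descent direction exists
                break
            x = x - step * g             # small step along the descent direction -g
        return x

    # Minimize f(x) = x_1^2 + 2 x_2^2; the unique minimizer is the origin.
    x_star = gradient_descent(lambda x: np.array([2.0 * x[0], 4.0 * x[1]]), [1.0, 1.0])
    print(x_star)  # approximately [0, 0]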

More generally, if $P$ is a positive definite matrix, then $d = -P \nabla f(x)$ is a descent direction at $x$, since $\langle d, \nabla f(x) \rangle = -\nabla f(x)^\top P \nabla f(x) < 0$ whenever $\nabla f(x) \neq 0$. This generality is used in preconditioned gradient descent methods.
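A sketch of this construction, assuming a small diagonal preconditioner chosen purely for illustration:

    import numpy as np

    def preconditioned_direction(P, grad):
        # d = -P grad is a descent direction whenever P is positive definite.
        return -P @ grad

    P = np.diag([0.5, 0.25])   # illustrative positive definite (diagonal) preconditioner
    g = np.array([2.0, 4.0])   # gradient at the current iterate (assumed values)

    d = preconditioned_direction(P, g)
    print(np.dot(d, g))        # -6.0, negative: -g^T P g < 0, so d is a descent direction

Choosing $P$ as the inverse of the Hessian, when that inverse is positive definite, recovers the Newton direction as a special case of this construction.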
