Wolfe duality

In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. By the weak duality principle, it provides a lower bound on the optimal value of a minimization problem.

Mathematical formulation

For a minimization problem with inequality constraints,

$$
\begin{array}{ll}
\underset{x}{\text{minimize}} & f(x) \\
\text{subject to} & g_i(x) \leq 0, \quad i = 1, \ldots, m
\end{array}
$$

the Lagrangian dual problem is

$$
\begin{array}{ll}
\underset{u}{\text{maximize}} & \inf_{x} \left( f(x) + \sum_{j=1}^{m} u_j g_j(x) \right) \\
\text{subject to} & u_i \geq 0, \quad i = 1, \ldots, m
\end{array}
$$

where the objective function is the Lagrange dual function. Provided that the functions $f$ and $g_1, \ldots, g_m$ are convex and continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem
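
As a concrete illustration (an added example, not part of the original article), take $f(x) = x^2$ with the single constraint $g_1(x) = 1 - x \leq 0$. The Lagrangian is $x^2 + u_1(1 - x)$; setting its derivative in $x$ to zero gives $x = u_1/2$, so the Lagrange dual function is

$$
\inf_{x} \left( x^2 + u_1 (1 - x) \right) = u_1 - \frac{u_1^2}{4},
$$

which is maximized over $u_1 \geq 0$ at $u_1 = 2$ with value $1$, matching the primal optimum $x = 1$, $f(x) = 1$.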

$$
\begin{array}{ll}
\underset{x,\, u}{\text{maximize}} & f(x) + \displaystyle\sum_{j=1}^{m} u_j g_j(x) \\
\text{subject to} & \nabla f(x) + \displaystyle\sum_{j=1}^{m} u_j \nabla g_j(x) = 0, \\
& u_i \geq 0, \quad i = 1, \ldots, m
\end{array}
$$

is called the Wolfe dual problem. This problem employs the Karush–Kuhn–Tucker (KKT) stationarity condition as a constraint. It may be difficult to deal with computationally, because the objective function is not concave in the joint variables $(u, x)$. Also, the equality constraint $\nabla f(x) + \sum_{j=1}^{m} u_j \nabla g_j(x) = 0$ is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case, weak duality holds.
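
Below is a minimal numerical sketch of the Wolfe dual for a small convex problem: minimizing $f(x) = x_1^2 + x_2^2$ subject to $1 - x_1 - x_2 \leq 0$. The problem data and function names are illustrative assumptions, not from the original text; the dual is solved here with SciPy's general-purpose SLSQP method.

```python
import numpy as np
from scipy.optimize import minimize

# Primal problem: minimize f(x) = x1^2 + x2^2
#                 subject to g(x) = 1 - x1 - x2 <= 0.
# Wolfe dual:     maximize f(x) + u * g(x) over (x, u)
#                 subject to grad f(x) + u * grad g(x) = 0 and u >= 0.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def grad_f(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def g(x):
    return 1.0 - x[0] - x[1]

def grad_g(x):
    return np.array([-1.0, -1.0])

def neg_wolfe_objective(z):
    # z packs (x1, x2, u); negate because SciPy minimizes.
    x, u = z[:2], z[2]
    return -(f(x) + u * g(x))

def stationarity(z):
    # KKT stationarity condition, enforced as an equality constraint.
    x, u = z[:2], z[2]
    return grad_f(x) + u * grad_g(x)

res = minimize(
    neg_wolfe_objective,
    x0=np.array([0.0, 0.0, 1.0]),
    bounds=[(None, None), (None, None), (0.0, None)],  # u >= 0
    constraints=[{"type": "eq", "fun": stationarity}],
)
print("Wolfe dual value:", -res.fun)  # ~0.5, a lower bound on the primal optimum
```

Because this example is convex, the dual value $1/2$ coincides with the primal optimum at $x_1 = x_2 = 1/2$. For a nonconvex problem, weak duality still guarantees a lower bound, but a local solver applied to the (generally nonconvex) Wolfe dual need not find its global maximum.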
