Duality gap

In optimization problems in applied mathematics, the duality gap is the difference between the primal and dual solutions. If $d$ is the optimal dual value and $p$ is the optimal primal value, then the duality gap is equal to $p - d$. This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only if strong duality holds; otherwise the gap is strictly positive and only weak duality holds.
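
For a simple illustration (a standard linear-programming pair, given here only as an example), consider the primal problem

$$\min_{x_1, x_2 \ge 0} \; 2x_1 + 3x_2 \quad \text{subject to} \quad x_1 + x_2 \ge 4,$$

whose optimal value is $p = 8$ (attained at $x = (4, 0)$). Its linear-programming dual is

$$\max_{y \ge 0} \; 4y \quad \text{subject to} \quad y \le 2, \; y \le 3,$$

with optimal value $d = 8$ (attained at $y = 2$), so the duality gap $p - d$ is zero. Feasible, bounded linear programs always satisfy strong duality, whereas non-convex problems can exhibit a strictly positive gap.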

In general, given two dual pairs of separated locally convex spaces $(X, X^*)$ and $(Y, Y^*)$ and a function $f: X \to \mathbb{R} \cup \{+\infty\}$, we can define the primal problem by

$$\inf_{x \in X} f(x).$$

If there are constraint conditions, these can be built into the function $f$ by letting $\tilde{f} = f + I_{\mathrm{constraints}}$, where $I$ is the indicator function. Then let $F: X \times Y \to \mathbb{R} \cup \{+\infty\}$ be a perturbation function such that $F(x, 0) = f(x)$. The duality gap is the difference given by

$$\inf_{x \in X} F(x, 0) \;-\; \sup_{y^* \in Y^*} \left[ -F^*(0, y^*) \right],$$

where $F^*$ is the convex conjugate in both variables.
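
One standard choice of perturbation function (stated here only to illustrate the framework) recovers ordinary Lagrangian duality. For a problem $\inf_x f(x)$ subject to $g(x) \le 0$, take

$$F(x, y) = \begin{cases} f(x) & \text{if } g(x) \le y, \\ +\infty & \text{otherwise,} \end{cases}$$

so that $F(x, 0)$ is the constrained objective. A direct computation of the conjugate gives, for $y^* \le 0$,

$$-F^*(0, y^*) = \inf_{x} \left[ f(x) + \langle \lambda, g(x) \rangle \right] \quad \text{with } \lambda = -y^* \ge 0,$$

and $-F^*(0, y^*) = -\infty$ whenever some component of $y^*$ is positive. The supremum appearing in the duality gap above is therefore the optimal value of the usual Lagrangian dual problem $\sup_{\lambda \ge 0} \inf_{x} \left[ f(x) + \langle \lambda, g(x) \rangle \right]$.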

In computational optimization, another "duality gap" is often reported: the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal primal iterate and the value of the dual problem; under regularity conditions, the value of the dual problem equals the value of the convex relaxation of the primal problem. The convex relaxation is the problem obtained by replacing a non-convex feasible set with its closed convex hull and by replacing a non-convex objective function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original objective function.
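
As a minimal sketch of how this gap is used in practice (assuming a linear program of the form $\min c^\top x$ subject to $Ax \ge b$, $x \ge 0$; the function name and data below are illustrative and not taken from any particular solver), weak duality makes $c^\top x_k - b^\top y$ an upper bound on the suboptimality of a feasible iterate $x_k$ whenever $y$ is dual feasible:

import numpy as np

def duality_gap(c, A, b, x, y, tol=1e-9):
    # Gap c@x - b@y for the LP  min c@x  s.t.  A@x >= b, x >= 0,
    # whose dual is  max b@y  s.t.  A.T@y <= c, y >= 0.
    # By weak duality the gap bounds how far the iterate x is from optimal.
    assert np.all(A @ x >= b - tol) and np.all(x >= -tol), "x not primal feasible"
    assert np.all(A.T @ y <= c + tol) and np.all(y >= -tol), "y not dual feasible"
    return c @ x - b @ y

# Hypothetical data: min 2*x1 + 3*x2  s.t.  x1 + x2 >= 4,  x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
x_k = np.array([3.0, 1.0])   # feasible but suboptimal iterate, objective value 9
y_k = np.array([2.0])        # dual-feasible point, dual objective value 8
print(duality_gap(c, A, b, x_k, y_k))  # 1.0: x_k is at most 1.0 above the optimum

When this reported gap falls below a chosen tolerance, the current iterate is certified to be within that tolerance of the optimal value, even though the optimum itself is unknown, which is why the gap is commonly used as a stopping criterion.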
