Active set method

In mathematical optimization, a problem is defined using an objective function to minimize or maximize, and a set of constraints

$$ g_1(x) \ge 0, \ \ldots, \ g_k(x) \ge 0 $$

that define the feasible region, that is, the set of all x among which an optimal solution is sought. Given a point x in the feasible region, a constraint

$$ g_i(x) \ge 0 $$

is called active at x if $g_i(x) = 0$ and inactive at x if $g_i(x) > 0$. Equality constraints are always active. The active set at x is made up of those constraints $g_i$ that are active at the current point (Nocedal & Wright 2006, p. 308).
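As a small illustration (not taken from the source), the active set at a given point can be computed numerically by evaluating each constraint function and keeping the indices whose values are approximately zero. The Python snippet below is a minimal sketch; the function name active_set, the tolerance, and the two example constraints are hypothetical.

def active_set(constraints, x, tol=1e-9):
    # Indices i with g_i(x) = 0 (within tol), for constraints g_i(x) >= 0
    # supplied as Python callables.
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Hypothetical feasible region 0 <= x <= 1 on the real line.
g = [lambda x: x,          # g_1(x) = x      >= 0
     lambda x: 1.0 - x]    # g_2(x) = 1 - x  >= 0
print(active_set(g, 0.0))  # [0]: only g_1 (index 0) is active at the boundary point x = 0
print(active_set(g, 0.5))  # []: no constraint is active at the interior point x = 0.5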

The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of the optimization. For example, in solving the linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimate of the active set gives us a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
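As a concrete (hypothetical) illustration of the linear programming case: in the problem of maximizing $x_1 + x_2$ subject to $x_1 \le 1$, $x_2 \le 1$, $x_1 \ge 0$ and $x_2 \ge 0$, the optimum is attained at $(1, 1)$, where exactly the constraints $x_1 \le 1$ and $x_2 \le 1$ hold with equality; these two constraints form the active set, and their hyperplanes $x_1 = 1$ and $x_2 = 1$ intersect at the solution point.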

Active set methods

In general an active set algorithm has the following structure:

Find a feasible starting point
repeat until "optimal enough"
    solve the equality problem defined by the active set (approximately)
    compute the Lagrange multipliers of the active set
    remove a subset of the constraints with negative Lagrange multipliers
    search for infeasible constraints
end repeat
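As a minimal sketch of this loop (not taken from the source), the following Python/NumPy code applies the structure above to a convex quadratic program, minimize 0.5 x^T G x + c^T x subject to A x >= b. The function names solve_kkt and active_set_qp, the tolerances, and the small test problem at the end are illustrative assumptions; a practical solver would guard against degeneracy and update matrix factorizations rather than re-solving each KKT system from scratch.

import numpy as np

def solve_kkt(G, A_w, g):
    # "Solve the equality problem defined by the active set":
    # min_p 0.5 p^T G p + g^T p  subject to  A_w p = 0.
    # Returns the step p and the multipliers lam satisfying G p + g = A_w^T lam.
    n, m = G.shape[0], A_w.shape[0]
    if m == 0:
        return np.linalg.solve(G, -g), np.empty(0)
    K = np.block([[G, A_w.T], [A_w, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], -sol[n:]

def active_set_qp(G, c, A, b, x, W, max_iter=100, tol=1e-9):
    # Primal active-set sketch for  min 0.5 x^T G x + c^T x  s.t.  A x >= b,
    # with G positive definite, x a feasible starting point and W the
    # initial working set (indices of constraints treated as equalities).
    W = list(W)
    for _ in range(max_iter):
        g = G @ x + c
        p, lam = solve_kkt(G, A[W], g)
        if np.linalg.norm(p) <= tol:
            # No move is possible on the current working set: inspect multipliers.
            if len(W) == 0 or lam.min() >= -tol:
                return x, W                      # all multipliers >= 0: optimal
            W.pop(int(np.argmin(lam)))           # drop a constraint with negative multiplier
        else:
            # "Search for infeasible constraints": the step length is limited by
            # the first constraint outside W that the full step would violate.
            alpha, blocking = 1.0, None
            for i in range(A.shape[0]):
                if i in W:
                    continue
                slope = A[i] @ p
                if slope < -tol:
                    step = (b[i] - A[i] @ x) / slope
                    if step < alpha:
                        alpha, blocking = step, i
            x = x + alpha * p
            if blocking is not None:
                W.append(blocking)               # blocking constraint becomes active
    return x, W

# Hypothetical test problem: minimize (x1 - 1)^2 + (x2 - 2.5)^2 over a
# pentagonal feasible region, starting from the feasible vertex (2, 0).
G = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[1.0, -2.0], [-1.0, -2.0], [-1.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -6.0, -2.0, 0.0, 0.0])
x_opt, W_opt = active_set_qp(G, c, A, b, x=np.array([2.0, 0.0]), W=[2, 4])
print(x_opt, W_opt)   # expected: approximately [1.4, 1.7] with working set [0]

In this sketch, "solve the equality problem defined by the active set" corresponds to the KKT solve in solve_kkt, and "search for infeasible constraints" corresponds to the step-length computation that detects the first blocking constraint outside the working set.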

Methods that can be described as active set methods include:

  • Successive linear programming (SLP)
  • Sequential quadratic programming (SQP)
  • Sequential linear-quadratic programming (SLQP)
  • Reduced gradient method (RG)
  • Generalized reduced gradient method (GRG)

References

  • Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). New York: Springer.