Linear matrix inequality

In convex optimization, a linear matrix inequality (LMI) is an expression of the form

$$\operatorname{LMI}(y) := A_0 + y_1 A_1 + y_2 A_2 + \cdots + y_m A_m \succeq 0$$

where

  • $y = [y_i,\ i = 1, \dots, m]$ is a real vector,
  • $A_0, A_1, A_2, \dots, A_m$ are $n \times n$ symmetric matrices in $\mathbb{S}^n$,
  • $B \succeq 0$ is a generalized inequality, meaning $B$ is a positive semidefinite matrix belonging to the positive semidefinite cone $\mathbb{S}_+$ in the subspace of symmetric matrices $\mathbb{S}$.
  • This linear matrix inequality specifies a convex constraint on y; a small numerical check is sketched after this list.

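As a concrete illustration of the definition, the following minimal sketch (assuming NumPy; the matrices A0, A1, A2, the test points y, and the tolerance are made up for the example) builds LMI(y) for a fixed y and checks positive semidefiniteness through the eigenvalues of the resulting symmetric matrix.

```python
import numpy as np

# Illustrative data with m = 2 and n = 3; any symmetric matrices would do.
A0 = np.diag([2.0, 2.0, 2.0])
A1 = np.array([[1.0, 0.0, 0.0],
               [0.0, -1.0, 0.0],
               [0.0, 0.0, 0.0]])
A2 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])

def lmi(y):
    """Evaluate LMI(y) = A0 + y1*A1 + y2*A2 as a symmetric matrix."""
    return A0 + y[0] * A1 + y[1] * A2

def is_feasible(y, tol=1e-9):
    """Check LMI(y) >= 0 (positive semidefinite) via its eigenvalues."""
    eigvals = np.linalg.eigvalsh(lmi(y))  # eigvalsh handles symmetric input
    return eigvals.min() >= -tol

print(is_feasible([0.5, 0.5]))   # True: all eigenvalues are nonnegative here
print(is_feasible([10.0, 0.0]))  # False: a negative eigenvalue appears
```

Because LMI(y) is affine in y and the positive semidefinite cone is convex, the set of y values accepted by such a check is convex, as stated above.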
Applications

There are efficient numerical methods to determine whether an LMI is feasible (i.e., whether there exists a vector y such that LMI(y) ⪰ 0), or to solve a convex optimization problem with LMI constraints. Many optimization problems in control theory, system identification and signal processing can be formulated using LMIs. LMIs also find application in polynomial sum-of-squares problems. The prototypical primal and dual semidefinite programs are minimizations of a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
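As a sketch of how such problems are posed in practice (assuming the CVXPY modeling library with an SDP-capable solver such as the bundled SCS; the matrices, the cost vector c, and the problem sizes are illustrative, not from the original text), the code below first checks feasibility of an LMI and then minimizes a real linear function subject to the same LMI constraint, i.e., solves a small semidefinite program.

```python
import cvxpy as cp
import numpy as np

# Illustrative problem data: minimize c^T y subject to LMI(y) >= 0, n = 3, m = 2.
A0 = np.diag([2.0, 2.0, 2.0])
A1 = np.array([[1.0, 0.0, 0.0],
               [0.0, -1.0, 0.0],
               [0.0, 0.0, 0.0]])
A2 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
c = np.array([1.0, 1.0])

y = cp.Variable(2)
lmi_expr = A0 + y[0] * A1 + y[1] * A2

# ">> 0" marks the matrix expression as a positive semidefinite constraint.
constraints = [lmi_expr >> 0]

# Feasibility check: minimizing 0 just asks the solver for any feasible y.
feas = cp.Problem(cp.Minimize(0), constraints)
feas.solve()
print(feas.status)  # "optimal" means a feasible y exists

# Semidefinite program: minimize a real linear function over the same LMI.
sdp = cp.Problem(cp.Minimize(c @ y), constraints)
sdp.solve()
print(sdp.value, y.value)
```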

Solving LMIs

A major breakthrough in convex optimization was the introduction of interior-point methods. These methods were developed in a series of papers and became of real interest in the context of LMI problems in the work of Yurii Nesterov and Arkadii Nemirovskii.
