Cauchy–Schwarz inequality

In mathematics, the Cauchy–Schwarz inequality, also known as the Cauchy–Bunyakovsky–Schwarz inequality, is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability theory, vector algebra and other areas. It is considered to be one of the most important inequalities in all of mathematics. It has a number of generalizations, among them Hölder's inequality.

The inequality for sums was published by Augustin-Louis Cauchy (1821), while the corresponding inequality for integrals was first proved by Viktor Bunyakovsky (1859). The modern proof of the integral inequality was given by Hermann Amandus Schwarz (1888).

Statement of the inequality

The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space it is true that

|⟨u, v⟩|² ≤ ⟨u, u⟩ · ⟨v, v⟩

where ⟨⋅,⋅⟩ is the inner product. Examples of inner products include the real and complex dot product; see the examples in inner product. Equivalently, by taking the square root of both sides and referring to the norms of the vectors, the inequality is written as

|⟨u, v⟩| ≤ ‖u‖ ‖v‖.

Moreover, the two sides are equal if and only if u and v are linearly dependent (meaning they are parallel, one of them has magnitude zero, or one is a scalar multiple of the other).

If u₁, …, uₙ ∈ ℂ and v₁, …, vₙ ∈ ℂ have imaginary components, the inner product in question is the standard complex inner product, where the bar notation is used for complex conjugation, and the inequality may be restated more explicitly as

|u₁v̄₁ + ⋯ + uₙv̄ₙ|² ≤ (|u₁|² + ⋯ + |uₙ|²)(|v₁|² + ⋯ + |vₙ|²)

or

|∑ᵢ₌₁ⁿ uᵢv̄ᵢ|² ≤ ∑ⱼ₌₁ⁿ |uⱼ|² ∑ₖ₌₁ⁿ |vₖ|².
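
As a quick numerical sanity check of the explicit complex form above (a sketch assuming NumPy is available; it is not part of the original statement), one can draw random complex vectors and compare the two sides:

import numpy as np

rng = np.random.default_rng(0)
n = 5
# Random complex vectors u, v in C^n
u = rng.normal(size=n) + 1j * rng.normal(size=n)
v = rng.normal(size=n) + 1j * rng.normal(size=n)

# Left-hand side: |u_1 conj(v_1) + ... + u_n conj(v_n)|^2
lhs = abs(np.sum(u * np.conj(v))) ** 2
# Right-hand side: (|u_1|^2 + ... + |u_n|^2)(|v_1|^2 + ... + |v_n|^2)
rhs = np.sum(np.abs(u) ** 2) * np.sum(np.abs(v) ** 2)

print(lhs <= rhs)  # True for every choice of u and v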

First proof

Let u and v be arbitrary vectors in a vector space over F with an inner product, where F is the field of real or complex numbers. We prove the inequality

|⟨u, v⟩| ≤ ‖u‖ ‖v‖

and that equality holds only when either u or v is a multiple of the other (which includes the special case that either is the zero vector).

If v = 0, it is clear that we have equality, and in this case u and v are also linearly dependent, regardless of u. Similarly if u = 0. We henceforth assume that v is nonzero. We also assume that ⟨u, v⟩ ≠ 0, since otherwise the inequality is obviously true, because neither ‖u‖ nor ‖v‖ can be negative.

Let

z = u − (⟨u, v⟩ / ⟨v, v⟩) v.

Then, by linearity of the inner product in its first argument, one has

⟨z, v⟩ = ⟨u − (⟨u, v⟩ / ⟨v, v⟩) v, v⟩ = ⟨u, v⟩ − (⟨u, v⟩ / ⟨v, v⟩) ⟨v, v⟩ = 0.

Therefore, z is a vector orthogonal to the vector v (indeed, z is the projection of u onto the plane orthogonal to v). We can thus apply the Pythagorean theorem to

u = (⟨u, v⟩ / ⟨v, v⟩) v + z

which gives

‖u‖² = |⟨u, v⟩ / ⟨v, v⟩|² ‖v‖² + ‖z‖² = |⟨u, v⟩|² / ‖v‖² + ‖z‖² ≥ |⟨u, v⟩|² / ‖v‖²

and, after multiplication by ‖v‖², the Cauchy–Schwarz inequality. Moreover, if the relation in the above expression is actually an equality, then ‖z‖² = 0 and hence z = 0; the definition of z then establishes a relation of linear dependence between u and v. This establishes the theorem.
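
The steps of this proof can be mirrored numerically (an illustrative sketch assuming NumPy; the helper name inner is made up here): compute z, confirm that ⟨z, v⟩ = 0, and check the norm identity used above.

import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

def inner(a, b):
    # inner product linear in the first argument, conjugate-linear in the second
    return np.sum(a * np.conj(b))

# z = u - (<u, v>/<v, v>) v, the component of u orthogonal to v
z = u - (inner(u, v) / inner(v, v)) * v

print(abs(inner(z, v)))  # ~0: z is orthogonal to v
lhs = inner(u, u).real                                              # ||u||^2
rhs = abs(inner(u, v)) ** 2 / inner(v, v).real + inner(z, z).real   # |<u,v>|^2/||v||^2 + ||z||^2
print(np.isclose(lhs, rhs))  # True: the Pythagorean identity above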

Second proof

Let u and v be arbitrary vectors in a vector space over F with an inner product, where F is the field of real or complex numbers.

In the special case that ⟨u, v⟩ = 0, or if either u = 0 or v = 0, then the theorem is trivially true.

Now assume that the special case above does not hold: u ≠ 0 and v ≠ 0. Let λ ∈ ℂ be the complex conjugate of ⟨v, u⟩ divided by ‖v‖², that is, λ = ⟨u, v⟩ / ‖v‖². Then

0 ≤ ‖u − λv‖² = ‖u‖² − λ⟨v, u⟩ − λ̄⟨u, v⟩ + λλ̄‖v‖² = ‖u‖² − |⟨v, u⟩|²/‖v‖² − |⟨v, u⟩|²/‖v‖² + |⟨v, u⟩|²/‖v‖² = ‖u‖² − |⟨v, u⟩|²/‖v‖².

Therefore, 0 ≤ ‖u‖² − |⟨v, u⟩|²/‖v‖², or |⟨v, u⟩| ≤ ‖u‖ ‖v‖.
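
Purely as an illustration of the computation above (a NumPy sketch; the helper name inner is again made up), one can verify that ‖u − λv‖² equals ‖u‖² − |⟨v, u⟩|²/‖v‖² for the stated λ and is nonnegative:

import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

inner = lambda a, b: np.sum(a * np.conj(b))  # linear in the first argument

lam = np.conj(inner(v, u)) / inner(v, v).real       # lambda = conj(<v, u>) / ||v||^2
lhs = inner(u - lam * v, u - lam * v).real          # ||u - lambda v||^2, always >= 0
rhs = inner(u, u).real - abs(inner(v, u)) ** 2 / inner(v, v).real
print(np.isclose(lhs, rhs), lhs >= 0)  # True True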

More proofs

There are indeed many different proofs of the Cauchy–Schwarz inequality other than the above two examples. When consulting other sources, there are often two sources of confusion. First, some authors define ⟨⋅,⋅⟩ to be linear in the second argument rather than the first. Second, some proofs are only valid when the field is ℝ and not ℂ.

ℝ² (ordinary two-dimensional space)

In the usual 2-dimensional space with the dot product, let v = (v₁, v₂) and u = (u₁, u₂). The Cauchy–Schwarz inequality is that

⟨u, v⟩² = (‖u‖ ‖v‖ cos θ)² ≤ ‖u‖² ‖v‖²

where θ is the angle between u and v .

The form above is perhaps the easiest in which to understand the inequality, since the square of the cosine can be at most 1, which occurs when the vectors are in the same or opposite directions. It can also be restated in terms of the vector coordinates v₁, v₂, u₁ and u₂ as

(u₁v₁ + u₂v₂)² ≤ (u₁² + u₂²)(v₁² + v₂²)

where equality holds if and only if the vector (u₁, u₂) is in the same or opposite direction as the vector (v₁, v₂), or if one of them is the zero vector.

ℝⁿ (n-dimensional Euclidean space)

In Euclidean space ℝⁿ with the standard inner product, the Cauchy–Schwarz inequality is

(∑ᵢ₌₁ⁿ uᵢvᵢ)² ≤ (∑ᵢ₌₁ⁿ uᵢ²)(∑ᵢ₌₁ⁿ vᵢ²)

The Cauchy–Schwarz inequality can be proved using only ideas from elementary algebra in this case. Consider the following quadratic polynomial in x

0 ≤ (u₁x + v₁)² + ⋯ + (uₙx + vₙ)² = (∑ uᵢ²) x² + 2 (∑ uᵢvᵢ) x + ∑ vᵢ².

Since it is nonnegative, it has at most one real root for x , hence its discriminant is less than or equal to zero. That is,

(∑ uᵢvᵢ)² − (∑ uᵢ²)(∑ vᵢ²) ≤ 0,

which yields the Cauchy–Schwarz inequality.
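
The discriminant argument translates directly into a short check (a sketch assuming NumPy; the variable name discriminant_quarter is only illustrative):

import numpy as np

rng = np.random.default_rng(3)
u = rng.normal(size=10)
v = rng.normal(size=10)

# The quadratic (sum u_i^2) x^2 + 2 (sum u_i v_i) x + sum v_i^2 is nonnegative for all x,
# so its discriminant divided by 4 must be <= 0 (up to rounding):
discriminant_quarter = np.dot(u, v) ** 2 - np.dot(u, u) * np.dot(v, v)
print(discriminant_quarter <= 0)  # True: Cauchy-Schwarz in R^n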

L²

For the inner product space of square-integrable complex-valued functions, one has

|∫ f(x) g̅(x) dx|² ≤ ∫ |f(x)|² dx · ∫ |g(x)|² dx, where all integrals are taken over ℝⁿ.

A generalization of this is the Hölder inequality.
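
As a discretized illustration (an assumption-laden sketch: the integrals are replaced by Riemann sums on a uniform grid over [0, 1], and the example functions are arbitrary), the same inequality holds for the resulting finite sums:

import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

f = np.exp(2j * np.pi * x)   # example square-integrable functions on [0, 1]
g = x ** 2 + 0j

# Riemann-sum approximations of the three integrals
lhs = abs(np.sum(f * np.conj(g)) * dx) ** 2
rhs = (np.sum(np.abs(f) ** 2) * dx) * (np.sum(np.abs(g) ** 2) * dx)
print(lhs <= rhs)  # True: the discrete sums satisfy Cauchy-Schwarz exactly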

Analysis

The triangle inequality for the standard norm is often shown as a consequence of the Cauchy–Schwarz inequality, as follows: given vectors x and y:

‖x + y‖² = ⟨x + y, x + y⟩ = ‖x‖² + ⟨x, y⟩ + ⟨y, x⟩ + ‖y‖² = ‖x‖² + 2 Re⟨x, y⟩ + ‖y‖² ≤ ‖x‖² + 2 |⟨x, y⟩| + ‖y‖² ≤ ‖x‖² + 2 ‖x‖ ‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)²

Taking square roots gives the triangle inequality.
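
A one-line numerical confirmation of the resulting triangle inequality (illustrative only, assuming NumPy):

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=6)
y = rng.normal(size=6)

# ||x + y|| <= ||x|| + ||y||, as derived above
print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))  # True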

The Cauchy–Schwarz inequality is used to prove that the inner product is a continuous function with respect to the topology induced by the inner product itself.

Geometry

The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner product space, by defining:

cos θ_xy = ⟨x, y⟩ / (‖x‖ ‖y‖).

The Cauchy–Schwarz inequality proves that this definition is sensible, by showing that the right-hand side lies in the interval [−1, 1], and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space. It can also be used to define an angle in complex inner product spaces, by taking the absolute value or the real part of the right-hand side, as is done when extracting a metric from quantum fidelity.
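
A minimal sketch of this use of the inequality (assuming NumPy; the function name angle_between is made up for illustration). The np.clip call only guards against floating-point round-off, since Cauchy–Schwarz already guarantees that the cosine lies in [−1, 1]:

import numpy as np

def angle_between(x, y):
    # cos(theta) = <x, y> / (||x|| ||y||), which Cauchy-Schwarz bounds by [-1, 1]
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

print(np.degrees(angle_between(np.array([1.0, 0.0]), np.array([1.0, 1.0]))))  # 45.0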

Probability theory

Let X and Y be random variables; then the covariance inequality is given by:

Var(Y) ≥ Cov(Y, X)² / Var(X).

After defining an inner product on the set of random variables using the expectation of their product,

⟨X, Y⟩ := E(XY),

then the Cauchy–Schwarz inequality becomes

|E(XY)|² ≤ E(X²) E(Y²).

To prove the covariance inequality using the Cauchy–Schwarz inequality, let μ = E(X) and ν = E(Y); then

|Cov(X, Y)|² = |E((X − μ)(Y − ν))|² = |⟨X − μ, Y − ν⟩|² ≤ ⟨X − μ, X − μ⟩ ⟨Y − ν, Y − ν⟩ = E((X − μ)²) E((Y − ν)²) = Var(X) Var(Y),

where Var denotes variance and Cov denotes covariance.
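
A quick simulation of the covariance inequality on sampled data (illustrative only, assuming NumPy; the sample covariance and variances use the population normalization so that the inequality holds exactly for the sample):

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=10_000)
Y = 0.3 * X + rng.normal(size=10_000)  # Y is correlated with X

cov_xy = np.cov(X, Y, bias=True)[0, 1]   # population-normalized sample covariance
var_x, var_y = X.var(), Y.var()          # population-normalized sample variances
print(cov_xy ** 2 <= var_x * var_y)      # True; equivalently |corr(X, Y)| <= 1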

Generalizations

Various generalizations of the Cauchy–Schwarz inequality exist in the context of operator theory, e.g. for operator-convex functions, and operator algebras, where the domain and/or range are replaced by a C*-algebra or W*-algebra.

An inner product can be used to define a positive linear functional. For example, given a Hilbert space L²(m), m being a finite measure, the standard inner product gives rise to a positive functional φ by φ(g) = ⟨g, 1⟩. Conversely, every positive linear functional φ on L²(m) can be used to define an inner product ⟨f, g⟩_φ := φ(g*f), where g* is the pointwise complex conjugate of g. In this language, the Cauchy–Schwarz inequality becomes

|φ(g*f)|² ≤ φ(f*f) φ(g*g),

which extends verbatim to positive functionals on C*-algebras:

Theorem (Cauchy–Schwarz inequality for positive functionals on C*-algebras). If φ is a positive linear functional on a C*-algebra A, then for all a, b ∈ A, |φ(b*a)|² ≤ φ(b*b) φ(a*a).
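
A concrete finite-dimensional instance (a sketch, not from the article): take A to be the algebra of n×n complex matrices and φ(a) = tr(ρa) for a fixed positive semidefinite ρ, which is a positive linear functional; the inequality can then be checked numerically.

import numpy as np

rng = np.random.default_rng(6)
n = 3

# phi(a) = tr(rho @ a) with rho positive semidefinite is a positive linear functional
# on the C*-algebra of n x n complex matrices.
r = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = r.conj().T @ r
phi = lambda a: np.trace(rho @ a)

a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
b = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

lhs = abs(phi(b.conj().T @ a)) ** 2                          # |phi(b* a)|^2
rhs = phi(b.conj().T @ b).real * phi(a.conj().T @ a).real    # phi(b* b) phi(a* a)
print(lhs <= rhs)  # True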

The next two theorems are further examples in operator algebra.

Theorem (Kadison–Schwarz inequality, named after Richard Kadison). If φ is a unital positive map, then for every normal element a in its domain, we have φ(a*a) ≥ φ(a*) φ(a) and φ(a*a) ≥ φ(a) φ(a*).

This extends the fact φ(a*a) · 1 ≥ φ(a)* φ(a) = |φ(a)|² when φ is a linear functional. The case when a is self-adjoint, i.e. a = a*, is sometimes known as Kadison's inequality.

Theorem (Modified Schwarz inequality for 2-positive maps). For a 2-positive map ϕ between C*-algebras, for all a , b in its domain,

φ(a)* φ(a) ≤ ‖φ(1)‖ φ(a*a),  and  ‖φ(a*b)‖² ≤ ‖φ(a*a)‖ · ‖φ(b*b)‖.
