Lax equivalence theorem

In numerical analysis, the Lax equivalence theorem is the fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations. It states that for a consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable.
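
In symbols, a common textbook formulation of this setting reads as follows (the notation here is illustrative and is not taken from this article): the well-posed linear initial value problem is advanced by a one-step finite difference operator, consistency means the local truncation error vanishes, and stability in the Lax–Richtmyer sense means the powers of the iteration operator stay uniformly bounded.

```latex
% Setting (illustrative notation): linear initial value problem and one-step scheme
\[
  \frac{\partial u}{\partial t} = A u, \qquad u(0) = u_0,
  \qquad\text{approximated by}\qquad
  u^{n+1} = C(\Delta t)\, u^{n}.
\]
% Consistency: for sufficiently smooth solutions u, the local truncation error vanishes,
\[
  \left\| \frac{u(t + \Delta t) - C(\Delta t)\, u(t)}{\Delta t} \right\| \to 0
  \qquad \text{as } \Delta t \to 0 .
\]
% Lax--Richtmyer stability: the powers of the iteration operator are uniformly bounded,
\[
  \| C(\Delta t)^{n} \| \le K
  \qquad \text{for all } n,\ \Delta t \text{ with } 0 \le n\,\Delta t \le T .
\]
% The theorem: for a consistent scheme, stability holds if and only if the scheme converges,
\[
  \| u^{n} - u(t_n) \| \to 0
  \qquad \text{as } \Delta t \to 0 \text{ with } n\,\Delta t \to t_n .
\]
```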

The importance of the theorem is that while convergence of the solution of the finite difference method to the solution of the partial differential equation is what is desired, it is ordinarily difficult to establish, because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. Consistency, the requirement that the finite difference method approximates the correct partial differential equation, is straightforward to verify. Stability is typically much easier to show than convergence, and would be needed in any event to show that round-off error does not destroy the computation. Hence convergence is usually shown via the Lax equivalence theorem.
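
As a concrete illustration of how consistency is checked (the heat-equation example below is a standard textbook case and is not drawn from this article), take the forward-time centered-space (FTCS) scheme for u_t = u_xx. Substituting a smooth exact solution and Taylor-expanding shows the truncation error vanishing with the step sizes:

```latex
% FTCS scheme for the heat equation u_t = u_xx (illustrative example):
\[
  \frac{u_j^{n+1} - u_j^{n}}{\Delta t}
  = \frac{u_{j+1}^{n} - 2 u_j^{n} + u_{j-1}^{n}}{\Delta x^{2}} .
\]
% Insert a smooth exact solution u(x,t) and Taylor-expand about (x_j, t_n):
\[
  \tau_j^{n}
  = \frac{u(x_j, t_n + \Delta t) - u(x_j, t_n)}{\Delta t}
  - \frac{u(x_{j+1}, t_n) - 2 u(x_j, t_n) + u(x_{j-1}, t_n)}{\Delta x^{2}}
  = \tfrac{\Delta t}{2}\, u_{tt} - \tfrac{\Delta x^{2}}{12}\, u_{xxxx} + \cdots
  = \mathcal{O}(\Delta t) + \mathcal{O}(\Delta x^{2}) ,
\]
% so the truncation error tends to zero as \Delta t, \Delta x \to 0 and the scheme is consistent.
```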

Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability; more generally, Lax–Richtmyer stability requires that the powers of the iteration operator remain uniformly bounded over the time interval of interest. Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases.
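
For the same FTCS heat-equation example (again an illustration, not part of this article), the von Neumann analysis gives the amplification factor g = 1 - 4 r sin^2(k Δx / 2) with mesh ratio r = Δt/Δx^2, so |g| ≤ 1 for every mode exactly when r ≤ 1/2. A minimal Python sketch checks this condition and also runs the scheme directly; the stable mesh ratio stays bounded while the unstable one blows up:

```python
import numpy as np

def ftcs_step(u, r):
    """One FTCS step for u_t = u_xx with homogeneous Dirichlet boundaries.
    r = dt / dx**2 is the mesh ratio."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def max_amplification(r, n_samples=200):
    """Largest |g| over Fourier modes, where g = 1 - 4 r sin^2(theta / 2), theta = k dx.
    Von Neumann stability requires |g| <= 1 for every mode."""
    theta = np.linspace(0.0, np.pi, n_samples)
    g = 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2
    return np.max(np.abs(g))

def run(r, n_steps=400, n_points=51):
    """Advance a smooth initial profile and return the final maximum norm."""
    x = np.linspace(0.0, 1.0, n_points)
    u = np.sin(np.pi * x)          # smooth initial data, zero at both ends
    for _ in range(n_steps):
        u = ftcs_step(u, r)
    return np.max(np.abs(u))

if __name__ == "__main__":
    for r in (0.4, 0.6):           # 0.4 satisfies r <= 1/2, 0.6 violates it
        print(f"r = {r}: max |g| = {max_amplification(r):.2f}, "
              f"max |u| after 400 steps = {run(r):.3e}")
    # Expected: the r = 0.4 run decays (stable), while for r = 0.6 the highest
    # grid mode is amplified by roughly 1.4 per step and the solution blows up.
```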

This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer.
