Computational complexity of mathematical operations

The following tables list the running time of various algorithms for common mathematical operations.

Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used.

Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm.
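
To make M(n) concrete, here is a minimal sketch (in Python; the function name is our choice) of one candidate multiplication algorithm: Karatsuba's method, which achieves M(n) = O(n^log2 3) ≈ O(n^1.585), compared with O(n^2) for schoolbook multiplication.

    def karatsuba(x: int, y: int) -> int:
        """Multiply non-negative integers by Karatsuba's algorithm.

        Three recursive half-size products instead of four give
        M(n) = O(n^1.585); schoolbook multiplication is O(n^2).
        """
        if x < 10 or y < 10:                  # base case: a single digit
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        p = 10 ** m
        x_hi, x_lo = divmod(x, p)             # x = x_hi * 10^m + x_lo
        y_hi, y_lo = divmod(y, p)
        a = karatsuba(x_hi, y_hi)             # high product
        b = karatsuba(x_lo, y_lo)             # low product
        c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms
        return a * p * p + c * p + b

Asymptotically faster choices for M(n), such as FFT-based methods in the Schönhage–Strassen family, drop into the same role.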

Special functions

Many of the methods in this section are given in Borwein & Borwein.

Elementary functions

The elementary functions are constructed by composing arithmetic operations, the exponential function (exp), the natural logarithm (log), trigonometric functions (sin, cos), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either exp or log in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions.
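
A minimal sketch of this Newton inversion, computing log by inverting exp. It works in ordinary double precision for brevity, whereas the complexities here concern arbitrary-precision arithmetic, in which each Newton step evaluates exp at roughly the target precision; the function name and seeding choice are ours, not from the article.

    import math

    def log_via_newton(a: float, tol: float = 1e-15) -> float:
        """Solve exp(x) = a for x with Newton's method.

        Newton's update for f(x) = exp(x) - a is
        x <- x - (exp(x) - a)/exp(x) = x - 1 + a*exp(-x),
        so inverting exp costs O(1) evaluations of exp per step.
        """
        if a <= 0:
            raise ValueError("log is defined for positive a only")
        mantissa, exponent = math.frexp(a)   # a = mantissa * 2**exponent
        x = exponent * 0.6931471805599453    # seed e*ln2, within ln2 of log(a)
        for _ in range(60):
            step = 1.0 - a * math.exp(-x)    # f(x) / f'(x)
            x -= step
            if abs(step) < tol:              # quadratic convergence
                return x
        return x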

Below, the size n refers to the number of digits of precision at which the function is to be evaluated.

It is not known whether O(M(n) log n) is the optimal complexity for elementary functions. The best known lower bound is the trivial bound Ω(M(n)).

Mathematical constants

This table gives the complexity of computing approximations to the given constants to n correct digits.
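
For example, one classic entry for π is the Gauss–Legendre (Brent–Salamin) iteration, which converges quadratically, roughly doubling the number of correct digits per pass, for a total cost of O(M(n) log n). A sketch using Python's decimal module; the function name and guard-digit count are our choices:

    from decimal import Decimal, getcontext

    def pi_gauss_legendre(digits: int) -> Decimal:
        """Approximate pi to about `digits` digits by the
        Gauss-Legendre AGM iteration (quadratic convergence)."""
        getcontext().prec = digits + 10              # guard digits
        a, b = Decimal(1), Decimal(2).sqrt() / 2     # a0 = 1, b0 = 1/sqrt(2)
        t, p = Decimal(1) / 4, Decimal(1)            # t0 = 1/4, p0 = 1
        for _ in range(digits.bit_length() + 2):     # ~log2(n) passes suffice
            a_next = (a + b) / 2
            b = (a * b).sqrt()
            t -= p * (a - a_next) ** 2
            a, p = a_next, 2 * p
        return (a + b) ** 2 / (4 * t)

    print(pi_gauss_legendre(50))   # 3.14159265358979323846...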

Number theory

Algorithms for number theoretical calculations are studied in computational number theory.
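
As a representative entry, the greatest common divisor of two n-digit integers can be computed in O(n^2) time with the Euclidean algorithm; asymptotically faster O(M(n) log n) half-GCD variants exist but are considerably more involved. A minimal sketch:

    def gcd_euclid(a: int, b: int) -> int:
        """GCD by the Euclidean algorithm: O(n^2) for n-digit inputs."""
        while b:
            a, b = b, a % b    # replace (a, b) by (b, a mod b)
        return abs(a)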

Matrix algebra

The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field.
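
Under that cost model the baseline is schoolbook matrix multiplication, which uses O(n^3) element operations. A short sketch (the i-k-j loop order is a locality detail, not part of the complexity statement):

    def mat_mul(A, B):
        """Schoolbook product of an n x m and an m x p matrix:
        O(n*m*p) element operations, O(n^3) for square matrices."""
        n, m, p = len(A), len(B), len(B[0])
        C = [[0.0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                a_ik = A[i][k]
                for j in range(p):
                    C[i][j] += a_ik * B[k][j]
        return C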

In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2.

^* A matrix can be inverted blockwise: inverting an n×n matrix requires two inversions of half-sized matrices and six multiplications between half-sized matrices. Since matrix multiplication has a lower bound of Ω(n² log n) operations, a divide-and-conquer algorithm that uses blockwise inversion to invert a matrix runs with the same time complexity as the matrix multiplication algorithm used internally.
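
A sketch of this blockwise inversion, using NumPy so that the only non-trivial operations are the two recursive half-sized inversions and six half-sized products. It assumes every leading principal block is invertible, as holds for example when M is symmetric positive-definite:

    import numpy as np

    def block_inverse(M: np.ndarray) -> np.ndarray:
        """Invert M via the Schur complement: two half-sized
        inversions plus six half-sized multiplications."""
        n = M.shape[0]
        if n == 1:
            return np.array([[1.0 / M[0, 0]]])
        k = n // 2
        A, B = M[:k, :k], M[:k, k:]
        C, D = M[k:, :k], M[k:, k:]
        Ai = block_inverse(A)          # recursive inversion 1
        AiB = Ai @ B                   # multiplication 1
        CAi = C @ Ai                   # multiplication 2
        S = D - C @ AiB                # Schur complement (multiplication 3)
        Si = block_inverse(S)          # recursive inversion 2
        SiCAi = Si @ CAi               # multiplication 4
        return np.block([
            [Ai + AiB @ SiCAi, -AiB @ Si],   # multiplications 5 and 6
            [-SiCAi,           Si],
        ])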
