Bohr–Mollerup theorem

In mathematical analysis, the Bohr–Mollerup theorem is a theorem named after the Danish mathematicians Harald Bohr and Johannes Mollerup, who proved it. The theorem characterizes the gamma function, defined for x > 0 by

\[
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt
\]

as the only function  f  on the interval x > 0 that simultaneously has the three properties

  •  f (1) = 1,
  •  f (x + 1) = x f (x) for x > 0, and
  •  f  is logarithmically convex.

An elegant treatment of this theorem is in Artin's book The Gamma Function, which has been reprinted by the AMS in a collection of Artin's writings.
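The following is a minimal numerical sketch (not part of the original article) that spot-checks the three properties using Python's standard-library gamma function, which agrees with the integral above for x > 0; the midpoint test merely samples log-convexity rather than proving it:

    import math

    # Property 1: f(1) = 1
    assert abs(math.gamma(1.0) - 1.0) < 1e-12

    # Property 2: f(x + 1) = x * f(x), checked at a few sample points x > 0
    for x in (0.3, 1.7, 4.2):
        assert abs(math.gamma(x + 1) - x * math.gamma(x)) <= 1e-9 * math.gamma(x + 1)

    # Property 3: logarithmic convexity, sampled at midpoints:
    # log Gamma((a + b) / 2) <= (log Gamma(a) + log Gamma(b)) / 2
    for a, b in ((0.5, 2.5), (1.1, 6.0)):
        mid = 0.5 * (a + b)
        assert math.lgamma(mid) <= 0.5 * (math.lgamma(a) + math.lgamma(b)) + 1e-12

    print("all three properties hold at the sampled points")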

    The theorem was first published in a textbook on complex analysis, as Bohr and Mollerup thought it had already been proved.

    Statement

    Bohr–Mollerup Theorem.     Γ(x) is the only function on x > 0 that satisfies  f (x + 1) = x f (x) with log( f (x)) convex and also with  f (1) = 1.

    Proof

    Let Γ(x) be a function with the three assumed properties: Γ(x + 1) = xΓ(x), log(Γ(x)) is convex, and Γ(1) = 1. From Γ(x + 1) = xΓ(x) we can establish

    \[
    \Gamma(x+n) = (x+n-1)(x+n-2)(x+n-3)\cdots(x+1)\,x\,\Gamma(x)
    \]

    The stipulation that Γ(1) = 1 forces the property Γ(x + 1) = xΓ(x) to reproduce the factorials of the integers, so we can conclude that Γ(n) = (n − 1)! if n ∈ ℕ and if Γ(x) exists at all. Because of our relation for Γ(x + n), if we can fully understand Γ(x) for 0 < x ≤ 1 then we understand Γ(x) for all real values of x.
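    As a quick worked instance of this step, iterating the functional equation down to Γ(1) = 1 gives, for example,

    \[
    \Gamma(4) = 3\,\Gamma(3) = 3\cdot 2\,\Gamma(2) = 3\cdot 2\cdot 1\,\Gamma(1) = 3! = 6.
    \]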

    The slope S(x₁, x₂) of the line connecting the two points (x₁, log(Γ(x₁))) and (x₂, log(Γ(x₂))), with x₁ < x₂, is monotonically increasing in each argument, since we have stipulated that log(Γ(x)) is convex. Thus, we know that

    \begin{align*}
    S(n-1,\,n) \;&\le\; S(n,\,n+x) \;\le\; S(n,\,n+1), \qquad 0 < x \le 1 \\[4pt]
    \frac{\log(\Gamma(n)) - \log(\Gamma(n-1))}{n-(n-1)} \;&\le\; \frac{\log(\Gamma(n)) - \log(\Gamma(n+x))}{n-(n+x)} \;\le\; \frac{\log(\Gamma(n)) - \log(\Gamma(n+1))}{n-(n+1)} \\[4pt]
    \frac{\log((n-1)!) - \log((n-2)!)}{1} \;&\le\; \frac{\log(\Gamma(n+x)) - \log((n-1)!)}{x} \;\le\; \frac{\log(n!) - \log((n-1)!)}{1} \\[4pt]
    \log\!\left(\frac{(n-1)!}{(n-2)!}\right) \;&\le\; \frac{\log(\Gamma(n+x)) - \log((n-1)!)}{x} \;\le\; \log\!\left(\frac{n!}{(n-1)!}\right) \\[4pt]
    \log(n-1) \;&\le\; \frac{\log(\Gamma(n+x)) - \log((n-1)!)}{x} \;\le\; \log(n) \\[4pt]
    x\log(n-1) \;&\le\; \log(\Gamma(n+x)) - \log((n-1)!) \;\le\; x\log(n) \\[4pt]
    \log((n-1)^x) + \log((n-1)!) \;&\le\; \log(\Gamma(n+x)) \;\le\; \log(n^x) + \log((n-1)!) \\[4pt]
    \log\!\left((n-1)^x(n-1)!\right) \;&\le\; \log(\Gamma(n+x)) \;\le\; \log\!\left(n^x(n-1)!\right) \\[4pt]
    (n-1)^x(n-1)! \;&\le\; \Gamma(n+x) \;\le\; n^x(n-1)! \qquad \text{(log is monotonically increasing)} \\[4pt]
    (n-1)^x(n-1)! \;&\le\; (x+n-1)(x+n-2)\cdots(x+1)x\,\Gamma(x) \;\le\; n^x(n-1)! \\[4pt]
    \frac{(n-1)^x(n-1)!}{(x+n-1)(x+n-2)\cdots(x+1)x} \;&\le\; \Gamma(x) \;\le\; \frac{(n-1)^x... n^x(n-1)!}{(x+n-1)(x+n-2)\cdots(x+1)x} \\[4pt]
    \frac{(n-1)^x(n-1)!}{(x+n-1)(x+n-2)\cdots(x+1)x} \;&\le\; \Gamma(x) \;\le\; \frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right)
    \end{align*}
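    The slope ordering in the first line of this chain can be spot-checked numerically. The following is a minimal sketch (my own, not part of the original proof), using Python's math.lgamma for log Γ:

        import math

        def S(x1, x2):
            # slope of the chord of log(Gamma) between x1 and x2
            return (math.lgamma(x2) - math.lgamma(x1)) / (x2 - x1)

        n = 7
        for x in (0.25, 0.5, 1.0):
            # convexity of log(Gamma) gives S(n-1, n) <= S(n, n+x) <= S(n, n+1) for 0 < x <= 1
            assert S(n - 1, n) <= S(n, n + x) <= S(n, n + 1)
        print("slope ordering verified at n =", n)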

    The final line of the chain above is a strong statement. In particular, it is true for all values of n. That is, Γ(x) is not greater than the right-hand side for any choice of n, and likewise, Γ(x) is not less than the left-hand side for any other choice of n. Each single inequality stands alone and may be interpreted as an independent statement. Because of this fact, we are free to choose different values of n for the RHS and the LHS. In particular, if we keep n for the RHS and choose n + 1 for the LHS we get:

    \begin{align*}
    \frac{((n+1)-1)^x\,((n+1)-1)!}{(x+(n+1)-1)(x+(n+1)-2)\cdots(x+1)x} \;&\le\; \Gamma(x) \;\le\; \frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right) \\[4pt]
    \frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x} \;&\le\; \Gamma(x) \;\le\; \frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\left(\frac{n+x}{n}\right)
    \end{align*}
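    Before letting n grow, the double inequality just derived can be checked numerically for a sample x in (0, 1]. This is a rough sketch with my own choice of test values, not part of the original argument:

        import math

        def A(x, n):
            # n^x * n! / ((x + n)(x + n - 1) ... (x + 1) x), built factor by factor
            result = n ** x / x
            for k in range(1, n + 1):
                result *= k / (x + k)
            return result

        x, n = 0.5, 20
        lower = A(x, n)
        upper = A(x, n) * (n + x) / n
        assert lower <= math.gamma(x) <= upper
        print(lower, math.gamma(x), upper)   # Gamma(1/2) = sqrt(pi) sits between the bounds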

    It is evident from the last of these inequalities that a function is being sandwiched between two expressions, a common analysis technique for proving various things, such as the existence of a limit or convergence. Let n → ∞:

    \[
    \lim_{n\to\infty}\frac{n+x}{n} = 1
    \]

    so the left side of the last inequality is driven to equal the right side in the limit and

    \[
    \frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}
    \]

    is sandwiched in between. This can only mean that

    \[
    \lim_{n\to\infty}\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x} = \Gamma(x).
    \]

    In the context of this proof this means that

    \[
    \lim_{n\to\infty}\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}
    \]

    has the three specified properties of Γ(x). The proof also provides a specific expression for Γ(x). The final critical part of the proof is to remember that the limit of a sequence is unique: for any choice of 0 < x ≤ 1, only one possible number Γ(x) can exist. Therefore there is no other function with all the properties assigned to Γ(x).
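    As a brief illustration (not part of the original proof), the limit expression can be evaluated numerically and compared against Python's built-in gamma function; the helper name gauss_product is my own:

        import math

        def gauss_product(x, n):
            # n^x * n! / ((x + n)(x + n - 1) ... (x + 1) x), built factor by factor to avoid overflow
            result = n ** x / x
            for k in range(1, n + 1):
                result *= k / (x + k)
            return result

        x = 0.75
        for n in (10, 100, 1000, 10000):
            approx = gauss_product(x, n)
            print(n, approx, abs(approx - math.gamma(x)))
        # the printed error shrinks roughly like 1/n, illustrating convergence to Gamma(x)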

    The remaining loose end is the question of proving that Γ(x) makes sense for all x where

    \[
    \lim_{n\to\infty}\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}
    \]

    exists. The problem is that our first double inequality

    \[
    S(n-1,\,n) \le S(n+x,\,n) \le S(n+1,\,n)
    \]

    was constructed with the constraint 0 < x ≤ 1. If, say, x > 1 then the fact that S is monotonically increasing would make S(n + 1, n) < S(n + x, n), contradicting the inequality upon which the entire proof is constructed. But notice

    \[
    \Gamma(x+1) = \lim_{n\to\infty} x\left(\frac{n^x\,n!}{(x+n)(x+n-1)\cdots(x+1)x}\right)\frac{n}{n+x+1}
    \qquad\text{and}\qquad
    \Gamma(x) = \left(\frac{1}{x}\right)\Gamma(x+1)
    \]

    which demonstrates how to bootstrap Γ(x) to all values of x where the limit is defined.
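    A final sketch (again my own, with hypothetical helper names) of this bootstrap: evaluate the limit expression only on 0 < x ≤ 1 and extend to larger x with Γ(x + 1) = xΓ(x):

        import math

        def gauss_product(x, n=100000):
            # n^x * n! / ((x + n)(x + n - 1) ... (x + 1) x), built factor by factor
            result = n ** x / x
            for k in range(1, n + 1):
                result *= k / (x + k)
            return result

        def gamma_bootstrap(x):
            # reduce the argument into (0, 1], then rebuild with Gamma(x + 1) = x * Gamma(x)
            factor = 1.0
            while x > 1.0:
                x -= 1.0
                factor *= x
            return factor * gauss_product(x)

        for x in (0.5, 2.5, 5.25):
            print(x, gamma_bootstrap(x), math.gamma(x))   # the two columns agree closely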
