In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence { m_{n} : n = 0, 1, 2, ... }, does there exist a positive Borel measure μ on the real line such that

$$m_n = \int_{-\infty}^{\infty} x^n \, d\mu(x) \,?$$
In other words, an affirmative answer to the problem means that { m_{n} : n = 0, 1, 2, ... } is the sequence of moments of some positive Borel measure μ.
The Stieltjes moment problem, the Vorobyev moment problem, and the Hausdorff moment problem are similar but replace the real line by $[0, +\infty)$ (Stieltjes and Vorobyev; Vorobyev, however, formulates the problem in terms of matrix theory) or by a bounded interval (Hausdorff).
The Hamburger moment problem is solvable (that is, {m_{n}} is a sequence of moments) if and only if the corresponding Hankel kernel on the nonnegative integers

$$A = \begin{pmatrix} m_0 & m_1 & m_2 & \cdots \\ m_1 & m_2 & m_3 & \cdots \\ m_2 & m_3 & m_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

is positive definite, i.e.,

$$\sum_{j,k \ge 0} m_{j+k} \, c_j \bar{c}_k \ge 0$$

for an arbitrary sequence {c_{j}}_{j ≥ 0} of complex numbers with finite support (i.e. c_{j} = 0 except for finitely many values of j).
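As a quick numerical illustration (using the moments of the standard normal distribution, an assumed example; the function names are hypothetical), one can check that finite sections of the Hankel kernel built from a genuine moment sequence have only nonnegative eigenvalues:

```python
import numpy as np

def gaussian_moment(n):
    # Standard normal moments: 0 for odd n, (n - 1)!! for even n, with m_0 = 1.
    return 0.0 if n % 2 else float(np.prod(np.arange(1, n, 2)))

def hankel(moments, size):
    # size x size section of the Hankel kernel: A[j, k] = m_{j+k}.
    return np.array([[moments[j + k] for k in range(size)] for j in range(size)])

moments = [gaussian_moment(n) for n in range(12)]
for size in range(1, 7):
    smallest = np.linalg.eigvalsh(hankel(moments, size)).min()
    print(size, smallest)   # every truncation has only nonnegative eigenvalues
```

The converse direction of the theorem says that this spectral test on finite sections is not just necessary but sufficient.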
For the "only if" part of the claims simply note that
∑
j
,
k
≥
0
m
j
+
k
c
j
c
¯
k
=
∫
−
∞
∞

∑
j
≥
0
c
j
x
j

2
d
μ
(
x
)
which is nonnegative if
μ
is nonnegative.
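This identity is easy to verify numerically for a measure with finitely many atoms (the atoms, weights, and coefficient sequence below are arbitrary illustrative choices):

```python
import numpy as np

# Illustrative discrete measure: atoms at xs with weights ws.
xs = np.array([-1.0, 0.5, 2.0])
ws = np.array([0.3, 0.5, 0.2])

def moment(n):
    return float(np.sum(ws * xs ** n))

# A finitely supported complex sequence c_0, ..., c_3.
c = np.array([1.0 + 2.0j, -0.5j, 2.0, 0.3 - 1.0j])

# Left-hand side: the Hankel quadratic form in the moments.
lhs = sum(moment(j + k) * c[j] * np.conj(c[k])
          for j in range(4) for k in range(4))

# Right-hand side: integral of |sum_j c_j x^j|^2 against the measure.
poly_vals = np.array([np.sum(c * x ** np.arange(4)) for x in xs])
rhs = float(np.sum(ws * np.abs(poly_vals) ** 2))

print(abs(lhs - rhs))  # essentially zero
```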
We sketch an argument for the converse. Let Z^{+} denote the nonnegative integers and F_{0}(Z^{+}) the family of complex-valued sequences with finite support. The positive Hankel kernel A induces a (possibly degenerate) sesquilinear product on F_{0}(Z^{+}). This in turn gives a Hilbert space $(\mathcal{H}, \langle \cdot, \cdot \rangle)$ whose typical element is an equivalence class denoted by [f].
Let e_{n} be the element of F_{0}(Z^{+}) defined by e_{n}(m) = δ_{nm}. One notices that

$$\langle [e_{n+1}], [e_m] \rangle = A_{m, n+1} = m_{m+n+1} = \langle [e_n], [e_{m+1}] \rangle .$$
Therefore, the "shift" operator T on $\mathcal{H}$, with T[e_{n}] = [e_{n + 1}], is symmetric.
On the other hand, the desired expression

$$m_n = \int_{-\infty}^{\infty} x^n \, d\mu(x)$$

suggests that μ is the spectral measure of a self-adjoint operator. If we can find a "function model" in which the symmetric operator T is multiplication by x, then the spectral resolution of a self-adjoint extension of T proves the claim.
A function model is given by the natural isomorphism from F_{0}(Z^{+}) to the family of polynomials in one real variable with complex coefficients: for n ≥ 0, identify e_{n} with x^{n}. In the model, the operator T is multiplication by x, a densely defined symmetric operator. It can be shown that T always has self-adjoint extensions. Let $\bar{T}$ be one of them and let μ be its spectral measure. Then

$$\langle \bar{T}^n [1], [1] \rangle = \int x^n \, d\mu(x).$$
On the other hand,

$$\langle \bar{T}^n [1], [1] \rangle = \langle T^n [e_0], [e_0] \rangle = m_n .$$
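For a measure with finitely many atoms the whole construction can be carried out in finite dimensions. The following sketch (atoms and weights chosen arbitrarily for illustration) realizes T as the shift on coefficient vectors, builds the Gram matrix from the moments, and checks that ⟨T^n [e_0], [e_0]⟩ recovers m_n:

```python
import numpy as np

xs = np.array([-1.0, 0.5, 2.0])   # illustrative atoms
ws = np.array([0.3, 0.5, 0.2])    # and weights
N = 6
m = np.array([np.sum(ws * xs ** n) for n in range(2 * N)])
# Gram matrix of the sesquilinear product: G[j, k] = <e_j, e_k> = m_{j+k}.
G = np.array([[m[j + k] for k in range(N)] for j in range(N)])

def shift(v):
    # T on coefficient vectors: sends e_n to e_{n+1} (top coefficient truncated).
    return np.concatenate(([0.0], v[:-1]))

v = np.zeros(N)
v[0] = 1.0                        # e_0
errors = []
for n in range(N):
    errors.append(abs(v @ G[:, 0] - m[n]))  # <T^n e_0, e_0> versus m_n
    v = shift(v)
print(max(errors))  # essentially zero
```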
The solutions form a convex set, so the problem has either infinitely many solutions or a unique solution.
Consider the (n + 1) × (n + 1) Hankel matrix

$$\Delta_n = \begin{bmatrix} m_0 & m_1 & m_2 & \cdots & m_n \\ m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ m_2 & m_3 & m_4 & \cdots & m_{n+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ m_n & m_{n+1} & m_{n+2} & \cdots & m_{2n} \end{bmatrix}.$$
Positivity of A means that for each n, det(Δ_{n}) ≥ 0. If det(Δ_{n}) = 0 for some n, then $(\mathcal{H}, \langle \cdot, \cdot \rangle)$ is finite-dimensional and T is self-adjoint. So in this case the solution to the Hamburger moment problem is unique, and μ, being the spectral measure of T, has finite support.
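For instance, for a measure supported on two points (a hypothetical example), the determinants det(Δ_{n}) vanish from n = 2 on, reflecting that every Hankel matrix built from its moments has rank at most 2:

```python
import numpy as np

xs = np.array([1.0, 3.0])   # two atoms
ws = np.array([0.4, 0.6])   # weights summing to 1
m = [float(np.sum(ws * xs ** n)) for n in range(7)]

dets = []
for n in range(4):
    # (n + 1) x (n + 1) Hankel matrix Delta_n with entries m_{j+k}.
    D = np.array([[m[j + k] for k in range(n + 1)] for j in range(n + 1)])
    dets.append(np.linalg.det(D))
print(dets)  # positive, positive, then (numerically) zero
```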
More generally, the solution is unique if there are constants C and D such that |m_{n}| ≤ CD^{n}n! for all n (Reed & Simon 1975, p. 205). This follows from the more general Carleman's condition.
There are examples where the solution is not unique.
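The classical example, going back to Stieltjes, is the lognormal density: perturbing it by ε sin(2π ln x) for |ε| ≤ 1 leaves every integer moment unchanged, so all these densities solve the same moment problem. A numerical sketch (a simple Riemann sum after the substitution t = ln x; the grid parameters are arbitrary choices):

```python
import numpy as np

t = np.arange(-10.0, 14.0, 1e-3)   # integration grid for t = ln x
dt = 1e-3

def moment(n, eps):
    # n-th moment of the density lognormal(x) * (1 + eps * sin(2*pi*ln x)),
    # written as an integral in t = ln x against the standard normal weight.
    g = np.exp(n * t - t ** 2 / 2) / np.sqrt(2 * np.pi)
    return float(np.sum(g * (1 + eps * np.sin(2 * np.pi * t))) * dt)

for n in range(5):
    print(n, moment(n, 0.0), moment(n, 0.5))  # the two columns agree
```

Both columns equal e^{n²/2}, the n-th lognormal moment, even though ε = 0 and ε = 0.5 give visibly different densities.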
One can see that the Hamburger moment problem is intimately related to orthogonal polynomials on the real line. The Gram–Schmidt procedure gives a basis of orthogonal polynomials in which the operator $\bar{T}$ has a tridiagonal Jacobi matrix representation. This in turn leads to a tridiagonal model of positive Hankel kernels.
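This can be made concrete numerically. With the standard normal moments as input (an illustrative choice; the resulting orthogonal polynomials are the probabilists' Hermite polynomials), Cholesky factorization of the Hankel Gram matrix performs the Gram–Schmidt step, and the matrix of multiplication by x in the orthonormal basis comes out tridiagonal:

```python
import numpy as np

def gmom(n):
    # Standard normal moments: 0 for odd n, (n - 1)!! for even n.
    return 0.0 if n % 2 else float(np.prod(np.arange(1, n, 2)))

N = 5
G  = np.array([[gmom(j + k) for k in range(N)] for j in range(N)])      # <x^j, x^k>
H1 = np.array([[gmom(j + k + 1) for k in range(N)] for j in range(N)])  # <x*x^j, x^k>
C = np.linalg.inv(np.linalg.cholesky(G))  # rows = orthonormal polynomial coefficients
J = C @ H1 @ C.T                          # multiplication by x in the orthonormal basis
print(np.round(J, 6))  # tridiagonal Jacobi matrix
```

For the Hermite case the off-diagonal entries are √1, √2, √3, ... and the diagonal is zero, matching the three-term recurrence x h_n = √(n+1) h_{n+1} + √n h_{n−1}.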
An explicit calculation of the Cayley transform of T shows the connection with what is called the Nevanlinna class of analytic functions on the upper half plane. Passing to the noncommutative setting, this motivates Krein's formula, which parametrizes the extensions of partial isometries.
The cumulative distribution function and the probability density function can often be found by applying the inverse Laplace transform to the moment generating function

$$m(t) = \sum_{n=0}^{\infty} \frac{m_n t^n}{n!},$$

provided that this function converges.
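As a sanity check (again taking the standard normal moments as an assumed example), the series reproduces the Gaussian moment generating function e^{t²/2}:

```python
import math

def gmom(n):
    # Standard normal moments: (n - 1)!! for even n, 0 for odd n.
    return 0 if n % 2 else math.prod(range(1, n, 2))

t = 0.7
series = sum(gmom(n) * t ** n / math.factorial(n) for n in range(40))
print(series, math.exp(t * t / 2))  # the two values agree (≈ 1.2776)
```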