In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.
The Jack function $J_\kappa^{(\alpha)}(x_1, x_2, \ldots)$ of an integer partition $\kappa$, parameter $\alpha$, and indefinitely many arguments $x_1, x_2, \ldots$ can be recursively defined as follows:
For $m = 1$:
$$J_k^{(\alpha)}(x_1) = x_1^k (1+\alpha)(1+2\alpha) \cdots \bigl(1+(k-1)\alpha\bigr)$$
For $m > 1$:
$$J_\kappa^{(\alpha)}(x_1, x_2, \ldots, x_m) = \sum_\mu J_\mu^{(\alpha)}(x_1, x_2, \ldots, x_{m-1})\, x_m^{|\kappa/\mu|}\, \beta_{\kappa\mu},$$
where the summation is over all partitions $\mu$ such that the skew partition $\kappa/\mu$ is a horizontal strip, namely
$$\kappa_1 \ge \mu_1 \ge \kappa_2 \ge \mu_2 \ge \cdots \ge \kappa_{n-1} \ge \mu_{n-1} \ge \kappa_n$$
($\mu_n$ must be zero, as otherwise $J_\mu(x_1, \ldots, x_{n-1}) = 0$), and
$$\beta_{\kappa\mu} = \frac{\prod_{(i,j)\in\kappa} B_{\kappa\mu}^{\kappa}(i,j)}{\prod_{(i,j)\in\mu} B_{\kappa\mu}^{\mu}(i,j)},$$
where $B_{\kappa\mu}^{\nu}(i,j)$ equals $\nu'_j - i + \alpha(\nu_i - j + 1)$ if $\kappa'_j = \mu'_j$, and $\nu'_j - i + 1 + \alpha(\nu_i - j)$ otherwise. The expressions $\kappa'$ and $\mu'$ refer to the conjugate partitions of $\kappa$ and $\mu$, respectively. The notation $(i,j) \in \kappa$ means that the product is taken over all coordinates $(i,j)$ of boxes in the Young diagram of the partition $\kappa$.
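As a concrete illustration, the recursion above can be sketched in Python with exact rational arithmetic. This is an illustrative implementation, not part of the original text: partitions are represented as tuples of weakly decreasing positive integers, and the function and helper names (`jack`, `beta`, `horizontal_strips`) are arbitrary choices.

```python
from fractions import Fraction
from itertools import product

def conj(kappa, j):
    """kappa'_j: length of column j, i.e. the number of parts >= j."""
    return sum(1 for part in kappa if part >= j)

def boxes(kappa):
    """Coordinates (i, j) of the boxes of the Young diagram (1-indexed)."""
    return [(i + 1, j + 1) for i, p in enumerate(kappa) for j in range(p)]

def B(nu, kappa, mu, i, j, alpha):
    """B^nu_{kappa,mu}(i, j), the factor appearing in beta_{kappa,mu}."""
    nu_i = nu[i - 1] if i <= len(nu) else 0
    if conj(kappa, j) == conj(mu, j):
        return conj(nu, j) - i + alpha * (nu_i - j + 1)
    return conj(nu, j) - i + 1 + alpha * (nu_i - j)

def beta(kappa, mu, alpha):
    """beta_{kappa,mu}: product over kappa divided by product over mu."""
    num = Fraction(1)
    for (i, j) in boxes(kappa):
        num *= B(kappa, kappa, mu, i, j, alpha)
    den = Fraction(1)
    for (i, j) in boxes(mu):
        den *= B(mu, kappa, mu, i, j, alpha)
    return num / den

def horizontal_strips(kappa):
    """All mu with kappa_1 >= mu_1 >= kappa_2 >= mu_2 >= ... (zero parts dropped)."""
    n = len(kappa)
    ranges = [range(kappa[i + 1] if i + 1 < n else 0, kappa[i] + 1)
              for i in range(n)]
    for mu in product(*ranges):
        yield tuple(p for p in mu if p > 0)

def jack(kappa, alpha, xs):
    """Evaluate J_kappa^(alpha)(xs) by the recursion on the number of variables."""
    kappa = tuple(p for p in kappa if p > 0)
    if len(kappa) > len(xs):
        return Fraction(0)            # more parts than variables: zero
    if not xs:
        return Fraction(1)            # empty partition, no variables
    if len(xs) == 1:
        k = kappa[0] if kappa else 0
        val = Fraction(xs[0]) ** k
        for i in range(1, k):
            val *= 1 + i * alpha      # (1 + alpha)...(1 + (k-1) alpha)
        return val
    return sum(jack(mu, alpha, xs[:-1])
               * Fraction(xs[-1]) ** (sum(kappa) - sum(mu))
               * beta(kappa, mu, alpha)
               for mu in horizontal_strips(kappa))
```

For example, in two variables the recursion gives $J_{(1)}^{(\alpha)} = x_1 + x_2$ and $J_{(2)}^{(\alpha)} = (1+\alpha)(x_1^2+x_2^2) + 2x_1x_2$, which at $\alpha = 1$ equals twice the Schur polynomial $s_{(2)}$, in agreement with the hook-product identity stated later in this article.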
In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomial $J_\lambda^{(\alpha)}$ in $n$ variables:
$$J_\lambda^{(\alpha)} = \sum_T d_T(\alpha) \prod_{s \in T} x_{T(s)}.$$
The sum is taken over all admissible tableaux of shape $\lambda$, and
$$d_T(\alpha) = \prod_{s \in T \text{ critical}} d_\lambda^{(\alpha)}(s)$$
with
$$d_\lambda^{(\alpha)}(s) = \alpha\bigl(a_\lambda(s) + 1\bigr) + \bigl(l_\lambda(s) + 1\bigr).$$
An admissible tableau of shape $\lambda$ is a filling of the Young diagram of $\lambda$ with numbers $1, 2, \ldots, n$ such that for any box $(i,j)$ in the tableau,
$T(i,j) \neq T(i',j)$ whenever $i' > i$, and
$T(i,j) \neq T(i',j-1)$ whenever $j > 1$ and $i' < i$.
A box $s = (i,j) \in \lambda$ is critical for the tableau $T$ if $j > 1$ and $T(i,j) = T(i,j-1)$.
This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
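For small shapes the Knop–Sahi formula can be checked by brute force. The sketch below (illustrative, not from the original text; the name `knop_sahi` is an arbitrary choice) enumerates every filling of the diagram with entries $1, \ldots, n$, discards the inadmissible ones, and applies the critical-box weights; the result agrees with the recursive definition given earlier in this article.

```python
from fractions import Fraction
from itertools import product

def knop_sahi(lam, alpha, xs):
    """Evaluate J_lam^(alpha)(xs) by the Knop-Sahi tableau sum (brute force)."""
    n = len(xs)
    cells = [(i + 1, j + 1) for i, p in enumerate(lam) for j in range(p)]
    col_len = lambda j: sum(1 for p in lam if p >= j)   # lam'_j
    total = Fraction(0)
    for values in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        # Admissibility condition 1: T(i,j) != T(i',j) whenever i' > i.
        if any(T[(i, j)] == T[(i2, j2)]
               for (i, j) in cells for (i2, j2) in cells
               if j2 == j and i2 > i):
            continue
        # Admissibility condition 2: T(i,j) != T(i',j-1) whenever j>1, i' < i.
        if any(T[(i, j)] == T[(i2, j2)]
               for (i, j) in cells for (i2, j2) in cells
               if j2 == j - 1 and i2 < i):
            continue
        term = Fraction(1)
        for (i, j) in cells:
            if j > 1 and T[(i, j)] == T[(i, j - 1)]:    # critical box
                arm, leg = lam[i - 1] - j, col_len(j) - i
                term *= alpha * (arm + 1) + (leg + 1)   # d_lam^(alpha)(s)
        for (i, j) in cells:
            term *= xs[T[(i, j)] - 1]                   # monomial x_{T(s)}
        total += term
    return total
```

For shape $(2)$ in two variables this yields $(1+\alpha)(x_1^2+x_2^2) + 2x_1x_2$, matching the recursion.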
The Jack functions form an orthogonal basis in a space of symmetric polynomials, with the inner product
$$\langle f, g \rangle = \int_{[0,2\pi]^n} f\left(e^{i\theta_1}, \ldots, e^{i\theta_n}\right)\, \overline{g\left(e^{i\theta_1}, \ldots, e^{i\theta_n}\right)} \prod_{1 \le j < k \le n} \left|e^{i\theta_j} - e^{i\theta_k}\right|^{2/\alpha} d\theta_1 \cdots d\theta_n.$$
This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as
$$C_\kappa^{(\alpha)}(x_1, x_2, \ldots, x_n) = \frac{\alpha^{|\kappa|}\, |\kappa|!}{j_\kappa}\, J_\kappa^{(\alpha)}(x_1, x_2, \ldots, x_n),$$
where
$$j_\kappa = \prod_{(i,j)\in\kappa} \left(\kappa'_j - i + \alpha(\kappa_i - j + 1)\right)\left(\kappa'_j - i + 1 + \alpha(\kappa_i - j)\right).$$
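The constant $j_\kappa$ is a plain product over the boxes of $\kappa$, so it is easy to compute exactly. The following short Python sketch (an illustration, not part of the original text; `j_const` is an arbitrary name) evaluates it from the formula above; for instance $j_{(2)} = 2\alpha^2(1+\alpha)$ and $j_{(1,1)} = 2\alpha(1+\alpha)$.

```python
from fractions import Fraction

def j_const(kappa, alpha):
    """j_kappa: product over the boxes of kappa of the two hook-like factors."""
    conj = lambda j: sum(1 for p in kappa if p >= j)   # kappa'_j
    out = Fraction(1)
    for i, part in enumerate(kappa, start=1):
        for j in range(1, part + 1):
            leg = conj(j) - i        # kappa'_j - i
            arm = part - j           # kappa_i - j
            out *= (leg + alpha * (arm + 1)) * (leg + 1 + alpha * arm)
    return out
```

With this constant, the C normalization is obtained from the J normalization by the scalar factor $\alpha^{|\kappa|}|\kappa|!/j_\kappa$.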
For $\alpha = 2$, $C_\kappa^{(2)}(x_1, x_2, \ldots, x_n)$, often denoted simply $C_\kappa(x_1, x_2, \ldots, x_n)$, is known as the zonal polynomial.
The P normalization is given by the identity $J_\lambda = H'_\lambda P_\lambda$, where
$$H'_\lambda = \prod_{s \in \lambda} \left(\alpha a_\lambda(s) + l_\lambda(s) + 1\right)$$
and $a_\lambda$ and $l_\lambda$ denote the arm and leg length, respectively. Therefore, for $\alpha = 1$, $P_\lambda$ is the usual Schur function.
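The normalizing constant $H'_\lambda$ is again a product over boxes. A minimal Python sketch (illustrative, not from the original text; `h_prime` is an arbitrary name) computes it from arm and leg lengths; note that at $\alpha = 1$ each factor $\alpha a_\lambda(s) + l_\lambda(s) + 1$ is exactly the hook length of the box, so $H'_\lambda$ reduces to the hook-length product.

```python
from fractions import Fraction

def h_prime(lam, alpha):
    """H'_lam = product over boxes s of (alpha * arm(s) + leg(s) + 1)."""
    conj = lambda j: sum(1 for p in lam if p >= j)   # lam'_j
    out = Fraction(1)
    for i, part in enumerate(lam, start=1):
        for j in range(1, part + 1):
            arm = part - j           # boxes to the right of s
            leg = conj(j) - i        # boxes below s
            out *= alpha * arm + leg + 1
    return out
```

For example, $H'_{(2)} = 1 + \alpha$, which at $\alpha = 1$ gives the hook product $2$ of the shape $(2)$.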
Similar to Schur polynomials, $P_\lambda$ can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter $\alpha$.
Thus, a formula for the Jack function $P_\lambda$ is given by
$$P_\lambda = \sum_T \psi_T(\alpha) \prod_{s \in \lambda} x_{T(s)},$$
where the sum is taken over all tableaux of shape $\lambda$, and $T(s)$ denotes the entry in box $s$ of $T$.
The weight $\psi_T(\alpha)$ can be defined in the following fashion: each tableau $T$ of shape $\lambda$ can be interpreted as a sequence of partitions
$$\emptyset = \nu_1 \to \nu_2 \to \cdots \to \nu_n = \lambda,$$
where $\nu_{i+1}/\nu_i$ defines the skew shape with content $i$ in $T$. Then
$$\psi_T(\alpha) = \prod_i \psi_{\nu_{i+1}/\nu_i}(\alpha),$$
where
$$\psi_{\lambda/\mu}(\alpha) = \prod_{s \in R_{\lambda/\mu} - C_{\lambda/\mu}} \frac{\alpha a_\mu(s) + l_\mu(s) + 1}{\alpha a_\mu(s) + l_\mu(s) + \alpha} \cdot \frac{\alpha a_\lambda(s) + l_\lambda(s) + \alpha}{\alpha a_\lambda(s) + l_\lambda(s) + 1},$$
and the product is taken only over all boxes $s$ in $\lambda$ such that $s$ has a box from $\lambda/\mu$ in the same row, but not in the same column.
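The skew-strip weight $\psi_{\lambda/\mu}(\alpha)$ can be sketched directly from this definition. The Python fragment below is an illustration, not from the original text (`psi` and `arm_leg` are arbitrary names); pass `alpha` as a `Fraction` to keep the arithmetic exact.

```python
from fractions import Fraction

def arm_leg(shape, i, j):
    """(arm, leg) of box (i, j) with respect to the partition `shape`."""
    arm = (shape[i - 1] if i <= len(shape) else 0) - j
    leg = sum(1 for p in shape if p >= j) - i
    return arm, leg

def psi(lam, mu, alpha):
    """psi_{lam/mu}(alpha): product over boxes of lam whose row, but not
    column, contains a box of the skew shape lam/mu."""
    skew = {(i + 1, j + 1)
            for i, p in enumerate(lam) for j in range(p)
            if j + 1 > (mu[i] if i < len(mu) else 0)}
    skew_rows = {i for (i, j) in skew}
    skew_cols = {j for (i, j) in skew}
    out = Fraction(1)
    for i, part in enumerate(lam, start=1):
        for j in range(1, part + 1):
            if i in skew_rows and j not in skew_cols:
                am, lm = arm_leg(mu, i, j)
                al, ll = arm_leg(lam, i, j)
                out *= (alpha * am + lm + 1) / (alpha * am + lm + alpha)
                out *= (alpha * al + ll + alpha) / (alpha * al + ll + 1)
    return out
```

For example, $\psi_{(2)/(1)}(\alpha) = 2/(1+\alpha)$, while $\psi_{(2)/\emptyset}(\alpha) = 1$ because every box of $(2)$ shares a column with the skew shape; this reproduces $P_{(2)} = x_1^2 + x_2^2 + \tfrac{2}{1+\alpha} x_1 x_2$ in two variables.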
When $\alpha = 1$, the Jack function is a scalar multiple of the Schur polynomial:
$$J_\kappa^{(1)}(x_1, x_2, \ldots, x_n) = H_\kappa\, s_\kappa(x_1, x_2, \ldots, x_n),$$
where
$$H_\kappa = \prod_{(i,j)\in\kappa} h_\kappa(i,j) = \prod_{(i,j)\in\kappa} \left(\kappa_i + \kappa'_j - i - j + 1\right)$$
is the product of all hook lengths of $\kappa$.
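The hook-length product is immediate to compute from this formula. A small sketch (illustrative, not part of the original text; `hook_product` is an arbitrary name):

```python
def hook_product(kappa):
    """H_kappa: product of hook lengths h(i,j) = kappa_i + kappa'_j - i - j + 1."""
    conj = lambda j: sum(1 for p in kappa if p >= j)   # kappa'_j
    out = 1
    for i, part in enumerate(kappa, start=1):
        for j in range(1, part + 1):
            out *= part + conj(j) - i - j + 1
    return out
```

For instance, the shape $(2,1)$ has hook lengths $3, 1, 1$, so $H_{(2,1)} = 3$.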
If the partition has more parts than the number of variables, then the Jack function is 0:
$$J_\kappa^{(\alpha)}(x_1, x_2, \ldots, x_m) = 0 \quad \text{if } \kappa_{m+1} > 0.$$
In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple: if $X$ is a matrix with eigenvalues $x_1, x_2, \ldots, x_m$, then
$$J_\kappa^{(\alpha)}(X) = J_\kappa^{(\alpha)}(x_1, x_2, \ldots, x_m).$$