Historically, at the turn of the 20th century, the covariant derivative was introduced by Gregorio Ricci-Curbastro and Tullio Levi-Civita in the theory of Riemannian and pseudo-Riemannian geometry. Ricci and Levi-Civita (following ideas of Elwin Bruno Christoffel) observed that the Christoffel symbols used to define the curvature could also provide a notion of differentiation which generalized the classical directional derivative of vector fields on a manifold. This new derivative (the Levi-Civita connection) was covariant in the sense that it satisfied Riemann's requirement that objects in geometry should be independent of their description in a particular coordinate system.
It was soon noted by other mathematicians, prominent among these being Hermann Weyl, Jan Arnoldus Schouten, and Élie Cartan, that a covariant derivative could be defined abstractly without the presence of a metric. The crucial feature was not a particular dependence on the metric, but that the Christoffel symbols satisfied a certain precise second-order transformation law. This transformation law could serve as a starting point for defining the derivative in a covariant manner. Thus the theory of covariant differentiation forked off from the strictly Riemannian context to include a wider range of possible geometries.
In the 1940s, practitioners of differential geometry began introducing other notions of covariant differentiation in general vector bundles which were, in contrast to the classical bundles of interest to geometers, not part of the tensor analysis of the manifold. By and large, these generalized covariant derivatives had to be specified ad hoc by some version of the connection concept. In 1950, Jean-Louis Koszul unified these new ideas of covariant differentiation in a vector bundle by means of what is known today as a Koszul connection or a connection on a vector bundle. Using ideas from Lie algebra cohomology, Koszul successfully converted many of the analytic features of covariant differentiation into algebraic ones. In particular, Koszul connections eliminated the need for awkward manipulations of Christoffel symbols (and other analogous non-tensorial objects) in differential geometry. Thus they quickly supplanted the classical notion of covariant derivative in many post-1950 treatments of the subject.
The covariant derivative is a generalization of the directional derivative from vector calculus. As with the directional derivative, the covariant derivative is a rule, ∇_{u}v, which takes as its inputs: (1) a vector, u, defined at a point P, and (2) a vector field, v, defined in a neighborhood of P. The output is the vector (∇_{u}v)(P), also at the point P. The primary difference from the usual directional derivative is that ∇_{u}v must, in a certain precise sense, be independent of the manner in which it is expressed in a coordinate system.
A vector may be described as a list of numbers in terms of a basis, but as a geometrical object a vector retains its own identity regardless of how one chooses to describe it in a basis. This persistence of identity is reflected in the fact that when a vector is written in one basis, and then the basis is changed, the components of the vector transform according to a change of basis formula. Such a transformation law is known as a covariant transformation. The covariant derivative is required to transform, under a change in coordinates, in the same way as a basis does: the covariant derivative must change by a covariant transformation (hence the name).
In the case of Euclidean space, one tends to define the derivative of a vector field in terms of the difference between two vectors at two nearby points. In such a system one translates one of the vectors to the origin of the other, keeping it parallel. With a Cartesian (fixed orthonormal) coordinate system "keeping it parallel" amounts to keeping the components constant. Thus is obtained the simplest example: a covariant derivative which is obtained by taking the ordinary directional derivative of the components in the direction of the displacement vector between the two nearby points.
In the general case, however, one must take into account the change of the coordinate system. For example, if the same covariant derivative is written in polar coordinates in a two-dimensional Euclidean plane, then it contains extra terms that describe how the coordinate grid itself "rotates". In other cases the extra terms describe how the coordinate grid expands, contracts, twists, interweaves, etc. In this case "keeping it parallel" does not amount to keeping components constant under translation.
Consider the example of moving along a curve γ(t) in the Euclidean plane. In polar coordinates, γ may be written in terms of its radial and angular coordinates by γ(t) = (r(t), θ(t)). A vector at a particular time t (for instance, the acceleration of the curve) is expressed in terms of (e_{r}, e_{θ}), where e_{r} and e_{θ} are unit tangent vectors for the polar coordinates, serving as a basis to decompose a vector in terms of radial and tangential components. At a slightly later time, the new basis in polar coordinates appears slightly rotated with respect to the first set. The covariant derivative of the basis vectors (expressed through the Christoffel symbols) serves to express this change.
In a curved space, such as the surface of the Earth (regarded as a sphere), the translation is not well defined and its analog, parallel transport, depends on the path along which the vector is translated.
A vector e on a globe on the equator at point Q is directed to the north. Suppose we parallel transport the vector first along the equator until it reaches point P, then (keeping it parallel to itself) drag it along a meridian to the pole N, and (keeping the direction there) subsequently transport it along another meridian back to Q. Then we notice that the parallel-transported vector along a closed circuit does not return as the same vector; instead, it has another orientation. This would not happen in Euclidean space and is caused by the curvature of the surface of the globe. The same effect can be noticed if we drag the vector along an infinitesimally small closed loop subsequently along two directions and then back. The infinitesimal change of the vector is a measure of the curvature.
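The closed-circuit transport just described can be checked numerically. The sketch below (plain Python, stdlib only; the step-and-project scheme is one standard way to approximate parallel transport, not a quote from any source) transports a north-pointing vector around the octant loop Q → P → N → Q on the unit sphere by repeatedly projecting it onto the tangent plane at each new point. The vector returns rotated by approximately 90°, which equals the solid angle π/2 enclosed by the loop.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def transport_step(v, p_new):
    # Project v onto the tangent plane at p_new (normal = position vector
    # on the unit sphere) and restore its length; in the small-step limit
    # this is parallel transport for the Levi-Civita connection.
    dot = sum(a * b for a, b in zip(v, p_new))
    w = tuple(a - dot * b for a, b in zip(v, p_new))
    return normalize(w)

def transport_along(points, v):
    for p in points:
        v = transport_step(v, p)
    return v

N = 20000  # steps per quarter-circle leg

def arc(a, b):
    # Great-circle arc from a to b (a, b orthogonal unit vectors).
    return [tuple(math.cos(t) * x + math.sin(t) * y for x, y in zip(a, b))
            for t in (0.5 * math.pi * k / N for k in range(1, N + 1))]

Q, P, NP = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
loop = arc(Q, P) + arc(P, NP) + arc(NP, Q)   # equator, meridian up, meridian down

v0 = (0.0, 0.0, 1.0)            # unit vector at Q pointing north
v1 = transport_along(loop, v0)

cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(v0, v1))))
angle = math.degrees(math.acos(cosang))
print(round(angle, 2))           # ≈ 90.0: the holonomy of the octant loop
```

Shrinking the loop makes the rotation angle shrink proportionally to the enclosed area, which is exactly the "infinitesimal change of the vector measures the curvature" statement above.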
The definition of the covariant derivative does not use the metric in space. However, for each metric there is a unique torsion-free covariant derivative called the Levi-Civita connection such that the covariant derivative of the metric is zero.
The properties of a derivative imply that ∇_{v}u depends on an arbitrarily small neighborhood of a point p in the same way as e.g. the derivative of a scalar function along a curve at a given point p depends on an arbitrarily small neighborhood of p.
The information on the neighborhood of a point p in the covariant derivative can be used to define parallel transport of a vector. Also the curvature, torsion, and geodesics may be defined using only the covariant derivative or other related variations on the idea of a linear connection.
Suppose a (pseudo-)Riemannian manifold M is embedded into Euclidean space (ℝ^{n}, ⟨·,·⟩) via a twice continuously differentiable mapping Ψ : ℝ^{d} ⊃ U → ℝ^{n} such that the tangent space at Ψ(p) ∈ M is spanned by the vectors

{∂Ψ/∂x^{i}|_{p} : i ∈ {1, …, d}}

and the scalar product ⟨·,·⟩ on ℝ^{n} is compatible with the metric on M:

g_{ij} = ⟨∂Ψ/∂x^{i}, ∂Ψ/∂x^{j}⟩.

(Since the manifold metric is always assumed to be regular, the compatibility condition implies linear independence of the partial-derivative tangent vectors.)
For a tangent vector field V = v^{j} ∂Ψ/∂x^{j}, one has

∂V/∂x^{i} = (∂v^{j}/∂x^{i}) ∂Ψ/∂x^{j} + v^{j} ∂²Ψ/∂x^{i}∂x^{j}.

The last term is not tangential to M, but it can be expressed as a linear combination of the tangent space base vectors, with the Christoffel symbols as linear factors, plus a vector normal to the tangent space:
∂²Ψ/∂x^{i}∂x^{j} = Γ^{k}_{ij} ∂Ψ/∂x^{k} + n.
The covariant derivative ∇_{e_i}V, also written ∇_{i}V, is defined as the tangential part of the usual derivative:

∇_{e_i}V := ∂V/∂x^{i} − n = (∂v^{k}/∂x^{i} + v^{j} Γ^{k}_{ij}) ∂Ψ/∂x^{k}.
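As an illustration of this tangential-projection definition, consider the unit sphere embedded in ℝ³ with the standard chart Ψ(θ, φ) (our choice of example; θ the polar angle, φ the azimuth). The sketch below computes the second derivative ∂²Ψ/∂θ∂φ numerically and projects it onto the tangent basis; the coefficients recovered are the sphere's known Christoffel symbols Γ^{θ}_{θφ} = 0 and Γ^{φ}_{θφ} = cot θ.

```python
import math

def Psi(t, p):
    # Embedding of the unit sphere in R^3.
    return (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))

def partial(f, i, x, h=1e-5):
    # Central difference of a vector-valued function in coordinate i.
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return tuple((a - b) / (2 * h) for a, b in zip(f(*xp), f(*xm)))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

x = (0.8, 1.1)
e_t = partial(Psi, 0, x)   # dPsi/dtheta
e_p = partial(Psi, 1, x)   # dPsi/dphi

# Mixed second derivative d^2 Psi / dtheta dphi (nested differences):
second = partial(lambda t, p: partial(Psi, 1, (t, p)), 0, x)

# Tangential coefficients c^k solve g_kl c^k = <second, e_l>; the induced
# metric here is diagonal (g_tt = 1, g_pp = sin^2 theta), so each
# coefficient is a single ratio.
c_t = dot(second, e_t) / dot(e_t, e_t)
c_p = dot(second, e_p) / dot(e_p, e_p)

# Known Christoffel symbols of the round sphere at theta = 0.8:
print(c_t, c_p, 1 / math.tan(0.8))   # c_t ≈ 0, c_p ≈ cot(0.8)
```

The normal part n (what is discarded by the projection) is the component of `second` along the position vector Ψ(x) itself.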
In the case of the Levi-Civita connection, n is required to be orthogonal to the tangent space, so

⟨∂²Ψ/∂x^{i}∂x^{j}, ∂Ψ/∂x^{l}⟩ = Γ^{k}_{ij} ⟨∂Ψ/∂x^{k}, ∂Ψ/∂x^{l}⟩ = Γ^{k}_{ij} g_{kl}.
On the other hand,

∂g_{ab}/∂x^{c} = ⟨∂²Ψ/∂x^{c}∂x^{a}, ∂Ψ/∂x^{b}⟩ + ⟨∂Ψ/∂x^{a}, ∂²Ψ/∂x^{c}∂x^{b}⟩

implies (using the symmetry of the scalar product and swapping the order of partial differentiations)
∂g_{jk}/∂x^{i} + ∂g_{ki}/∂x^{j} − ∂g_{ij}/∂x^{k} = 2 ⟨∂²Ψ/∂x^{i}∂x^{j}, ∂Ψ/∂x^{k}⟩

and yields the Christoffel symbols for the Levi-Civita connection in terms of the metric:
g_{kl} Γ^{k}_{ij} = (1/2) (∂g_{jl}/∂x^{i} + ∂g_{li}/∂x^{j} − ∂g_{ij}/∂x^{l}).
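This metric formula can be checked numerically for the Euclidean plane in polar coordinates, where g = diag(1, r²) (our worked example; the function names are illustrative). Differentiating the metric by finite differences and applying the formula recovers Γ^{r}_{θθ} = −r and Γ^{θ}_{rθ} = 1/r.

```python
# Metric of the Euclidean plane in polar coordinates x = (r, theta):
# g_11 = 1, g_22 = r^2, off-diagonal entries zero.
def g(x):
    r, _ = x
    return [[1.0, 0.0], [0.0, r * r]]

def dg(c, a, b, x, h=1e-6):
    # Partial derivative of g_ab with respect to x^c (central difference).
    xp = list(x); xm = list(x)
    xp[c] += h; xm[c] -= h
    return (g(xp)[a][b] - g(xm)[a][b]) / (2 * h)

def christoffel(k, i, j, x):
    # g_kl Gamma^l_ij = (1/2)(d_i g_jl + d_j g_li - d_l g_ij); since this
    # metric is diagonal, inverting g is just dividing by g_kk.
    ginv = 1.0 / g(x)[k][k]
    return 0.5 * ginv * (dg(i, j, k, x) + dg(j, k, i, x) - dg(k, i, j, x))

x = (2.0, 0.7)
print(christoffel(0, 1, 1, x))  # Gamma^r_{theta theta} = -r  -> ≈ -2.0
print(christoffel(1, 0, 1, x))  # Gamma^theta_{r theta} = 1/r -> ≈ 0.5
```

The same two numbers reappear in the coordinate expression of the covariant derivative later in this article.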
For a very simple example that captures the essence of the description above, draw a circle on a flat sheet of paper. Travel around the circle at a constant speed. The derivative of your velocity, your acceleration vector, always points radially inward. Roll this sheet of paper into a cylinder. Now the (Euclidean) derivative of your velocity has a component that sometimes points inward toward the axis of the cylinder, depending on where you are on the circle. At the points where you are moving parallel to the axis, there is no inward acceleration; at the points where the velocity follows the cylinder's bend, the inward acceleration is maximal. This is the (Euclidean) normal component. The covariant derivative component is the component parallel to the cylinder's surface, and it is the same as the one before you rolled the sheet into a cylinder.
A covariant derivative is a (Koszul) connection on the tangent bundle and other tensor bundles. Thus it has a certain behavior on vector fields that extends that of the usual differential on functions. It also extends in a unique way to the duals of vector fields (i.e., covector fields), and to arbitrary tensor fields, that ensures compatibility with the tensor product and trace operations (tensor contraction).
Given a point p of the manifold, a real function f on the manifold, and a tangent vector v at p, the covariant derivative of f at p along v is the scalar at p, denoted (∇_{v}f)_{p}, that represents the principal part of the change in the value of f when the argument of f is changed by the infinitesimal displacement vector v. (This is the differential of f evaluated against the vector v.) Formally, there is a differentiable curve ϕ : [−1, 1] → M such that ϕ(0) = p and ϕ′(0) = v, and the covariant derivative of f at p is defined by

(∇_{v}f)_{p} = (f ∘ ϕ)′(0) = lim_{t→0} t^{−1}(f(ϕ(t)) − f(p)).
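For a scalar function this limit is just the ordinary directional derivative, which a short finite-difference sketch makes explicit (our own example function and point; nothing here is manifold-specific, since ℝ² serves as the chart).

```python
import math

# A scalar function on R^2 (playing the role of f on the manifold).
def f(x, y):
    return x * x * y + math.sin(y)

p = (1.0, 2.0)
v = (0.5, -1.5)

# A curve with phi(0) = p and phi'(0) = v:
def phi(t):
    return (p[0] + t * v[0], p[1] + t * v[1])

# (nabla_v f)_p = (f o phi)'(0), approximated by a central difference:
t = 1e-6
cov = (f(*phi(t)) - f(*phi(-t))) / (2 * t)

# Compare against the differential df = (2xy, x^2 + cos y) applied to v:
expected = 2 * p[0] * p[1] * v[0] + (p[0] ** 2 + math.cos(p[1])) * v[1]
print(cov, expected)   # the two values agree
```

Any curve with the same velocity v at p gives the same limit, which is why the definition only needs v as a vector at the single point p.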
When v is a vector field, the covariant derivative ∇_{v}f is the function that associates with each point p in the common domain of f and v the scalar (∇_{v}f)_{p}. This coincides with the usual Lie derivative of f along the vector field v.
A covariant derivative ∇ at a point p in a smooth manifold assigns a tangent vector (∇_{v}u)_{p} to each pair (u, v), consisting of a tangent vector v at p and a vector field u defined in a neighborhood of p, such that the following properties hold (for any vectors v, x and y at p, vector fields u and w defined in a neighborhood of p, scalar values g and h at p, and scalar function f defined in a neighborhood of p):

- (∇_{v}u)_{p} is linear in v, so (∇_{gx+hy}u)_{p} = g(∇_{x}u)_{p} + h(∇_{y}u)_{p};
- (∇_{v}u)_{p} is additive in u, so (∇_{v}(u+w))_{p} = (∇_{v}u)_{p} + (∇_{v}w)_{p};
- (∇_{v}u)_{p} obeys the product rule, i.e., (∇_{v}(fu))_{p} = f(p)(∇_{v}u)_{p} + (∇_{v}f)_{p} u_{p}, where ∇_{v}f is defined above.
If u and v are both vector fields defined over a common domain, then ∇_{v}u denotes the vector field whose value at each point p of the domain is the tangent vector (∇_{v}u)_{p}. Note that (∇_{v}u)_{p} depends not only on the value of v at p but also on the values of u in an infinitesimal neighbourhood of p, because of the last property, the product rule.
Given a field of covectors (or one-form) α defined in a neighborhood of p, its covariant derivative (∇_{v}α)_{p} is defined in a way that makes the resulting operation compatible with tensor contraction and the product rule. That is, (∇_{v}α)_{p} is defined as the unique one-form at p such that the following identity is satisfied for all vector fields u in a neighborhood of p:

(∇_{v}α)_{p}(u_{p}) = ∇_{v}(α(u))_{p} − α_{p}((∇_{v}u)_{p}).
The covariant derivative of a covector field along a vector field v is again a covector field.
Once the covariant derivative is defined for fields of vectors and covectors, it can be defined for arbitrary tensor fields by imposing the following identities for every pair of tensor fields φ and ψ in a neighborhood of the point p:

∇_{v}(φ ⊗ ψ)_{p} = (∇_{v}φ)_{p} ⊗ ψ(p) + φ(p) ⊗ (∇_{v}ψ)_{p},

and for φ and ψ of the same valence

∇_{v}(φ + ψ)_{p} = (∇_{v}φ)_{p} + (∇_{v}ψ)_{p}.
The covariant derivative of a tensor field along a vector field v is again a tensor field of the same type.
Explicitly, let T be a tensor field of type (p, q). Consider T to be a differentiable multilinear map of smooth sections α^{1}, α^{2}, …, α^{q} of the cotangent bundle T^{∗}M and of sections X_{1}, X_{2}, …, X_{p} of the tangent bundle TM, written T(α^{1}, α^{2}, …, X_{1}, X_{2}, …), into ℝ. The covariant derivative of T along Y is given by the formula
(∇_{Y}T)(α^{1}, α^{2}, …, X_{1}, X_{2}, …) = Y(T(α^{1}, α^{2}, …, X_{1}, X_{2}, …)) − T(∇_{Y}α^{1}, α^{2}, …, X_{1}, X_{2}, …) − T(α^{1}, ∇_{Y}α^{2}, …, X_{1}, X_{2}, …) − ⋯ − T(α^{1}, α^{2}, …, ∇_{Y}X_{1}, X_{2}, …) − T(α^{1}, α^{2}, …, X_{1}, ∇_{Y}X_{2}, …) − ⋯
Given coordinate functions x^{i}, i = 0, 1, 2, …, any tangent vector can be described by its components in the basis e_{i} = ∂/∂x^{i}.
The covariant derivative of a basis vector along a basis vector is again a vector and so can be expressed as a linear combination Γ^{k}e_{k}. To specify the covariant derivative it is enough to specify the covariant derivative of each basis vector field e_{j} along e_{i}:

∇_{e_i}e_{j} = Γ^{k}_{ij} e_{k};

the coefficients Γ^{k}_{ij} are called the Christoffel symbols of the second kind. Then, using the rules in the definition, we find that for general vector fields v = v^{i}e_{i} and u = u^{j}e_{j} we get
∇_{v}u = ∇_{v^{i}e_{i}}(u^{j}e_{j}) = v^{i}∇_{e_i}(u^{j}e_{j}) = v^{i}u^{j}∇_{e_i}e_{j} + v^{i}e_{j}∇_{e_i}u^{j} = v^{i}u^{j}Γ^{k}_{ij}e_{k} + v^{i}(∂u^{j}/∂x^{i})e_{j}

so

∇_{v}u = (v^{i}u^{j}Γ^{k}_{ij} + v^{i}∂u^{k}/∂x^{i}) e_{k}.

The first term in this formula is responsible for "twisting" the coordinate system with respect to the covariant derivative and the second for changes of components of the vector field u. In particular,
∇_{e_j}u = ∇_{j}u = (∂u^{i}/∂x^{j} + u^{k}Γ^{i}_{jk}) e_{i}.
In words: the covariant derivative is the usual derivative along the coordinates with correction terms which tell how the coordinates change.
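As a numeric sanity check of this component formula (our own example in polar coordinates on the plane; all names are illustrative), take the field with polar components u^{r} = r, u^{θ} = 1 and compute ∇_{θ}u two ways: by the formula with the plane's Christoffel symbols, and by differentiating the field's Cartesian components directly.

```python
import math

# Polar basis at (r, theta), written in Cartesian components:
def e_r(r, t):  return (math.cos(t), math.sin(t))
def e_th(r, t): return (-r * math.sin(t), r * math.cos(t))

# A vector field with polar components u^r = r, u^theta = 1:
def U_cart(r, t):
    er, et = e_r(r, t), e_th(r, t)
    return (r * er[0] + et[0], r * er[1] + et[1])

r, t = 2.0, 0.7

# Formula: (nabla_theta u)^i = du^i/dtheta + u^k Gamma^i_{theta k}, with
# Gamma^r_{theta theta} = -r and Gamma^theta_{theta r} = 1/r (others zero).
cov_r  = 0.0 + 1.0 * (-r)       # du^r/dtheta + u^theta * Gamma^r_{theta theta}
cov_th = 0.0 + r * (1.0 / r)    # du^theta/dtheta + u^r * Gamma^theta_{theta r}

# Direct check: the plain Cartesian derivative of the field along theta,
# which is the honest "rate of change of the vector".
h = 1e-6
dU = tuple((a - b) / (2 * h) for a, b in zip(U_cart(r, t + h), U_cart(r, t - h)))

er, et = e_r(r, t), e_th(r, t)
expected = tuple(cov_r * a + cov_th * b for a, b in zip(er, et))
print(dU, expected)   # componentwise agreement
```

The correction terms u^{k}Γ^{i}_{jk} are exactly what makes the component recipe reproduce the coordinate-free derivative.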
For covectors similarly we have

∇_{e_j}θ = (∂θ_{i}/∂x^{j} − θ_{k}Γ^{k}_{ij}) e^{∗i},

where e^{∗i}(e_{j}) = δ^{i}_{j}.
The covariant derivative of a type (r, s) tensor field along e_{c} is given by the expression:

(∇_{e_c}T)^{a_1…a_r}_{b_1…b_s} = (∂/∂x^{c}) T^{a_1…a_r}_{b_1…b_s} + Γ^{a_1}_{dc} T^{d a_2…a_r}_{b_1…b_s} + ⋯ + Γ^{a_r}_{dc} T^{a_1…a_{r−1} d}_{b_1…b_s} − Γ^{d}_{b_1 c} T^{a_1…a_r}_{d b_2…b_s} − ⋯ − Γ^{d}_{b_s c} T^{a_1…a_r}_{b_1…b_{s−1} d}.

Or, in words: take the partial derivative of the tensor and add a +Γ^{a_i}_{dc} term for every upper index a_{i}, and a −Γ^{d}_{b_i c} term for every lower index b_{i}.
If instead of a tensor one is trying to differentiate a tensor density (of weight +1), then one also adds a term

−Γ^{d}_{dc} T^{a_1…a_r}_{b_1…b_s}.

If it is a tensor density of weight W, then multiply that term by W. For example, √(−g) is a scalar density (of weight +1), so we get:

(√(−g))_{;c} = (√(−g))_{,c} − √(−g) Γ^{d}_{dc}

where the semicolon ";" indicates covariant differentiation and the comma "," indicates partial differentiation. Incidentally, this particular expression is equal to zero, because the covariant derivative of a function solely of the metric is always zero.
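The vanishing of this density derivative is easy to verify by hand in polar coordinates on the plane, where the metric is positive definite so √g = r takes the place of √(−g) (our worked case, not from the source):

```python
# Euclidean plane in polar coordinates: det g = r^2, so sqrt(g) = r.
# Density rule: (sqrt(g))_{;c} = (sqrt(g))_{,c} - sqrt(g) * Gamma^d_{dc}.
r = 2.0

sqrt_g = r
d_sqrt_g_dr = 1.0                # partial derivative of sqrt(g) = r w.r.t. r

# Trace of the Christoffel symbols for c = r:
# Gamma^r_{rr} = 0 and Gamma^theta_{theta r} = 1/r.
trace_gamma_r = 0.0 + 1.0 / r

covariant = d_sqrt_g_dr - sqrt_g * trace_gamma_r
print(covariant)   # 0.0: the covariant derivative of the density vanishes
```

The θ-component of the check is trivially zero as well, since neither √g nor the relevant Christoffel trace depends on θ.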
For a scalar field ϕ, covariant differentiation is simply partial differentiation:

ϕ_{;a} ≡ ∂_{a}ϕ

For a contravariant vector field λ^{a}, we have:

λ^{a}_{;b} ≡ ∂_{b}λ^{a} + Γ^{a}_{bc}λ^{c}

For a covariant vector field λ_{a}, we have:

λ_{a;c} ≡ ∂_{c}λ_{a} − Γ^{b}_{ca}λ_{b}

For a type (2,0) tensor field τ^{ab}, we have:

τ^{ab}_{;c} ≡ ∂_{c}τ^{ab} + Γ^{a}_{cd}τ^{db} + Γ^{b}_{cd}τ^{ad}

For a type (0,2) tensor field τ_{ab}, we have:

τ_{ab;c} ≡ ∂_{c}τ_{ab} − Γ^{d}_{ca}τ_{db} − Γ^{d}_{cb}τ_{ad}

For a type (1,1) tensor field τ^{a}_{b}, we have:

τ^{a}_{b;c} ≡ ∂_{c}τ^{a}_{b} + Γ^{a}_{cd}τ^{d}_{b} − Γ^{d}_{cb}τ^{a}_{d}
The notation above is meant in the sense

τ^{ab}_{;c} ≡ (∇_{e_c}τ)^{ab}.
One must always remember that covariant derivatives do not commute, i.e. λ_{a;bc} ≠ λ_{a;cb}. It is actually easy to show that

λ_{a;bc} − λ_{a;cb} = R^{d}_{abc} λ_{d}

where R^{d}_{abc} is the Riemann tensor. Similarly,

λ^{a}_{;bc} − λ^{a}_{;cb} = −R^{a}_{dbc} λ^{d}

and

τ^{ab}_{;cd} − τ^{ab}_{;dc} = −R^{a}_{ecd} τ^{eb} − R^{b}_{ecd} τ^{ae}.

The latter can be shown by taking (without loss of generality) τ^{ab} = λ^{a} μ^{b}.
In textbooks on physics, the covariant derivative is sometimes simply stated in terms of its components in this equation.
Often a notation is used in which the covariant derivative is given with a semicolon, while a normal partial derivative is indicated by a comma. In this notation the covariant derivative is written as:

∇_{e_j}v =_{def} v^{s}_{;j} e_{s}, with v^{i}_{;j} = v^{i}_{,j} + v^{k}Γ^{i}_{kj}.

Once again this shows that the covariant derivative of a vector field is not obtained simply by differentiating with respect to the coordinates, v^{i}_{,j}, but also depends on the vector v itself through v^{k}Γ^{i}_{kj}.
In some older texts (notably Adler, Bazin & Schiffer, Introduction to General Relativity), the covariant derivative is denoted by a double pipe:

∇_{e_j}v =_{def} v^{i}_{||j}
Since the covariant derivative ∇_{X}T of a tensor field T at a point p depends only on the value of the vector field X at p, one can define the covariant derivative along a smooth curve γ(t) in a manifold:

D_{t}T = ∇_{γ̇(t)}T.

Note that the tensor field T only needs to be defined on the curve γ(t) for this definition to make sense.
In particular, γ̇(t) is a vector field along the curve γ itself. If ∇_{γ̇(t)}γ̇(t) vanishes, then the curve is called a geodesic of the covariant derivative. If the covariant derivative is the Levi-Civita connection of a certain metric, then the geodesics for the connection are precisely those geodesics of the metric that are parametrised by arc length.
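The geodesic condition ∇_{γ̇}γ̇ = 0 becomes an ordinary differential equation in coordinates. For the flat plane in polar coordinates the geodesics must be straight lines, which the sketch below verifies by integrating the geodesic equations with a simple midpoint (RK2) scheme (our own integration setup; step size and duration are arbitrary choices):

```python
import math

# Geodesic equations of the plane in polar coordinates (r, theta):
#   r''     =  r * theta'^2            (from Gamma^r_{theta theta} = -r)
#   theta'' = -(2/r) * r' * theta'     (from Gamma^theta_{r theta} = 1/r)
def accel(r, th, rd, thd):
    return (r * thd * thd, -2.0 / r * rd * thd)

# Start at r=1, theta=0 moving with unit speed in the e_theta direction,
# i.e. Cartesian start (1, 0) with velocity (0, 1).
r, th = 1.0, 0.0
rd, thd = 0.0, 1.0
dt = 1e-4
for _ in range(20000):                 # total time 2.0
    ar, ath = accel(r, th, rd, thd)
    # midpoint (RK2) step for the first-order system (r, th, rd, thd):
    rm, thm = r + 0.5 * dt * rd, th + 0.5 * dt * thd
    rdm, thdm = rd + 0.5 * dt * ar, thd + 0.5 * dt * ath
    arm, athm = accel(rm, thm, rdm, thdm)
    r += dt * rdm; th += dt * thdm
    rd += dt * arm; thd += dt * athm

# A geodesic of the flat metric is a straight line: after time 2.0 we
# should be at Cartesian (1, 2).
x, y = r * math.cos(th), r * math.sin(th)
print(round(x, 3), round(y, 3))   # ≈ 1.0 2.0
```

On a curved manifold the same integration scheme, fed the manifold's own Christoffel symbols, traces out its geodesics (great circles on the sphere, for instance).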
The derivative along a curve is also used to define the parallel transport along the curve.
Sometimes the covariant derivative along a curve is called the absolute or intrinsic derivative.
A covariant derivative introduces an extra geometric structure on a manifold which allows vectors in neighboring tangent spaces to be compared. This extra structure is necessary because there is no canonical way to compare vectors from different vector spaces, as is necessary for this generalization of the directional derivative. There is however another generalization of directional derivatives which is canonical: the Lie derivative. The Lie derivative evaluates the change of one vector field along the flow of another vector field. Thus, one must know both vector fields in an open neighborhood. The covariant derivative on the other hand introduces its own change for vectors in a given direction, and it only depends on the vector direction at a single point, rather than a vector field in an open neighborhood of a point. In other words, the covariant derivative is linear (over C^{∞}(M)) in the direction argument, while the Lie derivative is linear in neither argument.
Note that the antisymmetrized covariant derivative ∇_{u}v − ∇_{v}u and the Lie derivative L_{u}v differ by the torsion of the connection, so that if a connection is torsion-free, then its antisymmetrization is the Lie derivative.
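For a torsion-free connection this identity reduces in components to the cancellation of the symmetric Christoffel terms, which a direct numeric check makes visible (our own example fields in polar coordinates on the plane; all function names are illustrative):

```python
import math

# Two vector fields on the plane, given by polar components (u^r, u^theta):
def u(r, t):  return (r, math.sin(t))
def w(r, t):  return (t, r * r)

# Christoffel symbols of the plane in polar coordinates, G[k][i][j]:
def Gamma(r):
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -r           # Gamma^r_{theta theta}
    G[1][0][1] = 1.0 / r      # Gamma^theta_{r theta}
    G[1][1][0] = 1.0 / r      # Gamma^theta_{theta r}
    return G

def partials(f, r, t, h=1e-6):
    # Jacobian J[k][i] = d f^k / d x^i by central differences.
    dr = [(a - b) / (2 * h) for a, b in zip(f(r + h, t), f(r - h, t))]
    dt = [(a - b) / (2 * h) for a, b in zip(f(r, t + h), f(r, t - h))]
    return [[dr[0], dt[0]], [dr[1], dt[1]]]

def nabla(v, f, r, t):
    # (nabla_v f)^k = v^i d_i f^k + v^i f^j Gamma^k_{ij}
    G, J = Gamma(r), partials(f, r, t)
    vv, ff = v(r, t), f(r, t)
    return [sum(vv[i] * J[k][i] for i in range(2))
            + sum(vv[i] * ff[j] * G[k][i][j]
                  for i in range(2) for j in range(2))
            for k in range(2)]

r, t = 2.0, 0.7
lhs = [a - b for a, b in zip(nabla(u, w, r, t), nabla(w, u, r, t))]

# Lie bracket in components: [u, w]^k = u^i d_i w^k - w^i d_i u^k
Ju, Jw = partials(u, r, t), partials(w, r, t)
uu, ww = u(r, t), w(r, t)
lie = [sum(uu[i] * Jw[k][i] - ww[i] * Ju[k][i] for i in range(2))
       for k in range(2)]
print(lhs, lie)   # the symmetric Christoffel terms cancel: the lists agree
```

Because Γ^{k}_{ij} = Γ^{k}_{ji} here, the connection terms drop out of the antisymmetrization and only the bracket of the component functions survives, exactly as the statement above asserts.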