Exterior algebra

In mathematics, the exterior algebra of a vector space (or, more generally, of a module over a commutative ring) is an associative algebra that contains the vector space (or module) and in which the square of every element of the vector space (or module) is zero. The exterior algebra is universal in the sense that every linear map from the vector space (or module) into an associative algebra with this property (the square of the image of every element is zero) factors uniquely through the exterior algebra.

The multiplication operation of the exterior algebra is called the exterior product or wedge product, and is denoted by the symbol ∧. The term "exterior" comes from the exterior product of two vectors not being a vector, while the term "wedge" comes from the shape of the multiplication symbol. The exterior algebra is also named the Grassmann algebra, after Hermann Grassmann, who introduced it as his extended algebra. The exterior product should not be confused with the outer product, which is the tensor product of vectors.

The exterior product of two vectors is sometimes called a 2-blade; it is, in particular, a bivector. More generally, the exterior product of any number k of vectors is sometimes called a k-blade. Given a vector space (or a module) V, its exterior algebra is denoted Λ(V). The vector subspace generated by the k-blades is known as the kth exterior power of V, and denoted Λk(V). The exterior algebra Λ(V) is the direct sum of the Λk(V) as modules, with the exterior product as additional structure. The exterior product is alternating and makes the exterior algebra a graded algebra.

The exterior algebra is used in geometry to study areas, volumes, and their higher-dimensional analogues. The magnitude of the exterior product of two vectors in a Euclidean vector space is the area of the parallelogram defined by those vectors, and, similarly, the magnitude of the exterior product of three vectors is the volume of the parallelepiped they define; in both cases the product also records the orientation of these quantities. The exterior algebra also carries the structure of a bialgebra, and indeed of a Hopf algebra, with a coproduct dual to the exterior product of alternating forms on the dual space of V. The exterior algebra is also used in multivariable calculus, as the differential forms of higher degree belong to the exterior algebra generated by the differential forms of degree one.

Areas in the plane

The Cartesian plane R2 is a vector space equipped with a basis consisting of a pair of unit vectors

$$e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

Suppose that

$$v = \begin{bmatrix} a \\ b \end{bmatrix} = a\,e_1 + b\,e_2, \qquad w = \begin{bmatrix} c \\ d \end{bmatrix} = c\,e_1 + d\,e_2$$

are a pair of given vectors in R2, written in components. There is a unique parallelogram having v and w as two of its sides. The area of this parallelogram is given by the standard determinant formula:

$$\text{Area} = \left|\det \begin{bmatrix} v & w \end{bmatrix}\right| = \left|\det \begin{bmatrix} a & c \\ b & d \end{bmatrix}\right| = |ad - bc|.$$

Consider now the exterior product of v and w:

$$v \wedge w = (a\,e_1 + b\,e_2) \wedge (c\,e_1 + d\,e_2) = ac\, e_1 \wedge e_1 + ad\, e_1 \wedge e_2 + bc\, e_2 \wedge e_1 + bd\, e_2 \wedge e_2 = (ad - bc)\, e_1 \wedge e_2$$

where the first step uses the distributive law for the exterior product, and the last uses the fact that the exterior product is alternating, and in particular e2 ∧ e1 = −(e1 ∧ e2). Note that the coefficient in this last expression is precisely the determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram: the absolute value of the signed area is the ordinary area, and the sign determines its orientation.
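As a quick numerical check (a minimal Python sketch; the particular numbers a, b, c, d are arbitrary), the single coefficient left on e1 ∧ e2 after the expansion equals the determinant of the matrix [v w]:

```python
import numpy as np

# Two vectors in R^2, written in the basis {e1, e2}.
a, b = 3.0, 1.0   # v = a e1 + b e2
c, d = 1.0, 2.0   # w = c e1 + d e2

# Expanding (a e1 + b e2) ^ (c e1 + d e2) and using e1^e1 = e2^e2 = 0
# and e2^e1 = -(e1^e2) leaves a single coefficient on e1 ^ e2:
wedge_coeff = a * d - b * c

# The same number is the determinant of the matrix with columns v and w.
det = np.linalg.det(np.array([[a, c], [b, d]]))

print(wedge_coeff, det)             # both 5.0
assert np.isclose(wedge_coeff, det)
```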

The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if A(v, w) denotes the signed area of the parallelogram of which the pair of vectors v and w form two adjacent sides, then A must satisfy the following properties:

  1. A(jv, kw) = jkA(v, w) for any real numbers j and k, since rescaling either of the sides rescales the area by the same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram).
  2. A(v, v) = 0, since the area of the degenerate parallelogram determined by v (i.e., a line segment) is zero.
  3. A(w, v) = −A(v, w), since interchanging the roles of v and w reverses the orientation of the parallelogram.
  4. A(v + jw, w) = A(v, w), for real j, since adding a multiple of w to v affects neither the base nor the height of the parallelogram and consequently preserves its area.
  5. A(e1, e2) = 1, since the area of the unit square is one.

With the exception of the last property, the exterior product of two vectors satisfies the same properties as the area. In a certain sense, the exterior product generalizes the final property by allowing the area of a parallelogram to be compared to that of any "standard" chosen parallelogram in a parallel plane (here, the one with sides e1 and e2). In other words, the exterior product provides a basis-independent formulation of area.
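A short sketch along the same lines (assuming the signed-area function A(v, w) = ad − bc from the determinant formula above) can check properties 1–5 numerically:

```python
import numpy as np

def A(v, w):
    """Signed area of the parallelogram with sides v and w (a 2x2 determinant)."""
    return v[0] * w[1] - v[1] * w[0]

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v, w = np.array([3.0, 1.0]), np.array([1.0, 2.0])
j, k = 2.0, -0.5

assert np.isclose(A(j * v, k * w), j * k * A(v, w))   # property 1
assert np.isclose(A(v, v), 0.0)                       # property 2
assert np.isclose(A(w, v), -A(v, w))                  # property 3
assert np.isclose(A(v + j * w, w), A(v, w))           # property 4
assert np.isclose(A(e1, e2), 1.0)                     # property 5
```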

Cross and triple products

For vectors in R3, the exterior algebra is closely related to the cross product and triple product. Using the standard basis {e1, e2, e3}, the exterior product of a pair of vectors

$$u = u_1 e_1 + u_2 e_2 + u_3 e_3$$

and

$$v = v_1 e_1 + v_2 e_2 + v_3 e_3$$

is

$$u \wedge v = (u_1 v_2 - u_2 v_1)\,(e_1 \wedge e_2) + (u_3 v_1 - u_1 v_3)\,(e_3 \wedge e_1) + (u_2 v_3 - u_3 v_2)\,(e_2 \wedge e_3)$$

where {e1 ∧ e2, e3 ∧ e1, e2 ∧ e3} is the basis for the three-dimensional space Λ2(R3). The coefficients above are the same as those in the usual definition of the cross product of vectors in three dimensions, the only difference being that the exterior product is not an ordinary vector, but instead is a 2-vector.
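For instance (a minimal numpy sketch with arbitrarily chosen u and v), the three coefficients of u ∧ v on the basis {e1 ∧ e2, e3 ∧ e1, e2 ∧ e3} are, after reordering, exactly the components returned by np.cross:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Coefficients of u ^ v on the basis {e1^e2, e3^e1, e2^e3} of Lambda^2(R^3),
# read off from the expansion in the text (indices here are 0-based).
c_12 = u[0] * v[1] - u[1] * v[0]   # coefficient of e1 ^ e2
c_31 = u[2] * v[0] - u[0] * v[2]   # coefficient of e3 ^ e1
c_23 = u[1] * v[2] - u[2] * v[1]   # coefficient of e2 ^ e3

# The cross product packs the same three numbers into an ordinary vector:
# u x v = (c_23, c_31, c_12).
assert np.allclose(np.cross(u, v), [c_23, c_31, c_12])
```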

Bringing in a third vector

$$w = w_1 e_1 + w_2 e_2 + w_3 e_3,$$

the exterior product of three vectors is

$$u \wedge v \wedge w = (u_1 v_2 w_3 + u_2 v_3 w_1 + u_3 v_1 w_2 - u_1 v_3 w_2 - u_2 v_1 w_3 - u_3 v_2 w_1)\,(e_1 \wedge e_2 \wedge e_3)$$

where e1 ∧ e2 ∧ e3 is the basis vector for the one-dimensional space Λ3(R3). The scalar coefficient is the triple product of the three vectors.

The cross product and triple product in three dimensions each admit both geometric and algebraic interpretations. The cross product u × v can be interpreted as a vector which is perpendicular to both u and v and whose magnitude is equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector consisting of the minors of the matrix with columns u and v. The triple product of u, v, and w is geometrically a (signed) volume. Algebraically, it is the determinant of the matrix with columns u, v, and w. The exterior product in three dimensions allows for similar interpretations. In fact, in the presence of a positively oriented orthonormal basis, the exterior product generalizes these notions to higher dimensions.
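A small sketch (arbitrary u, v, w) confirming that the coefficient of e1 ∧ e2 ∧ e3 equals both the determinant of the matrix with columns u, v, w and the scalar triple product u · (v × w):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 4.0])
w = np.array([5.0, 6.0, 0.0])

# Coefficient of e1 ^ e2 ^ e3 in u ^ v ^ w, expanded as in the text.
coeff = (u[0]*v[1]*w[2] + u[1]*v[2]*w[0] + u[2]*v[0]*w[1]
         - u[0]*v[2]*w[1] - u[1]*v[0]*w[2] - u[2]*v[1]*w[0])

assert np.isclose(coeff, np.linalg.det(np.column_stack([u, v, w])))  # determinant
assert np.isclose(coeff, np.dot(u, np.cross(v, w)))                  # triple product
```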

Formal definitions and algebraic properties

The exterior algebra Λ(V) over a vector space V over a field K is defined as the quotient algebra of the tensor algebra T(V) by the two-sided ideal I generated by all elements of the form x ⊗ x for x ∈ V (i.e. all tensors that can be expressed as the tensor product of a vector in V with itself). Symbolically,

$$\Lambda(V) := T(V)/I.$$

The exterior product ∧ of two elements of Λ(V) is defined by

$$\alpha \wedge \beta = \alpha \otimes \beta + I,$$

where the + I means that we take the tensor product in the usual way and then pass to the coset (or equivalence class) in the quotient with respect to the ideal I. Equivalently, any two tensors that differ only by an element of the ideal are considered to be the same element of the exterior algebra.

As T0(V) = K, T1(V) = V, and (T0(V) ⊕ T1(V)) ∩ I = {0}, the inclusions of K and V in T(V) induce injections of K and V into Λ(V). These injections are commonly considered as inclusions, and called natural embeddings, natural injections or natural inclusions.

Alternating product

The exterior product is alternating on elements of V, which means that x ∧ x = 0 for all x ∈ V, by the above construction. It follows that the product is also anticommutative on elements of V, for supposing that x, y ∈ V,

$$0 = (x + y) \wedge (x + y) = x \wedge x + x \wedge y + y \wedge x + y \wedge y = x \wedge y + y \wedge x$$

hence

$$x \wedge y = -(y \wedge x).$$

More generally, if σ is a permutation of the integers [1, ..., k], and x1, x2, ..., xk are elements of V, it follows that

$$x_{\sigma(1)} \wedge x_{\sigma(2)} \wedge \cdots \wedge x_{\sigma(k)} = \operatorname{sgn}(\sigma)\, x_1 \wedge x_2 \wedge \cdots \wedge x_k,$$

where sgn(σ) is the signature of the permutation σ.

In particular, if xi = xj for some i ≠ j, then the following generalization of the alternating property also holds:

$$x_1 \wedge x_2 \wedge \cdots \wedge x_k = 0.$$
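These identities can be checked numerically. The sketch below represents the wedge of k vectors in Rn by its coefficients on the basis k-vectors, computed as k × k minors of the component matrix (a fact explained in the "Basis and dimension" section below); the helper wedge is purely illustrative:

```python
import numpy as np
from itertools import combinations, permutations

def wedge(*vectors):
    """Wedge of k vectors in R^n, as a dict {sorted index tuple: coefficient}.

    The coefficient on e_{i1} ^ ... ^ e_{ik} (i1 < ... < ik) is the k x k minor
    of the matrix whose columns are the given vectors, taken from rows i1..ik.
    """
    M = np.column_stack(vectors)
    n, k = M.shape
    return {idx: np.linalg.det(M[list(idx), :]) for idx in combinations(range(n), k)}

x = [np.array([1.0, 2.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 3.0, 2.0]),
     np.array([4.0, 0.0, 1.0, 1.0])]

base = wedge(*x)
for perm in permutations(range(3)):
    sgn = np.linalg.det(np.eye(3)[list(perm)])          # sign of the permutation
    permuted = wedge(*[x[i] for i in perm])
    assert all(np.isclose(permuted[idx], sgn * base[idx]) for idx in base)

# A repeated factor makes the product vanish.
assert all(np.isclose(c, 0.0) for c in wedge(x[0], x[1], x[0]).values())
```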

Exterior power

The kth exterior power of V, denoted Λk(V), is the vector subspace of Λ(V) spanned by elements of the form

$$x_1 \wedge x_2 \wedge \cdots \wedge x_k, \quad x_i \in V,\ i = 1, 2, \ldots, k.$$

If α ∈ Λk(V), then α is said to be a k-vector. If, furthermore, α can be expressed as an exterior product of k elements of V, then α is said to be decomposable. Although decomposable k-vectors span Λk(V), not every element of Λk(V) is decomposable. For example, in R4, the following 2-vector is not decomposable:

$$\alpha = e_1 \wedge e_2 + e_3 \wedge e_4.$$

(This is a symplectic form, since α ∧ α ≠ 0.)
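A minimal sketch verifying by direct computation that α ∧ α ≠ 0 for α = e1 ∧ e2 + e3 ∧ e4, whereas a decomposable 2-vector wedged with itself vanishes (2-vectors are represented here as coefficient dictionaries; the helper is hypothetical):

```python
def wedge_2vectors(a, b):
    """Wedge of two 2-vectors in R^4, each given as {(i, j): coeff} with i < j.

    The result is a 4-vector, i.e. a multiple of e1^e2^e3^e4, returned as the
    coefficient on the index tuple (0, 1, 2, 3).
    """
    total = 0.0
    for (i, j), ca in a.items():
        for (k, l), cb in b.items():
            if len({i, j, k, l}) < 4:
                continue                     # repeated basis vector -> 0
            perm = (i, j, k, l)
            sign = 1                         # sign needed to sort the indices
            for p in range(4):
                for q in range(p + 1, 4):
                    if perm[p] > perm[q]:
                        sign = -sign
            total += sign * ca * cb
    return total

alpha = {(0, 1): 1.0, (2, 3): 1.0}                   # e1^e2 + e3^e4
print(wedge_2vectors(alpha, alpha))                  # 2.0, so alpha ^ alpha != 0

decomposable = {(0, 1): 1.0}                         # e1 ^ e2
print(wedge_2vectors(decomposable, decomposable))    # 0.0
```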

Basis and dimension

If the dimension of V is n and {e1, ..., en} is a basis of V, then the set

$$\{\, e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_k} \mid 1 \le i_1 < i_2 < \cdots < i_k \le n \,\}$$

is a basis for Λk(V). The reason is the following: given any exterior product of the form

$$v_1 \wedge \cdots \wedge v_k,$$

every vector vj can be written as a linear combination of the basis vectors ei; using the bilinearity of the exterior product, this can be expanded to a linear combination of exterior products of those basis vectors. Any exterior product in which the same basis vector appears more than once is zero; any exterior product in which the basis vectors do not appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general, the resulting coefficients of the basis k-vectors can be computed as the minors of the matrix that describes the vectors vj in terms of the basis ei.

By counting the basis elements, the dimension of Λk(V) is equal to a binomial coefficient:

$$\dim \Lambda^k(V) = \binom{n}{k}.$$

In particular, Λk(V) = {0} for k > n.

Any element of the exterior algebra can be written as a sum of k-vectors. Hence, as a vector space the exterior algebra is a direct sum

$$\Lambda(V) = \Lambda^0(V) \oplus \Lambda^1(V) \oplus \Lambda^2(V) \oplus \cdots \oplus \Lambda^n(V)$$

(where by convention Λ0(V) = K and Λ1(V) = V), and therefore its dimension is equal to the sum of the binomial coefficients, which is 2n.
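A two-line check (Python, with an arbitrary choice of n) that the binomial dimensions of the exterior powers add up to 2^n:

```python
from math import comb

n = 7
dims = [comb(n, k) for k in range(n + 1)]   # dim of Lambda^k(V) for k = 0..n
assert sum(dims) == 2 ** n                  # total dimension of Lambda(V)
```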

Rank of a k-vector

If α ∈ Λk(V), then it is possible to express α as a linear combination of decomposable k-vectors:

$$\alpha = \alpha^{(1)} + \alpha^{(2)} + \cdots + \alpha^{(s)}$$

where each α(i) is decomposable, say

$$\alpha^{(i)} = \alpha^{(i)}_1 \wedge \cdots \wedge \alpha^{(i)}_k, \quad i = 1, 2, \ldots, s.$$

The rank of the k-vector α is the minimal number of decomposable k-vectors in such an expansion of α. This is similar to the notion of tensor rank.

Rank is particularly important in the study of 2-vectors (Sternberg 1974, §III.6) (Bryant et al. 1991). The rank of a 2-vector α can be identified with half the rank of the matrix of coefficients of α in a basis. Thus if ei is a basis for V, then α can be expressed uniquely as

$$\alpha = \sum_{i,j} a_{ij}\, e_i \wedge e_j$$

where aij = −aji (the matrix of coefficients is skew-symmetric). The rank of the matrix aij is therefore even, and is twice the rank of the form α.

In characteristic 0, the 2-vector α has rank p if and only if

$$\underbrace{\alpha \wedge \cdots \wedge \alpha}_{p\ \text{factors}} \neq 0$$

and

$$\underbrace{\alpha \wedge \cdots \wedge \alpha}_{p+1\ \text{factors}} = 0.$$
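A small numpy sketch (for α = e1 ∧ e2 + e3 ∧ e4 inside R5, encoded by its skew-symmetric coefficient matrix) illustrating that the rank of the coefficient matrix is even and equal to twice the rank of α:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
# alpha = e1^e2 + e3^e4: set a_ij = 1/2 and a_ji = -1/2 so that
# alpha = sum_{i,j} a_ij e_i ^ e_j reproduces exactly the two terms.
A[0, 1], A[1, 0] = 0.5, -0.5
A[2, 3], A[3, 2] = 0.5, -0.5

matrix_rank = np.linalg.matrix_rank(A)
print(matrix_rank)              # 4, which is even
print(matrix_rank // 2)         # 2 = rank of alpha (two decomposable terms needed)
```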

Graded structure

The exterior product of a k-vector with a p-vector is a (k + p)-vector, once again invoking bilinearity. As a consequence, the direct sum decomposition of the preceding section

$$\Lambda(V) = \Lambda^0(V) \oplus \Lambda^1(V) \oplus \Lambda^2(V) \oplus \cdots \oplus \Lambda^n(V)$$

gives the exterior algebra the additional structure of a graded algebra, that is

$$\Lambda^k(V) \wedge \Lambda^p(V) \subseteq \Lambda^{k+p}(V).$$

Moreover, if K is the basis field, we have

$$\Lambda^0(V) = K$$

and

$$\Lambda^1(V) = V.$$

The exterior product is graded anticommutative, meaning that if α ∈ Λk(V) and β ∈ Λp(V), then

$$\alpha \wedge \beta = (-1)^{kp}\, \beta \wedge \alpha.$$
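A sketch checking graded anticommutativity numerically (multivectors are represented as dictionaries mapping sorted index tuples to coefficients; the helper wedge_dicts and the choice k = 3, p = 1 are only for illustration):

```python
import numpy as np
from itertools import combinations

def wedge_dicts(a, b):
    """Wedge of two multivectors given as {sorted index tuple: coefficient} dicts."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue                                    # repeated basis vector -> 0
            order = np.argsort(idx)
            sgn = np.linalg.det(np.eye(len(idx))[order])    # sign of the sorting permutation
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0.0) + sgn * ca * cb
    return out

rng = np.random.default_rng(0)
n, k, p = 6, 3, 1
alpha = {idx: rng.standard_normal() for idx in combinations(range(n), k)}   # a k-vector
beta = {idx: rng.standard_normal() for idx in combinations(range(n), p)}    # a p-vector

ab = wedge_dicts(alpha, beta)
ba = wedge_dicts(beta, alpha)
assert all(np.isclose(ab[idx], (-1) ** (k * p) * ba[idx]) for idx in ab)
```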

In addition to studying the graded structure on the exterior algebra, Bourbaki (1989) studies additional graded structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already carries its own gradation).

Universal property

Let V be a vector space over the field K. Informally, multiplication in Λ(V) is performed by manipulating symbols and imposing a distributive law, an associative law, and using the identity v ∧ v = 0 for v ∈ V. Formally, Λ(V) is the "most general" algebra in which these rules hold for the multiplication, in the sense that any unital associative K-algebra containing V with alternating multiplication on V must contain a homomorphic image of Λ(V). In other words, the exterior algebra has the following universal property: given any unital associative K-algebra A and any K-linear map j : V → A such that j(v)j(v) = 0 for every v in V, there exists precisely one unital algebra homomorphism f : Λ(V) → A such that j(v) = f(v) for every v in V.

To construct the most general algebra that contains V and whose multiplication is alternating on V, it is natural to start with the most general associative algebra that contains V, the tensor algebra T(V), and then enforce the alternating property by taking a suitable quotient. We thus take the two-sided ideal I in T(V) generated by all elements of the form vv for v in V, and define Λ(V) as the quotient

$$\Lambda(V) = T(V)/I$$

(and use ∧ as the symbol for multiplication in Λ(V)). It is then straightforward to show that Λ(V) contains V and satisfies the above universal property.

As a consequence of this construction, the operation of assigning to a vector space V its exterior algebra Λ(V) is a functor from the category of vector spaces to the category of algebras.

Rather than defining Λ(V) first and then identifying the exterior powers Λk(V) as certain subspaces, one may alternatively define the spaces Λk(V) first and then combine them to form the algebra Λ(V). This approach is often used in differential geometry and is described in the next section.

Generalizations

Given a commutative ring R and an R-module M, we can define the exterior algebra Λ(M) just as above, as a suitable quotient of the tensor algebra T(M). It will satisfy the analogous universal property. Many of the properties of Λ(M) also require that M be a projective module. Where finite dimensionality is used, the properties further require that M be finitely generated and projective. Generalizations to the most common situations can be found in (Bourbaki 1989).

Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of the exterior algebra of finitely generated projective modules, by the Serre–Swan theorem. More general exterior algebras can be defined for sheaves of modules.

Alternating operators

Given two vector spaces V and X, an alternating operator from Vk to X is a multilinear map

$$f : V^k \to X$$

such that whenever v1, ..., vk are linearly dependent vectors in V, then

$$f(v_1, \ldots, v_k) = 0.$$

The map

$$w : V^k \to \Lambda^k(V)$$

which associates to k vectors from V their exterior product, i.e. their corresponding k-vector, is also alternating. In fact, this map is the "most general" alternating operator defined on Vk: given any other alternating operator f : VkX, there exists a unique linear map φ : Λk(V) → X with f = φw. This universal property characterizes the space Λk(V) and can serve as its definition.

Alternating multilinear forms

The above discussion specializes to the case when X = K, the base field. In this case an alternating multilinear function

$$f : V^k \to K$$

is called an alternating multilinear form. The set of all alternating multilinear forms is a vector space, as the sum of two such maps, or the product of such a map with a scalar, is again alternating. By the universal property of the exterior power, the space of alternating forms of degree k on V is naturally isomorphic to the dual vector space (ΛkV)∗. If V is finite-dimensional, then the latter is naturally isomorphic to Λk(V∗). In particular, the dimension of the space of alternating maps from Vk to K is the binomial coefficient n choose k.

Under this identification, the exterior product takes a concrete form: it produces a new anti-symmetric map from two given ones. Suppose ω : Vk → K and η : Vm → K are two anti-symmetric maps. As in the case of tensor products of multilinear maps, the number of variables of their exterior product is the sum of the numbers of their variables. It is defined as follows:

$$\omega \wedge \eta = \frac{(k+m)!}{k!\,m!}\,\operatorname{Alt}(\omega \otimes \eta),$$

where the alternation Alt of a multilinear map is defined to be the average of the sign-adjusted values over all the permutations of its variables:

$$\operatorname{Alt}(\omega)(x_1, \ldots, x_k) = \frac{1}{k!} \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, \omega(x_{\sigma(1)}, \ldots, x_{\sigma(k)}).$$

This definition of the exterior product is well-defined even if the field K has finite characteristic, if one considers an equivalent version of the above that does not use factorials or any constants:

$$(\omega \wedge \eta)(x_1, \ldots, x_{k+m}) = \sum_{\sigma \in \mathrm{Sh}_{k,m}} \operatorname{sgn}(\sigma)\, \omega(x_{\sigma(1)}, \ldots, x_{\sigma(k)})\, \eta(x_{\sigma(k+1)}, \ldots, x_{\sigma(k+m)}),$$

where here Shk,m ⊂ Sk+m is the subset of (k, m) shuffles: permutations σ of the set {1, 2, ..., k + m} such that σ(1) < σ(2) < ... < σ(k), and σ(k + 1) < σ(k + 2) < ... < σ(k + m).
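A Python sketch comparing the two definitions (forms are represented as plain functions of vector arguments; the particular ω and η below are chosen for illustration only):

```python
from math import factorial
from itertools import combinations, permutations
import numpy as np

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge_factorial(omega, k, eta, m):
    """(k+m)!/(k! m!) Alt(omega tensor eta): the characteristic-zero definition."""
    def product(*xs):
        total = 0.0
        for p in permutations(range(k + m)):
            xp = [xs[i] for i in p]
            total += sign(p) * omega(*xp[:k]) * eta(*xp[k:])
        return total / (factorial(k) * factorial(m))
    return product

def wedge_shuffle(omega, k, eta, m):
    """The shuffle-sum definition, which avoids division and works in any characteristic."""
    def product(*xs):
        total = 0.0
        for first in combinations(range(k + m), k):
            rest = tuple(i for i in range(k + m) if i not in first)
            total += (sign(first + rest)
                      * omega(*[xs[i] for i in first]) * eta(*[xs[i] for i in rest]))
        return total
    return product

omega = lambda x, y: x[0] * y[1] - x[1] * y[0]   # an alternating 2-form on R^3
eta = lambda z: z[2]                             # a 1-form on R^3

rng = np.random.default_rng(1)
xs = [rng.standard_normal(3) for _ in range(3)]
a = wedge_factorial(omega, 2, eta, 1)(*xs)
b = wedge_shuffle(omega, 2, eta, 1)(*xs)
assert np.isclose(a, b)
print(a)   # for this omega and eta, equals det of the matrix with columns xs
```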

Bialgebra structure

In formal terms, there is a correspondence between the graded dual of the graded algebra Λ(V) and alternating multilinear forms on V. The exterior algebra (as well as the symmetric algebra) inherits a bialgebra structure, and, indeed, a Hopf algebra structure, from the tensor algebra. See the article on tensor algebras for a detailed treatment of the topic.

The exterior product of multilinear forms defined above is dual to a coproduct defined on Λ(V), giving the structure of a coalgebra. The coproduct is a linear function Δ : Λ(V) → Λ(V) ⊗ Λ(V) which is given by

$$\Delta(v) = 1 \otimes v + v \otimes 1$$

on elements vV. The symbol 1 stands for the unit element of the field K. Recall that K ⊂ Λ(V), so that the above really does lie in Λ(V) ⊗ Λ(V). This definition of the coproduct is extended to the full space Λ(V) by (linear) homomorphism. That is, for v,wV, one has, by definition, the homomorphism

$$\Delta(v \wedge w) = \Delta(v) \wedge \Delta(w)$$

The correct form of this homomorphism is not what one might naively write, but has to be the one carefully defined in the coalgebra article. In this case, one obtains

$$\Delta(v \wedge w) = 1 \otimes (v \wedge w) + v \otimes w - w \otimes v + (v \wedge w) \otimes 1.$$

Extending to the full space Λ(V), one has, in general,

$$\Delta(x_1 \wedge \cdots \wedge x_k) = \Delta(x_1) \wedge \cdots \wedge \Delta(x_k)$$

Expanding this out in detail, one obtains the following expression on decomposable elements:

$$\Delta(x_1 \wedge \cdots \wedge x_k) = \sum_{p=0}^{k} \sum_{\sigma \in \mathrm{Sh}(p+1,\,k-p)} \operatorname{sgn}(\sigma)\, (x_{\sigma(0)} \wedge \cdots \wedge x_{\sigma(p)}) \otimes (x_{\sigma(p+1)} \wedge \cdots \wedge x_{\sigma(k)}).$$

where the second summation is taken over all (p+1, k−p)-shuffles. The above is written with a notational trick, to keep track of the field element 1: the trick is to write x0 = 1, and this is shuffled into various locations during the expansion of the sum over shuffles. The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements xk is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right.

Observe that the coproduct preserves the grading of the algebra. That is, one has that

$$\Delta : \Lambda^k(V) \to \bigoplus_{p=0}^{k} \Lambda^p(V) \otimes \Lambda^{k-p}(V)$$

The tensor symbol ⊗ used in this section should be understood with some caution: it is not the same tensor symbol as the one being used in the definition of the alternating product. Intuitively, it is perhaps easiest to think it as just another, but different, tensor product: it is still (bi-)linear, as tensor products should be, but it is the product that is appropriate for the definition of a bialgebra, that is, for creating the object Λ(V) ⊗ Λ(V). Any lingering doubt can be shaken by pondering the equalities (1 ⊗ v) ∧ (1 ⊗ w) = 1 ⊗ (vw) and (v ⊗ 1) ∧ (1 ⊗ w) = vw, which follow from the definition of the coalgebra, as opposed to naive manipulations involving the tensor and wedge symbols. This distinction is developed in greater detail in the article on tensor algebras. Here, there is much less of a problem, in that the alternating product Λ clearly corresponds to multiplication in the bialgebra, leaving the symbol ⊗ free for use in the definition of the bialgebra. In practice, this presents no particular problem, as long as one avoids the fatal trap of replacing alternating sums of ⊗ by the wedge symbol, with one exception. One can construct an alternating product from ⊗, with the understanding that it works in a different space. Immediately below, an example is given: the alternating product for the dual space can be given in terms of the coproduct. The construction of the bialgebra here parallels the construction in the tensor algebra article almost exactly, except for the need to correctly track the alternating signs for the exterior algebra.

In terms of the coproduct, the exterior product on the dual space is just the graded dual of the coproduct:

$$(\alpha \wedge \beta)(x_1 \wedge \cdots \wedge x_k) = (\alpha \otimes \beta)\bigl(\Delta(x_1 \wedge \cdots \wedge x_k)\bigr)$$

where the tensor product on the right-hand side is of multilinear maps (extended by zero on elements of incompatible homogeneous degree: more precisely, α ∧ β = ε ∘ (α ⊗ β) ∘ Δ, where ε is the counit, as defined presently).

The counit is the homomorphism ε : Λ(V) → K that returns the 0-graded component of its argument. The coproduct and counit, along with the exterior product, define the structure of a bialgebra on the exterior algebra.

With an antipode defined on homogeneous elements by $S(x) = (-1)^{\binom{\deg x + 1}{2}} x$, the exterior algebra is furthermore a Hopf algebra.

Interior product

Suppose that V is finite-dimensional. If V∗ denotes the dual space to the vector space V, then for each α ∈ V∗, it is possible to define an antiderivation on the algebra Λ(V),

$$i_\alpha : \Lambda^k V \to \Lambda^{k-1} V.$$

This derivation is called the interior product with α, or sometimes the insertion operator, or contraction by α.

Suppose that w ∈ ΛkV. Then w is a multilinear mapping of V∗ to K, so it is defined by its values on the k-fold Cartesian product V∗ × V∗ × ... × V∗. If u1, u2, ..., uk−1 are k − 1 elements of V∗, then define

$$(i_\alpha w)(u_1, u_2, \ldots, u_{k-1}) = w(\alpha, u_1, u_2, \ldots, u_{k-1}).$$

Additionally, let iαf = 0 whenever f is a pure scalar (i.e., belonging to Λ0V).

Axiomatic characterization and properties

The interior product satisfies the following properties:

  1. For each k and each α ∈ V∗, iα : ΛkV → Λk−1V. (By convention, Λ−1V = {0}.)
  2. If v is an element of V (= Λ1V), then iαv = α(v) is the dual pairing between elements of V and elements of V∗.
  3. For each α ∈ V∗, iα is a graded derivation of degree −1: iα(a ∧ b) = (iαa) ∧ b + (−1)^(deg a) a ∧ (iαb).

These three properties are sufficient to characterize the interior product as well as define it in the general infinite-dimensional case.

Further properties of the interior product include:

  • iα ∘ iα = 0.
  • iα ∘ iβ = −iβ ∘ iα.
    Hodge duality

    Suppose that V has finite dimension n. Then the interior product induces a canonical isomorphism of vector spaces

$$\Lambda^k(V^*) \otimes \Lambda^n(V) \to \Lambda^{n-k}(V)$$

    by the recursive definition

$$i_{\alpha \wedge \beta} = i_\beta \circ i_\alpha.$$

    In the geometrical setting, a non-zero element of the top exterior power Λn(V) (which is a one-dimensional vector space) is sometimes called a volume form (or orientation form, although this term may sometimes lead to ambiguity). Relative to a given volume form σ, the isomorphism is given explicitly by

$$\Lambda^k(V^*) \to \Lambda^{n-k}(V) : \alpha \mapsto i_\alpha\,\sigma.$$

If, in addition to a volume form, the vector space V is equipped with an inner product identifying V with V∗, then the resulting isomorphism is called the Hodge dual (or more commonly the Hodge star operator)

$$\ast : \Lambda^k(V) \to \Lambda^{n-k}(V).$$

The composition of ∗ with itself maps Λk(V) → Λk(V) and is always a scalar multiple of the identity map. In most applications, the volume form is compatible with the inner product in the sense that it is an exterior product of an orthonormal basis of V. In this case,

$$\ast \circ \ast : \Lambda^k(V) \to \Lambda^k(V) = (-1)^{k(n-k)+q}\,\mathrm{id}$$

where id is the identity mapping, and the inner product has metric signature (p, q), with p pluses and q minuses.
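A small sketch of the Hodge star on basis k-vectors (Euclidean signature with an orthonormal basis, so q = 0, and e1 ∧ ⋯ ∧ en as volume form; the helper name is hypothetical), including the check that applying the star twice gives (−1)^(k(n−k)) times the identity:

```python
import numpy as np
from itertools import combinations

def hodge_star_basis(idx, n):
    """Hodge star of the basis k-vector e_{i1}^...^e_{ik} in Euclidean R^n.

    Returns (complementary index tuple, sign) such that
    star(e_idx) = sign * e_complement, with e_1^...^e_n as the volume form.
    """
    comp = tuple(i for i in range(n) if i not in idx)
    perm = idx + comp
    sign = int(round(np.linalg.det(np.eye(n)[list(perm)])))   # sgn of (idx, comp)
    return comp, sign

n, k = 4, 2
for idx in combinations(range(n), k):
    comp, s1 = hodge_star_basis(idx, n)
    back, s2 = hodge_star_basis(comp, n)
    assert back == idx
    assert s1 * s2 == (-1) ** (k * (n - k))   # the star applied twice
print(hodge_star_basis((0, 1), 4))            # ((2, 3), 1): star(e1^e2) = e3^e4
```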

    Inner product

For V a finite-dimensional space, an inner product on V defines an isomorphism of V with V∗, and so also an isomorphism of ΛkV with (ΛkV)∗. The pairing between these two spaces also takes the form of an inner product. On decomposable k-vectors,

$$\langle v_1 \wedge \cdots \wedge v_k,\; w_1 \wedge \cdots \wedge w_k \rangle = \det\bigl(\langle v_i, w_j \rangle\bigr),$$

    the determinant of the matrix of inner products. In the special case vi = wi, the inner product is the square norm of the k-vector, given by the determinant of the Gramian matrix (⟨vi, vj⟩). This is then extended bilinearly (or sesquilinearly in the complex case) to a non-degenerate inner product on ΛkV. If ei, i = 1, 2, ..., n, form an orthonormal basis of V, then the vectors of the form

$$e_{i_1} \wedge \cdots \wedge e_{i_k}, \quad i_1 < \cdots < i_k,$$

    constitute an orthonormal basis for Λk(V).
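A numpy sketch (random vectors in R4) checking that the Gram-determinant inner product of two decomposable 2-vectors agrees with the Euclidean dot product of their coefficient vectors on the orthonormal basis {ei ∧ ej}, as guaranteed by the Cauchy–Binet formula:

```python
import numpy as np
from itertools import combinations

def wedge_coeffs(vectors):
    """Coefficients of v1 ^ ... ^ vk on the basis {e_{i1}^...^e_{ik} : i1 < ... < ik}."""
    M = np.column_stack(vectors)
    n, k = M.shape
    return np.array([np.linalg.det(M[list(idx), :]) for idx in combinations(range(n), k)])

rng = np.random.default_rng(2)
v = [rng.standard_normal(4) for _ in range(2)]
w = [rng.standard_normal(4) for _ in range(2)]

gram = np.linalg.det(np.array([[np.dot(vi, wj) for wj in w] for vi in v]))
coords = np.dot(wedge_coeffs(v), wedge_coeffs(w))
assert np.isclose(gram, coords)
```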

    With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically, for v ∈ Λk−1(V), w ∈ Λk(V), and xV,

$$\langle x \wedge v, w \rangle = \langle v, i_{x^\flat} w \rangle$$

where x♭ ∈ V∗ is the linear functional defined by

$$x^\flat(y) = \langle x, y \rangle$$

    for all yV. This property completely characterizes the inner product on the exterior algebra.

Indeed, more generally for v ∈ Λk−l(V), w ∈ Λk(V), and x ∈ Λl(V), iteration of the above adjoint properties gives

$$\langle x \wedge v, w \rangle = \langle v, i_{x^\flat} w \rangle$$

where now x♭ ∈ Λl(V∗) ≃ (Λl(V))∗ is the dual l-vector defined by

$$x^\flat(y) = \langle x, y \rangle$$

for all y ∈ Λl(V).

    Functoriality

    Suppose that V and W are a pair of vector spaces and f : VW is a linear transformation. Then, by the universal construction, there exists a unique homomorphism of graded algebras

$$\Lambda(f) : \Lambda(V) \to \Lambda(W)$$

    such that

$$\Lambda(f)\big|_{\Lambda^1(V)} = f : V = \Lambda^1(V) \to W = \Lambda^1(W).$$

    In particular, Λ(f) preserves homogeneous degree. The k-graded components of Λ(f) are given on decomposable elements by

$$\Lambda(f)(x_1 \wedge \cdots \wedge x_k) = f(x_1) \wedge \cdots \wedge f(x_k).$$

    Let

$$\Lambda^k(f) = \Lambda(f)\big|_{\Lambda^k(V)} : \Lambda^k(V) \to \Lambda^k(W).$$

The components of the transformation Λk(f) relative to bases of V and W are the matrix of k × k minors of f. In particular, if V = W and V is of finite dimension n, then Λn(f) is a mapping of the one-dimensional vector space Λn(V) to itself, and is therefore given by a scalar: the determinant of f.
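A numpy sketch (a random matrix on R4; the helper name is hypothetical) building the matrix of Λk(f) out of k × k minors, and checking that Λn(f) is multiplication by det f and that Λk is functorial, i.e. Λk(fg) = Λk(f)Λk(g):

```python
import numpy as np
from itertools import combinations

def exterior_power_matrix(F, k):
    """Matrix of Lambda^k(f) on the basis {e_I : I = (i1 < ... < ik)}; entries are k x k minors."""
    n = F.shape[0]
    idxs = list(combinations(range(n), k))
    return np.array([[np.linalg.det(F[np.ix_(I, J)]) for J in idxs] for I in idxs])

rng = np.random.default_rng(3)
n, k = 4, 2
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))

# Top exterior power: a 1x1 matrix whose single entry is det(F).
assert np.isclose(exterior_power_matrix(F, n)[0, 0], np.linalg.det(F))

# Functoriality: Lambda^k(FG) = Lambda^k(F) Lambda^k(G)  (Cauchy-Binet).
assert np.allclose(exterior_power_matrix(F @ G, k),
                   exterior_power_matrix(F, k) @ exterior_power_matrix(G, k))
```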

    Exactness

    If

$$0 \to U \to V \to W \to 0$$

    is a short exact sequence of vector spaces, then

$$0 \to \Lambda^1(U) \wedge \Lambda(V) \to \Lambda(V) \to \Lambda(W) \to 0$$

    is an exact sequence of graded vector spaces as is

$$0 \to \Lambda(U) \to \Lambda(V).$$

    Direct sums

    In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras:

$$\Lambda(V \oplus W) \cong \Lambda(V) \otimes \Lambda(W).$$

    This is a graded isomorphism; i.e.,

$$\Lambda^k(V \oplus W) \cong \bigoplus_{p+q=k} \Lambda^p(V) \otimes \Lambda^q(W).$$
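A quick dimension count confirming this graded isomorphism (Python, with arbitrary small dimensions); it amounts to Vandermonde's identity for binomial coefficients:

```python
from math import comb

dim_V, dim_W = 3, 4
for k in range(dim_V + dim_W + 1):
    lhs = comb(dim_V + dim_W, k)                                    # dim of Lambda^k(V + W)
    rhs = sum(comb(dim_V, p) * comb(dim_W, k - p) for p in range(k + 1))
    assert lhs == rhs                                               # Vandermonde's identity
```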

    Slightly more generally, if

$$0 \to U \to V \to W \to 0$$

    is a short exact sequence of vector spaces then Λk(V) has a filtration

$$0 = F^0 \subseteq F^1 \subseteq \cdots \subseteq F^k \subseteq F^{k+1} = \Lambda^k(V)$$

with quotients Fp+1/Fp ≅ Λk−p(U) ⊗ Λp(W). In particular, if U is 1-dimensional then

$$0 \to U \otimes \Lambda^{k-1}(W) \to \Lambda^k(V) \to \Lambda^k(W) \to 0$$

    is exact, and if W is 1-dimensional then

$$0 \to \Lambda^k(U) \to \Lambda^k(V) \to \Lambda^{k-1}(U) \otimes W \to 0$$

    is exact.

    Alternating tensor algebra

If K is a field of characteristic 0, then the exterior algebra of a vector space V can be canonically identified with the vector subspace of T(V) consisting of antisymmetric tensors. Recall that the exterior algebra is the quotient of T(V) by the ideal I generated by elements of the form x ⊗ x.

    Let Tr(V) be the space of homogeneous tensors of degree r. This is spanned by decomposable tensors

$$v_1 \otimes v_2 \otimes \cdots \otimes v_r, \quad v_i \in V.$$

    The antisymmetrization (or sometimes the skew-symmetrization) of a decomposable tensor is defined by

$$\operatorname{Alt}(v_1 \otimes \cdots \otimes v_r) = \frac{1}{r!} \sum_{\sigma \in S_r} \operatorname{sgn}(\sigma)\, v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(r)}$$

    where the sum is taken over the symmetric group of permutations on the symbols {1, ..., r}. This extends by linearity and homogeneity to an operation, also denoted by Alt, on the full tensor algebra T(V). The image Alt(T(V)) is the alternating tensor algebra, denoted A(V). This is a vector subspace of T(V), and it inherits the structure of a graded vector space from that on T(V). It carries an associative graded product ^ defined by

$$t \mathbin{\widehat{\wedge}} s = \operatorname{Alt}(t \otimes s).$$

    Although this product differs from the tensor product, the kernel of Alt is precisely the ideal I (again, assuming that K has characteristic 0), and there is a canonical isomorphism

$$A(V) \cong \Lambda(V).$$
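A numpy sketch of the antisymmetrization operator on component arrays and of the product t ∧̂ s = Alt(t ⊗ s), checking that Alt is a projection and that the product of two vectors is the expected antisymmetric 2-tensor (helper names are illustrative):

```python
from math import factorial
from itertools import permutations
import numpy as np

def alt(T):
    """Antisymmetrization Alt of a degree-r tensor given by its component array."""
    r = T.ndim
    out = np.zeros(T.shape)
    for p in permutations(range(r)):
        sgn = int(round(np.linalg.det(np.eye(r)[list(p)])))   # sign of the permutation p
        out += sgn * np.transpose(T, p)
    return out / factorial(r)

def hat_wedge(t, s):
    """The product on alternating tensors: t ^ s = Alt(t tensor s)."""
    return alt(np.multiply.outer(t, s))

rng = np.random.default_rng(4)
u, v = rng.standard_normal(3), rng.standard_normal(3)

uv = hat_wedge(u, v)
assert np.allclose(uv, (np.outer(u, v) - np.outer(v, u)) / 2)   # antisymmetric part

T = rng.standard_normal((3, 3, 3))
assert np.allclose(alt(alt(T)), alt(T))                         # Alt is a projection
```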

    Index notation

Suppose that V has finite dimension n, and that a basis e1, ..., en of V is given. Then any alternating tensor t ∈ Ar(V) ⊂ Tr(V) can be written in index notation as

$$t = t^{i_1 i_2 \cdots i_r}\, e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_r},$$

    where ti1⋅⋅⋅ir is completely antisymmetric in its indices.

    The exterior product of two alternating tensors t and s of ranks r and p is given by

$$t \mathbin{\widehat{\wedge}} s = \frac{1}{(r+p)!} \sum_{\sigma \in S_{r+p}} \operatorname{sgn}(\sigma)\, t^{i_{\sigma(1)} \cdots i_{\sigma(r)}}\, s^{i_{\sigma(r+1)} \cdots i_{\sigma(r+p)}}\, e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_{r+p}}.$$

The components of this tensor are precisely the skew part of the components of the tensor product t ⊗ s, denoted by square brackets on the indices:

$$(t \mathbin{\widehat{\wedge}} s)^{i_1 \cdots i_{r+p}} = t^{[i_1 \cdots i_r} s^{i_{r+1} \cdots i_{r+p}]}.$$

The interior product may also be described in index notation as follows. Let t = ti0i1⋯ir−1 be an antisymmetric tensor of rank r. Then, for α ∈ V∗, iαt is an alternating tensor of rank r − 1, given by

$$(i_\alpha t)^{i_1 \cdots i_{r-1}} = r \sum_{j=1}^{n} \alpha_j\, t^{j i_1 \cdots i_{r-1}},$$

    where n is the dimension of V.
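A short numpy sketch of this contraction (an alternating rank-2 tensor t and a covector α with arbitrary components; the factor r = 2 follows the convention of the formula above):

```python
import numpy as np

n, r = 4, 2
rng = np.random.default_rng(5)

A = rng.standard_normal((n, n))
t = (A - A.T) / 2                    # components of an alternating rank-2 tensor
alpha = rng.standard_normal(n)       # components of a covector

i_alpha_t = r * np.einsum('j,ji->i', alpha, t)   # (i_alpha t)^i = r * alpha_j t^{j i}
print(i_alpha_t)
```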

    Linear algebra

    In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the determinant and the minors of a matrix. For instance, it is well known that the magnitude of the determinant of a square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix. This suggests that the determinant can be defined in terms of the exterior product of the column vectors. Likewise, the k × k minors of a matrix can be defined by looking at the exterior products of column vectors chosen k at a time. These ideas can be extended not just to matrices but to linear transformations as well: the magnitude of the determinant of a linear transformation is the factor by which it scales the volume of any given reference parallelotope. So the determinant of a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the transformation.

    Technical details: Definitions

Let V be an n-dimensional vector space over a field K with basis {e1, …, en}.

  • For A ∈ End(V), define ΛkA ∈ End(ΛkV) on simple tensors by ΛkA(v1 ∧ ⋯ ∧ vk) = Av1 ∧ ⋯ ∧ Avk and expand the definition linearly to all tensors. More generally, we can define ΛpAk ∈ End(ΛpV) (p ≥ k) on simple tensors by choosing k components on which A would act and summing up all results obtained from different choices, i.e. ΛpAk(v1 ∧ ⋯ ∧ vp) = Σ i1<⋯<ik v1 ∧ ⋯ ∧ Avi1 ∧ ⋯ ∧ Avik ∧ ⋯ ∧ vp. If p < k, define ΛpAk = 0. Since ΛnV is 1-dimensional with basis e1 ∧ ⋯ ∧ en, we can identify ΛnAk with the unique number κ ∈ K satisfying ΛnAk(e1 ∧ ⋯ ∧ en) = κ e1 ∧ ⋯ ∧ en.
  • For φ ∈ End(ΛpV), define the exterior transpose φT ∈ End(Λn−pV) to be the unique operator satisfying φT(ω) ∧ ξ = ω ∧ φ(ξ) for all ω ∈ Λn−pV and ξ ∈ ΛpV.
  • For A ∈ End(V), define det A = ΛnAn, Tr(A) = ΛnA1, and adj A = (Λn−1An−1)T. These definitions are equivalent to the usual ones.
    Basic Properties

    All results obtained from other definitions of the determinant, trace and adjoint can be obtained from this definition (since these definitions are equivalent). Here are some basic properties related to these new definitions:

  • (⋅)T is K-linear.
  • (AB)T = BTAT.
  • We have a canonical isomorphism ψ : End(ΛkV) → End(Λn−kV), A ↦ AT. (However, there is no canonical isomorphism between ΛkV and Λn−kV.)
  • Tr(ΛkA) = ΛnAk. The entries of the transposed matrix of ΛkA are k × k minors of A.
  • For all k ≤ n − 1, p ≤ k, and A ∈ End(V): Σq=0..p (Λn−kAp−q)T(ΛkAq) = (ΛnAp) Id ∈ End(ΛkV). In particular,
  • and hence
  • (Λn−1Ap)T = Σq=0..p (ΛnAp−q)(−A)q = Σq=0..p Tr(Λp−qA)(−A)q. In particular,
  • Tr(Λk adj A) = Λn(adj A)k = (det A)k−1 ΛnAn−k = (det A)k−1 Tr(Λn−kA).
  • Tr((Λn−1Ap)T) = (n − p) ΛnAp = (n − p) Tr(ΛpA).
  • The characteristic polynomial chA(t) of A ∈ End(V) can be given by chA(t) = det(A − t Id) = Σk=0..n (ΛnAk)(−t)n−k.
  • Similarly,

    Leverrier's Algorithm

ΛnAk is the coefficient of the (−t)n−k term in the characteristic polynomial of A. These quantities also appear in the expressions for (Λn−1Ap)T and Λn(adj A)k. Leverrier's Algorithm is an economical way of computing ΛnAk and Λn−1Ak.
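A sketch of one common form of this recursion, the Faddeev–LeVerrier algorithm (the normalization below, producing the coefficients of det(tI − A) from highest degree down, is an assumption of this sketch), checked against numpy's characteristic polynomial:

```python
import numpy as np

def leverrier(A):
    """Faddeev-LeVerrier recursion.

    Returns the coefficients [c_n, c_{n-1}, ..., c_0] of the characteristic
    polynomial det(t*I - A) = c_n t^n + c_{n-1} t^(n-1) + ... + c_0, with c_n = 1.
    The quantity Lambda^n A^k (the sum of the principal k x k minors of A)
    is then (-1)^k * c_{n-k}.
    """
    n = A.shape[0]
    coeffs = [1.0]
    M = np.zeros_like(A, dtype=float)
    c = 1.0
    for k in range(1, n + 1):
        M = A @ M + c * np.eye(n)
        c = -np.trace(A @ M) / k
        coeffs.append(c)
    return coeffs

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
n = A.shape[0]

assert np.allclose(leverrier(A), np.poly(A))   # same characteristic polynomial
# Lambda^n A^n = det(A):
assert np.isclose((-1) ** n * leverrier(A)[-1], np.linalg.det(A))
```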

    Physics

    In physics, many quantities are naturally represented by alternating operators. For example, if the motion of a charged particle is described by velocity and acceleration vectors in four-dimensional spacetime, then normalization of the velocity vector requires that the electromagnetic force must be an alternating operator on the velocity. Its six degrees of freedom are identified with the electric and magnetic fields.

    Linear geometry

    The decomposable k-vectors have geometric interpretations: the bivector uv represents the plane spanned by the vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides u and v. Analogously, the 3-vector uvw represents the spanned 3-space weighted by the volume of the oriented parallelepiped with edges u, v, and w.

    Projective geometry

Decomposable k-vectors in ΛkV correspond to weighted k-dimensional linear subspaces of V. In particular, the Grassmannian of k-dimensional subspaces of V, denoted Grk(V), can be naturally identified with an algebraic subvariety of the projective space P(ΛkV). This is called the Plücker embedding.
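A small sketch (a random 2-plane in R4 spanned by vectors a and b) computing the Plücker coordinates pij of the corresponding decomposable 2-vector and verifying the quadratic Plücker relation that cuts out Gr2(R4) inside P(Λ2R4):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
a, b = rng.standard_normal(4), rng.standard_normal(4)

# Pluecker coordinates: p_ij = coefficient of e_i ^ e_j in a ^ b (i < j).
p = {(i, j): a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(4), 2)}

# Decomposable 2-vectors satisfy the quadratic relation
#   p_12 p_34 - p_13 p_24 + p_14 p_23 = 0   (written here with 0-based indices).
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
assert np.isclose(rel, 0.0)
```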

    Differential geometry

    The exterior algebra has notable applications in differential geometry, where it is used to define differential forms. A differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the point. Equivalently, a differential form of degree k is a linear functional on the k-th exterior power of the tangent space. As a consequence, the exterior product of multilinear forms defines a natural exterior product for differential forms. Differential forms play a major role in diverse areas of differential geometry.

    In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a differential algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior derivative, is a cochain complex whose cohomology is called the de Rham cohomology of the underlying manifold and plays a vital role in the algebraic topology of differentiable manifolds.
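As a minimal illustration in coordinates (a sketch using only sympy.diff; the particular 1-form ω = P dx + Q dy is a made-up example), the exterior derivative of a 1-form in the plane is the 2-form (∂Q/∂x − ∂P/∂y) dx ∧ dy:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A 1-form  omega = P dx + Q dy  on the plane, stored by its coefficient pair.
P = x**2 * y
Q = sp.sin(x) + y

# Its exterior derivative is the 2-form  d(omega) = (dQ/dx - dP/dy) dx ^ dy.
d_omega_coeff = sp.diff(Q, x) - sp.diff(P, y)
print(d_omega_coeff)    # cos(x) - x**2, the coefficient on dx ^ dy
```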

    Representation theory

    In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible representations of the general linear group; see fundamental representation.

    Superspace

    The exterior algebra over the complex numbers is the archetypal example of a superalgebra, which plays a fundamental role in physical theories pertaining to fermions and supersymmetry. A single element of the exterior algebra is called a supernumber or Grassmann number. The exterior algebra itself is then just a one-dimensional superspace: it is just the set of all of the points in the exterior algebra. The topology on this space is essentially the weak topology, the open sets being the cylinder sets. An n-dimensional superspace is just the n-fold product of exterior algebras.

    Lie algebra homology

    Let L be a Lie algebra over a field K, then it is possible to define the structure of a chain complex on the exterior algebra of L. This is a K-linear mapping

$$\partial : \Lambda^{p+1} L \to \Lambda^p L$$

    defined on decomposable elements by

$$\partial(x_1 \wedge \cdots \wedge x_{p+1}) = \frac{1}{p+1} \sum_{j < \ell} (-1)^{j+\ell+1} [x_j, x_\ell] \wedge x_1 \wedge \cdots \wedge \hat{x}_j \wedge \cdots \wedge \hat{x}_\ell \wedge \cdots \wedge x_{p+1}.$$

    The Jacobi identity holds if and only if ∂∂ = 0, and so this is a necessary and sufficient condition for an anticommutative nonassociative algebra L to be a Lie algebra. Moreover, in that case ΛL is a chain complex with boundary operator ∂. The homology associated to this complex is the Lie algebra homology.

    Homological algebra

    The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in homological algebra.

    History

    The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of Ausdehnungslehre, or Theory of Extension. This referred more generally to an algebraic (or axiomatic) theory of extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also published similar ideas of exterior calculus for which he claimed priority over Grassmann.

    The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's theory of multivectors. It was thus a calculus, much like the propositional calculus, except focused exclusively on the task of formal reasoning in geometrical terms. In particular, this new development allowed for an axiomatic characterization of dimension, a property that had previously only been examined from the coordinate point of view.

    The import of this new theory of vectors and multivectors was lost to mid 19th century mathematicians, until being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms.

    A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his universal algebra. This then paved the way for the 20th century developments of abstract algebra by placing the axiomatic notion of an algebraic system on a firm logical footing.
