Tensor algebra

In mathematics, the tensor algebra of a vector space V, denoted T(V) or $T^{\bullet}(V)$, is the algebra of tensors on V (of any rank) with multiplication being the tensor product. It is the free algebra on V, in the sense of being left adjoint to the forgetful functor from algebras to vector spaces: it is the "most general" algebra containing V, in the sense of the corresponding universal property (see below).

The tensor algebra is important because many other algebras arise as quotient algebras of T(V). These include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.

The tensor algebra also has two coalgebra structures; one simple one, which does not make it a bialgebra, but does lead to the concept of a cofree coalgebra, and a more complicated one, which yields a bialgebra, and can be extended by giving an antipode to create a Hopf algebra structure.

Note: In this article, all algebras are assumed to be unital and associative. The unit is explicitly required to define the coproduct.

Construction

Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times:

$T^k V = V^{\otimes k} = V \otimes V \otimes \cdots \otimes V.$

That is, $T^k V$ consists of all tensors on V of order k. By convention $T^0 V$ is the ground field K (as a one-dimensional vector space over itself).

We then construct T(V) as the direct sum of $T^k V$ for k = 0, 1, 2, …

$T(V) = \bigoplus_{k=0}^{\infty} T^k V = K \oplus V \oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \cdots$

The multiplication in T(V) is determined by the canonical isomorphism

$T^k V \otimes T^{\ell} V \to T^{k+\ell} V$

given by the tensor product, which is then extended by linearity to all of T(V). This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with $T^k V$ serving as the grade-k subspace. This grading can be extended to a $\mathbb{Z}$-grading by appending subspaces $T^k V = \{0\}$ for negative integers k.
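
For illustration only, the grading and the multiplication can be modelled concretely over a chosen basis: a pure tensor corresponds to a word in the basis labels, a general element to a finite linear combination of words, and the product to concatenation extended bilinearly. The following Python sketch is a toy model of this description with invented names; it is not part of the formal construction.

```python
from collections import defaultdict

# Toy model: an element of T(V) over a chosen basis is a dict {word: coefficient},
# where a word is a tuple of basis labels and the empty word () stands for 1 in T^0(V) = K.

def tensor_product(x, y):
    """The multiplication of T(V): concatenation of words, extended bilinearly."""
    out = defaultdict(float)
    for word_x, coeff_x in x.items():
        for word_y, coeff_y in y.items():
            out[word_x + word_y] += coeff_x * coeff_y
    return dict(out)

def grade(word):
    """A word of length k lies in the grade-k subspace T^k(V)."""
    return len(word)

# (v + 2w) tensor u is an element of T^2(V):
x = {('v',): 1.0, ('w',): 2.0}
y = {('u',): 1.0}
print(tensor_product(x, y))   # {('v', 'u'): 1.0, ('w', 'u'): 2.0}
print(grade(('v', 'u')))      # 2
```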

The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. (It does not work for ordinary R-modules because the iterated tensor products cannot be formed.)

Adjunction and universal property

The tensor algebra T(V) is also called the free algebra on the vector space V, and is functorial. As with other free constructions, the functor T is left adjoint to a forgetful functor: in this case, the functor that sends each K-algebra to its underlying vector space.

Explicitly, the tensor algebra satisfies the following universal property, which formally expresses the statement that it is the most general algebra containing V:

Any linear transformation f : V → A from V to an algebra A over K can be uniquely extended to an algebra homomorphism $\bar{f} : T(V) \to A$ satisfying $\bar{f} \circ i = f$ (this is usually expressed by a commutative triangle).

Here i is the canonical inclusion of V into T(V) (the unit of the adjunction). One can, in fact, define the tensor algebra T(V) as the unique algebra satisfying this property (specifically, it is unique up to a unique isomorphism), but one must still prove that an object satisfying this property exists.

The above universal property shows that the construction of the tensor algebra is functorial in nature. That is, T is a functor from K-Vect, the category of vector spaces over K, to K-Alg, the category of K-algebras. The functoriality of T means that any linear map from V to W extends uniquely to an algebra homomorphism from T(V) to T(W).
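
As a concrete (and purely illustrative) instance of the universal property, one can pick an algebra A, say 2×2 real matrices, assign a matrix f(v) to each basis vector v, and extend to all of T(V) by sending a word to the product of the images of its letters, with the empty word going to the identity of A. The names below are invented for the sketch.

```python
def mat_mul(a, b):
    """Product of 2x2 matrices given as nested tuples; stands in for the product of A."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

IDENTITY = ((1.0, 0.0), (0.0, 1.0))

# A linear map f : V -> A, specified on basis vectors (arbitrary example values).
f = {'v': ((0.0, 1.0), (0.0, 0.0)),
     'w': ((0.0, 0.0), (1.0, 0.0))}

def induced_hom(word):
    """The unique algebra homomorphism T(V) -> A extending f, evaluated on a pure tensor."""
    result = IDENTITY                 # the empty word maps to the unit of A
    for letter in word:
        result = mat_mul(result, f[letter])
    return result

print(induced_hom(('v', 'w')))        # f(v) f(w) = ((1.0, 0.0), (0.0, 0.0))
```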

Non-commutative polynomials

If V has finite dimension n, another way of looking at the tensor algebra is as the "algebra of polynomials over K in n non-commuting variables". If we take basis vectors for V, those become non-commuting variables (or indeterminates) in T(V), subject to no constraints beyond associativity, the distributive law and K-linearity.

Note that the algebra of polynomials on V is not $T(V)$, but rather $T(V^*)$: a (homogeneous) linear function on V is an element of the dual space $V^*$; for example coordinates $x^1, \dots, x^n$ on a vector space are covectors, as they take in a vector and give out a scalar (the given coordinate of the vector).
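
This point of view can be tried out directly in a computer algebra system. For example, SymPy (used here only as an illustration, not something the article depends on) supports non-commutative symbols, and expanding a product keeps xy and yx as distinct monomials, exactly as they are distinct in T(V):

```python
import sympy

# Non-commuting indeterminates play the role of basis vectors of V.
x, y = sympy.symbols('x y', commutative=False)

# In T(V) the square of (x + y) expands into four distinct monomials,
# since x*y and y*x are not identified.
print(sympy.expand((x + y)**2))       # x**2 + x*y + y*x + y**2
```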

Quotients

Because of the generality of the tensor algebra, many other algebras of interest can be constructed by starting with the tensor algebra and then imposing certain relations on the generators, i.e. by constructing certain quotient algebras of T(V). Examples of this are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
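
In the toy word model used above, imposing relations amounts to choosing a normal form for each word. The sketch below (illustrative only, over a fixed basis with totally ordered labels) realizes two of these quotients: sorting the letters of every word gives the quotient onto the symmetric algebra, while sorting with the sign of the permutation and discarding words with repeated letters gives the quotient onto the exterior algebra.

```python
from collections import defaultdict

def to_symmetric(element):
    """Quotient map onto Sym(V) in the word model: letters commute, so sort each word."""
    out = defaultdict(float)
    for word, coeff in element.items():
        out[tuple(sorted(word))] += coeff
    return dict(out)

def to_exterior(element):
    """Quotient map onto the exterior algebra: sort with a sign; v ^ v = 0 kills repeats."""
    out = defaultdict(float)
    for word, coeff in element.items():
        letters, sign = list(word), 1
        for i in range(len(letters)):            # bubble sort, tracking the sign
            for j in range(len(letters) - 1 - i):
                if letters[j] > letters[j + 1]:
                    letters[j], letters[j + 1] = letters[j + 1], letters[j]
                    sign = -sign
        if len(set(letters)) == len(letters):
            out[tuple(letters)] += sign * coeff
    return dict(out)

x = {('w', 'v'): 1.0, ('v', 'v'): 3.0}
print(to_symmetric(x))    # {('v', 'w'): 1.0, ('v', 'v'): 3.0}
print(to_exterior(x))     # {('v', 'w'): -1.0}
```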

Coalgebra

The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, which can further be extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra, further down.

The development provided below can be equally well applied to the exterior algebra, using the wedge symbol $\wedge$ in place of the tensor symbol $\otimes$; a sign must also be kept track of when permuting elements of the exterior algebra. This correspondence also lasts through the definition of the bialgebra, and on to the definition of a Hopf algebra. That is, the exterior algebra can also be given a Hopf algebra structure.

Similarly, the symmetric algebra can also be given the structure of a Hopf algebra, in exactly the same fashion, by replacing everywhere the tensor product $\otimes$ by the symmetrized tensor product $\otimes_{\mathrm{Sym}}$, i.e. that product where $v \otimes_{\mathrm{Sym}} w = w \otimes_{\mathrm{Sym}} v$.

In each case, this is possible because the alternating product $\wedge$ and the symmetric product $\otimes_{\mathrm{Sym}}$ obey the required consistency conditions for the definition of a bialgebra and Hopf algebra; this can be explicitly checked in the manner below. Whenever one has a product obeying these consistency conditions, the construction goes through; insofar as such a product gives rise to a quotient space, the quotient space inherits the Hopf algebra structure.

In the language of category theory, one says that there is a functor T from the category of K-vector spaces to the category of associative K-algebras. But there is also a functor Λ taking vector spaces to the category of exterior algebras, and a functor Sym taking vector spaces to symmetric algebras. There is a natural map from T to each of these. Verifying that quotienting preserves the Hopf algebra structure is the same as verifying that the maps are indeed natural.

Coproduct

The coalgebra is obtained by defining a coproduct or diagonal operator

$\Delta : TV \to TV \boxtimes TV$

Here, $TV$ is used as a short-hand for $T(V)$ to avoid an explosion of parentheses. The symbol $\boxtimes$ is used to denote the "external" tensor product, needed for the definition of a coalgebra. It is being used to distinguish it from the "internal" tensor product $\otimes$, which is already "taken" and being used to denote multiplication in the tensor algebra (see the section Multiplication, below, for further clarification on this issue). In order to avoid confusion between these two symbols, most texts will replace $\boxtimes$ by a plain dot, or even drop it altogether, with the understanding that it is implied from context. This then allows the $\otimes$ symbol to be used in place of the $\boxtimes$ symbol. This is not done below, and the two symbols are used independently and explicitly, so as to show the proper location of each. The result is a bit more verbose, but should be easier to comprehend.

The definition of the operator $\Delta$ is most easily built up in stages, first by defining it for elements $v \in V \subset TV$ and then by homomorphically extending it to the whole algebra. A suitable choice for the coproduct is then

$\Delta : v \mapsto v \boxtimes 1 + 1 \boxtimes v$

and

$\Delta : 1 \mapsto 1 \boxtimes 1$

where $1 \in K = T^0 V \subset TV$ is the unit of the field K. By linearity, one obviously has

$\Delta(k) = k (1 \boxtimes 1) = k \boxtimes 1 = 1 \boxtimes k$

for all $k \in K$. It is straightforward to verify that this definition satisfies the axioms of a coalgebra: that is, that

$(\mathrm{id}_{TV} \boxtimes \Delta) \circ \Delta = (\Delta \boxtimes \mathrm{id}_{TV}) \circ \Delta$

where $\mathrm{id}_{TV} : x \mapsto x$ is the identity map on $TV$. Indeed, one gets

$((\mathrm{id}_{TV} \boxtimes \Delta) \circ \Delta)(v) = v \boxtimes 1 \boxtimes 1 + 1 \boxtimes v \boxtimes 1 + 1 \boxtimes 1 \boxtimes v$

and likewise for the other side. At this point, one could invoke a lemma, and say that $\Delta$ extends trivially, by linearity, to all of $TV$, because $TV$ is a free object and V is a generator of the free algebra, and $\Delta$ is a homomorphism. However, it is insightful to provide explicit expressions. So, for $v \otimes w \in T^2 V$, one has (by definition) the homomorphism

$\Delta : v \otimes w \mapsto \Delta(v) \, \Delta(w)$

Expanding, one has

$\Delta(v \otimes w) = (v \boxtimes 1 + 1 \boxtimes v)(w \boxtimes 1 + 1 \boxtimes w) = (v \otimes w) \boxtimes 1 + v \boxtimes w + w \boxtimes v + 1 \boxtimes (v \otimes w)$

In the above expansion, there is no need to ever write $1 \otimes v$ as this is just plain old scalar multiplication in the algebra; that is, one trivially has that $1 \otimes v = 1 \cdot v = v$.

The extension above preserves the algebra grading. That is,

$\Delta : T^2 V \to \bigoplus_{k=0}^{2} T^k V \boxtimes T^{(2-k)} V$

Continuing in this fashion, one can obtain an explicit expression for the coproduct acting on a homogeneous element of order m:

$\Delta(v_1 \otimes \cdots \otimes v_m) = \Delta(v_1) \cdots \Delta(v_m) = \sum_{p=0}^{m} (v_0 \otimes \cdots \otimes v_p) \;\text{ш}\; (v_{p+1} \otimes \cdots \otimes v_m) = \sum_{p=0}^{m} \sum_{\sigma \in \mathrm{Sh}(p,\, m-p+1)} (v_{\sigma(0)} \otimes \cdots \otimes v_{\sigma(p)}) \boxtimes (v_{\sigma(p+1)} \otimes \cdots \otimes v_{\sigma(m)})$

where ш, the sha, denotes the shuffle product. This is expressed in the second summation, which is taken over all (p, m−p+1)-shuffles. The above is written with a notational trick, to keep track of the field element 1: the trick is to write $v_0 = 1$, and this is shuffled into various locations during the expansion of the sum over shuffles. The shuffle follows directly from the first axiom of a coalgebra: the relative order of the elements $v_k$ is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right. Any one given shuffle obeys

$\sigma(0) < \cdots < \sigma(p) \quad\text{and}\quad \sigma(p+1) < \cdots < \sigma(m)$

As before, the algebra grading is preserved:

$\Delta : T^m V \to \bigoplus_{k=0}^{m} T^k V \boxtimes T^{(m-k)} V$
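
Equivalently, on a word the coproduct splits the set of positions into a left part and a right part in every possible order-preserving way. The following sketch (a toy model with invented names; a pair of words stands for the external tensor product) computes this "unshuffle" on pure tensors:

```python
from itertools import combinations
from collections import defaultdict

def coproduct(word):
    """Δ on a pure tensor: sum over all order-preserving splittings of the
    positions of the word into a left word and a right word."""
    out = defaultdict(int)
    positions = range(len(word))
    for p in range(len(word) + 1):
        for left in combinations(positions, p):
            right = tuple(i for i in positions if i not in left)
            out[(tuple(word[i] for i in left),
                 tuple(word[i] for i in right))] += 1
    return dict(out)

# Δ(v ⊗ w) = (v ⊗ w) ⊠ 1 + v ⊠ w + w ⊠ v + 1 ⊠ (v ⊗ w)
print(coproduct(('v', 'w')))
```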

Counit

The counit $\epsilon$ is the terminal object of the algebra, and is given by the projection of the field component out from the algebra. This can be written as $\epsilon : v \mapsto 0$ for $v \in V$ and $\epsilon : k \mapsto k$ for $k \in K = T^0 V$. By homomorphism under the tensor product $\otimes$, this extends to

$\epsilon : x \mapsto 0$

for all $x \in T^1 V \oplus T^2 V \oplus \cdots$. It is a straightforward matter to verify that this counit satisfies the needed axiom for the coalgebra:

$(\mathrm{id} \boxtimes \epsilon) \circ \Delta = \mathrm{id} = (\epsilon \boxtimes \mathrm{id}) \circ \Delta.$

Working this explicitly, one has

$((\mathrm{id} \boxtimes \epsilon) \circ \Delta)(x) = (\mathrm{id} \boxtimes \epsilon)(1 \boxtimes x + x \boxtimes 1) = 1 \boxtimes \epsilon(x) + x \boxtimes \epsilon(1) = 0 + x \boxtimes 1 \cong x$

where, for the last step, one has made use of the isomorphism $TV \boxtimes K \cong TV$, as is appropriate for the defining axiom of the counit.
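
In the same toy word model, the counit simply reads off the coefficient of the empty word. A minimal check of the counit axiom on a single basis vector, with Δ(v) entered by hand as a dictionary of external pairs (illustrative names only):

```python
def counit(element):
    """ε: project onto the field component, i.e. the coefficient of the empty word."""
    return element.get((), 0)

# Δ(v) = v ⊠ 1 + 1 ⊠ v, stored as {(left word, right word): coefficient}.
delta_v = {(('v',), ()): 1, ((), ('v',)): 1}

# Apply (id ⊠ ε) and then the isomorphism TV ⊠ K ≅ TV:
result = {}
for (left, right), coeff in delta_v.items():
    scalar = coeff * counit({right: 1})       # ε acts on the right factor
    if scalar:
        result[left] = result.get(left, 0) + scalar
print(result)                                 # {('v',): 1}, i.e. v, as the axiom requires
```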

Bialgebra

A bialgebra defines both multiplication and comultiplication, and requires them to be compatible.

Multiplication

Multiplication is given by an operator

$\nabla : TV \boxtimes TV \to TV$

which, in this case, was already given as the "internal" tensor product. That is,

$\nabla : x \boxtimes y \mapsto x \otimes y$

That is, $\nabla(x \boxtimes y) = x \otimes y$. The above should make it clear why the $\boxtimes$ symbol needs to be used: the $\otimes$ is actually one and the same thing as $\nabla$, and notational sloppiness here would lead to utter chaos. To strengthen this: the tensor product $\otimes$ of the tensor algebra corresponds to the multiplication $\nabla$ used in the definition of an algebra, whereas the tensor product $\boxtimes$ is the one required in the definition of the comultiplication of a (bi)algebra. These two tensor products are not the same thing!
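
In the toy word model this distinction is easy to keep straight: an external pair (x, y) is a key of a dictionary, while the internal product concatenates the two words. A short sketch:

```python
def nabla(pair_element):
    """Multiplication: send each external pair x ⊠ y to the concatenated word x ⊗ y."""
    out = {}
    for (left, right), coeff in pair_element.items():
        word = left + right
        out[word] = out.get(word, 0) + coeff
    return out

print(nabla({(('v',), ('w',)): 1}))           # {('v', 'w'): 1}, i.e. v ⊗ w
```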

Unit

The unit is the initial object for the algebra, and is exactly as expected:

$\eta : K \to TV$

is just the embedding, so that

$\eta : k \mapsto k$

That the unit is compatible with the tensor product $\otimes$ is "trivial": it is just part of the standard definition of the tensor product of vector spaces. That is, $k \otimes x = kx$ for field element k and any $x \in TV$. More verbosely, the axioms for an associative algebra require the two homomorphisms (or commuting diagrams):

$\nabla \circ (\eta \boxtimes \mathrm{id}_{TV}) = \eta \otimes \mathrm{id}_{TV} = \eta \cdot \mathrm{id}_{TV}$

on $K \boxtimes TV$, and that symmetrically, on $TV \boxtimes K$, that

$\nabla \circ (\mathrm{id}_{TV} \boxtimes \eta) = \mathrm{id}_{TV} \otimes \eta = \mathrm{id}_{TV} \cdot \eta$

where the right-hand side of these equations should be understood as the scalar product.
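
In the toy word model the unit map just produces a multiple of the empty word, and the unit axioms reduce to scalar multiplication of coefficients; a minimal sketch with invented names:

```python
def unit(k):
    """η : K -> T(V); the scalar k becomes k times the empty word."""
    return {(): k}

def scalar_multiply(k, element):
    """k ⊗ x is plain scalar multiplication: multiply every coefficient by k."""
    return {word: k * coeff for word, coeff in element.items()}

x = {('v', 'w'): 1.0}
print(unit(2.0))                              # {(): 2.0}
print(scalar_multiply(2.0, x))                # {('v', 'w'): 2.0}
```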

Compatibility

The unit and counit, and multiplication and comultiplication, all have to satisfy compatibility conditions. It is straightforward to see that

$\epsilon \circ \eta = \mathrm{id}_K.$

Similarly, the unit is compatible with comultiplication:

$\Delta \circ \eta = \eta \boxtimes \eta$

The above requires the use of the isomorphism $K \cong K \boxtimes K$ in order to work; without this, one loses linearity. Component-wise,

$(\Delta \circ \eta)(k) = \Delta(k) = k (1 \boxtimes 1) = k \boxtimes 1 = 1 \boxtimes k$

with the right-hand side making use of the isomorphism.

Multiplication and the counit are compatible:

$(\epsilon \circ \nabla)(x \boxtimes y) = \epsilon(x \otimes y) = 0$

whenever x or y are not elements of K, and otherwise, one has scalar multiplication on the field: $k_1 \otimes k_2 = k_1 k_2$. The most difficult to verify is the compatibility of multiplication and comultiplication:

$\Delta \circ \nabla = (\nabla \boxtimes \nabla) \circ (\mathrm{id} \boxtimes \tau \boxtimes \mathrm{id}) \circ (\Delta \boxtimes \Delta)$

where $\tau(x \boxtimes y) = y \boxtimes x$ exchanges elements. The compatibility condition only needs to be verified on $V \subset TV$; the full compatibility follows as a homomorphic extension to all of $TV$. The verification is verbose but straightforward; it is not given here, except for the final result:

$(\Delta \circ \nabla)(v \boxtimes w) = \Delta(v \otimes w)$

For $v, w \in V$, an explicit expression for this was given in the coalgebra section, above.
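
For readers who want to experiment, the compatibility can also be observed numerically in the toy word model: the left-hand side is the unshuffle of the concatenated word, while the right-hand side unshuffles each factor, swaps the middle factors, and concatenates. The sketch below (invented names, a spot check rather than a proof) compares the two on pure tensors:

```python
from itertools import combinations
from collections import defaultdict

def coproduct(word):
    """Unshuffle coproduct of a pure tensor, as a dict {(left word, right word): coeff}."""
    out = defaultdict(int)
    positions = range(len(word))
    for p in range(len(word) + 1):
        for left in combinations(positions, p):
            right = tuple(i for i in positions if i not in left)
            out[(tuple(word[i] for i in left),
                 tuple(word[i] for i in right))] += 1
    return dict(out)

def lhs(x, y):
    """(Δ ∘ ∇)(x ⊠ y): the coproduct of the concatenated word."""
    return coproduct(x + y)

def rhs(x, y):
    """(∇ ⊠ ∇) ∘ (id ⊠ τ ⊠ id) ∘ (Δ ⊠ Δ) applied to x ⊠ y."""
    out = defaultdict(int)
    for (x1, x2), cx in coproduct(x).items():
        for (y1, y2), cy in coproduct(y).items():
            out[(x1 + y1, x2 + y2)] += cx * cy   # τ swaps the middle factors, ∇ ⊠ ∇ concatenates
    return dict(out)

print(lhs(('v',), ('w',)) == rhs(('v',), ('w',)))            # True
print(lhs(('v', 'w'), ('u',)) == rhs(('v', 'w'), ('u',)))    # True
```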

Hopf algebra

The Hopf algebra adds an antipode to the bialgebra axioms. The antipode S acts as the identity on the field component $k \in K = T^0 V$:

$S(k) = k$

On $v \in V = T^1 V$ the antipode is minus the identity map (sometimes called the "anti-identity"):

$S(v) = -v$

and on $v \otimes w \in T^2 V$ by

$S(v \otimes w) = S(w) \otimes S(v) = w \otimes v$

This extends homomorphically to

$S(v_1 \otimes \cdots \otimes v_m) = S(v_m) \otimes \cdots \otimes S(v_1) = (-1)^m \, v_m \otimes \cdots \otimes v_1$
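
In the toy word model, the antipode therefore reverses each word and multiplies by (−1) to the power of its length; a minimal sketch:

```python
def antipode(element):
    """S in the word model: reverse each word and attach the sign (-1)^length."""
    out = {}
    for word, coeff in element.items():
        reversed_word = tuple(reversed(word))
        out[reversed_word] = out.get(reversed_word, 0) + ((-1) ** len(word)) * coeff
    return out

print(antipode({('v', 'w', 'u'): 1.0}))       # {('u', 'w', 'v'): -1.0}
```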

Compatibility

Compatibility of the antipode with multiplication and comultiplication requires that

$\nabla \circ (S \boxtimes \mathrm{id}) \circ \Delta = \eta \circ \epsilon = \nabla \circ (\mathrm{id} \boxtimes S) \circ \Delta$

This is straightforward to verify componentwise on $k \in K$:

$((\nabla \circ (S \boxtimes \mathrm{id})) \circ \Delta)(k) = (\nabla \circ (S \boxtimes \mathrm{id}))(k \boxtimes 1) = \nabla(k \boxtimes 1) = k \otimes 1 = k$

Similarly, on $v \in V$:

$((\nabla \circ (S \boxtimes \mathrm{id})) \circ \Delta)(v) = (\nabla \circ (S \boxtimes \mathrm{id}))(v \boxtimes 1 + 1 \boxtimes v) = \nabla(-v \boxtimes 1 + 1 \boxtimes v) = -v \otimes 1 + 1 \otimes v = -v + v = 0$

Recall that

$(\eta \circ \epsilon)(k) = \eta(k) = k$

and that

$(\eta \circ \epsilon)(x) = \eta(0) = 0$

for any $x \in TV$ that is not in $K$.

One may proceed in a similar manner, by homomorphism, verifying that the antipode inserts the appropriate cancellative signs in the shuffle, starting with the compatibility condition on T 2 V and proceeding by induction.
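
This cancellation can also be watched directly in the toy word model: applying S to the left factor of each term of the unshuffle coproduct and then concatenating produces terms that cancel in pairs whenever the word is non-empty, reproducing $\eta \circ \epsilon$ on pure tensors. A sketch of this check with invented names:

```python
from itertools import combinations
from collections import defaultdict

def coproduct(word):
    """Unshuffle coproduct of a pure tensor, as a dict {(left word, right word): coeff}."""
    out = defaultdict(int)
    positions = range(len(word))
    for p in range(len(word) + 1):
        for left in combinations(positions, p):
            right = tuple(i for i in positions if i not in left)
            out[(tuple(word[i] for i in left),
                 tuple(word[i] for i in right))] += 1
    return dict(out)

def convolution_with_antipode(word):
    """(∇ ∘ (S ⊠ id) ∘ Δ) applied to a pure tensor."""
    out = defaultdict(int)
    for (left, right), coeff in coproduct(word).items():
        sign = (-1) ** len(left)                 # S reverses the left factor and signs it
        out[tuple(reversed(left)) + right] += sign * coeff
    return {w: c for w, c in out.items() if c}

print(convolution_with_antipode(()))             # {(): 1}, matching (η ∘ ε)(1) = 1
print(convolution_with_antipode(('v', 'w')))     # {}, i.e. 0, matching (η ∘ ε) = 0 on T^2(V)
```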

Cofree coalgebra

One may define a different coproduct on the tensor algebra, simpler than the one given above. It is given by

$\Delta(v_1 \otimes \cdots \otimes v_k) := \sum_{j=0}^{k} (v_0 \otimes \cdots \otimes v_j) \boxtimes (v_{j+1} \otimes \cdots \otimes v_{k+1})$

Here, as before, one uses the notational trick $v_0 = v_{k+1} = 1 \in K$ (recalling that $v \otimes 1 = v$ trivially).

This coproduct gives rise to a coalgebra. It describes a coalgebra that is dual to the algebra structure on $T(V^*)$, where $V^*$ denotes the dual vector space of linear maps $V \to K$. In the same way that the tensor algebra is a free algebra, the corresponding coalgebra is termed (conilpotent) co-free. With the usual product this is not a bialgebra. It can be turned into a bialgebra with the product $v_i \cdot v_j = (i, j) \, v_{i+j}$ where $(i, j)$ denotes the binomial coefficient $\binom{i+j}{i}$. This bialgebra is known as the divided power Hopf algebra.

The difference between this and the other coalgebra is most easily seen in the $T^2 V$ term. Here, one has that

$\Delta(v \otimes w) = 1 \boxtimes (v \otimes w) + v \boxtimes w + (v \otimes w) \boxtimes 1$

for $v, w \in V$, which is clearly missing a shuffled term, as compared to before.
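
In the toy word model, this simpler coproduct just cuts a word at every position, without interleaving the two halves, so the missing shuffled term is visible directly. A short sketch:

```python
from collections import defaultdict

def deconcatenation_coproduct(word):
    """The simpler coproduct: cut the word at every position into (left, right)."""
    out = defaultdict(int)
    for j in range(len(word) + 1):
        out[(word[:j], word[j:])] += 1
    return dict(out)

# On v ⊗ w this yields 1 ⊠ (v ⊗ w) + v ⊠ w + (v ⊗ w) ⊠ 1;
# the term w ⊠ v produced by the shuffle coproduct is absent.
print(deconcatenation_coproduct(('v', 'w')))
```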
