The light front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where
Contents
- Discrete light-cone quantization
- Supersymmetric discrete light-cone quantization
- Transverse lattice
- Basis Light-Front Quantization
- Light-front coupled-cluster method
- Renormalization group
- Similarity transformations
- Renormalization group procedure for effective particles
- Bethe-Salpeter equation
- Vacuum structure and zero modes
- References
The solution of the LFQCD Hamiltonian eigenvalue equation will utilize the available mathematical methods of quantum mechanics and contribute to the development of advanced computing techniques for large quantum systems, including nuclei. For example, in the discretized light-cone quantization method (DLCQ), periodic conditions are introduced such that momenta are discretized and the size of the Fock space is limited without destroying Lorentz invariance. Solving a quantum field theory is then reduced to diagonalizing a large sparse Hermitian matrix. The DLCQ method has been successfully used to obtain the complete spectrum and light-front wave functions in numerous model quantum field theories such as QCD with one or two space dimensions for any number of flavors and quark masses. An extension of this method to supersymmetric theories, SDLCQ, takes advantage of the fact that the light-front Hamiltonian can be factorized as a product of raising and lowering ladder operators. SDLCQ has provided new insights into a number of supersymmetric theories including direct numerical evidence for a supergravity/super-Yang-Mills duality conjectured by Maldacena.
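The reduction of the eigenvalue problem to the diagonalization of a large sparse Hermitian matrix can be sketched as follows. The matrix below is a toy stand-in with illustrative entries, not the Hamiltonian of any particular theory; in an actual DLCQ calculation the matrix elements would be computed from the light-front Hamiltonian in the discretized Fock basis.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Toy stand-in for a DLCQ Hamiltonian: a sparse Hermitian matrix
# acting on a finite, discretized Fock basis of dimension N.
# The entries are illustrative only.
N = 2000
main = 2.0 + np.arange(N) * 0.01      # "free" diagonal part
hop = -0.5 * np.ones(N - 1)           # sparse interaction terms
H = diags([hop, main, hop], [-1, 0, 1], format="csr")

# eigsh extracts a few lowest eigenvalues of a large sparse
# Hermitian matrix, as needed for the low-lying spectrum.
eigenvalues, eigenvectors = eigsh(H, k=5, which="SA")
print(eigenvalues)
```

Only the lowest part of the spectrum is usually of interest, so iterative sparse solvers of this kind scale far better than full diagonalization.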
It is convenient to work in a Fock basis
with
Discrete light-cone quantization
A systematic approach to discretization of the eigenvalue problem is the DLCQ method originally suggested by Pauli and Brodsky. In essence it is the replacement of integrals by trapezoidal approximations, with equally-spaced intervals in the longitudinal and transverse momenta
corresponding to periodic boundary conditions on the intervals
Most DLCQ calculations are done without zero modes. However, in principle, any DLCQ basis with periodic boundary conditions may include them as constrained modes, dependent on the other modes with nonzero momentum. The constraint comes from the spatial average of the Euler-Lagrange equation for the field. This constraint equation can be difficult to solve, even for the simplest theories. However, an approximate solution can be found, consistent with the underlying approximations of the DLCQ method itself. This solution generates the effective zero-mode interactions for the light-front Hamiltonian.
Calculations in the massive sector that are done without zero modes will usually yield the correct answer. The neglect of zero modes merely worsens the convergence. One exception is that of cubic scalar theories, where the spectrum extends to minus infinity. A DLCQ calculation without zero modes will require careful extrapolation to detect this infinity, whereas a calculation that includes zero modes yields the correct result immediately. The zero modes are avoided if one uses antiperiodic boundary conditions.
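The difference between the two boundary conditions can be made concrete by listing the discretized longitudinal momentum fractions. The harmonic resolution K below is an illustrative choice; the point is only that periodic boundary conditions admit a zero-momentum mode while antiperiodic ones do not.

```python
# Discretized longitudinal momentum fractions x = k/K for an
# illustrative harmonic resolution K.
K = 4

# Periodic boundary conditions: integer momenta k = 0, 1, ..., K.
# The k = 0 entry is the (constrained) zero mode.
periodic = [k / K for k in range(0, K + 1)]

# Antiperiodic boundary conditions: half-integer momenta
# k = 1/2, 3/2, ..., so no mode carries zero longitudinal momentum.
antiperiodic = [(k + 0.5) / K for k in range(0, K)]

print(periodic)       # includes 0.0, so the zero mode is present
print(antiperiodic)   # smallest fraction is nonzero, no zero mode
```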
Supersymmetric discrete light-cone quantization
The supersymmetric form of DLCQ (SDLCQ) is specifically designed to maintain supersymmetry in the discrete approximation. Ordinary DLCQ violates supersymmetry by terms that do not survive the continuum limit. The SDLCQ construction discretizes the supercharge
In addition to calculations of spectra, this technique can be used to calculate expectation values. One such quantity, a correlator
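The benefit of discretizing the supercharge rather than the Hamiltonian can be illustrated with a toy matrix model. The supercharge below is an arbitrary Hermitian matrix, standing in for the actual discretized SDLCQ supercharge; because the Hamiltonian is constructed as a perfect square of it, positivity of the spectrum (a hallmark of unbroken supersymmetry) holds exactly at finite resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretized supercharge: any Hermitian matrix Q serves to
# illustrate the structure; real SDLCQ supplies specific matrix
# elements from the discretized theory.
n = 6
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2          # Hermitian toy supercharge
H = Q @ Q                  # Hamiltonian built as a perfect square

energies = np.linalg.eigvalsh(H)
print(energies)            # all non-negative, as supersymmetry requires
```

Discretizing the Hamiltonian directly, instead, generally spoils this exact positivity by terms that vanish only in the continuum limit.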
Transverse lattice
The transverse lattice method brings together two powerful ideas in quantum field theory: light-front Hamiltonian quantization and lattice gauge theory. Lattice gauge theory is a very popular means of regulating, for purposes of calculation, the gauge theories that describe all visible matter in the universe; in particular, it manifestly demonstrates the linear confinement of QCD that holds quarks and gluons inside the protons and neutrons of the atomic nucleus. In general, to obtain solutions of a quantum field theory, with its continuously infinite degrees of freedom, one must impose kinematical cutoffs or other restrictions on the space of quantum states. To remove the errors this introduces, one may then extrapolate in these cutoffs, provided a continuum limit exists, and/or renormalize observables to account for degrees of freedom above the cutoff. For the purposes of Hamiltonian quantization, one must have a continuous time direction. In the case of light-front Hamiltonian quantization, in addition to continuous light-front time
Most practical calculations performed with transverse lattice gauge theory have utilized one further ingredient: the color-dielectric expansion. A dielectric formulation is one in which the gauge group elements, whose generators are the gluon fields in the case of QCD, are replaced by collective (smeared, blocked, etc.) variables which represent an average over their fluctuations on short distance scales. These dielectric variables are massive, carry color, and form an effective gauge field theory with classical action minimized at zero field, meaning that color flux is expelled from the vacuum at the classical level. This maintains the triviality of the light-front vacuum structure, but arises only for a low momentum cutoff on the effective theory (corresponding to transverse lattice spacings of order 1/2 fm in QCD). As a result, the effective cutoff Hamiltonian is initially poorly constrained. The color-dielectric expansion, together with requirements of Lorentz symmetry restoration, has nevertheless been successfully used to organize the interactions in the Hamiltonian in a way suitable for practical solution. The most accurate spectrum of large-
Basis Light-Front Quantization
The basis light-front quantization (BLFQ) approach uses expansions in products of single-particle basis functions to represent the Fock-state wave functions. Typically, the longitudinal (
The first application of BLFQ to QED solved for the electron in a two-dimensional transverse confining cavity and showed how the anomalous magnetic moment behaved as a function of the strength of the cavity. The second application of BLFQ to QED solved for the electron's anomalous magnetic moment in free space and demonstrated agreement with the Schwinger moment in the appropriate limit.
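The size of a BLFQ-style single-particle basis is controlled by a truncation on the basis quantum numbers. As a sketch, the snippet below enumerates two-dimensional harmonic-oscillator transverse states labeled by radial and orbital quantum numbers (n, m) subject to the truncation 2n + |m| <= Nmax; this particular truncation scheme and labeling are assumptions chosen for illustration.

```python
# Enumerate 2D harmonic-oscillator transverse basis states
# (radial n, orbital m) subject to an assumed total-energy
# truncation 2n + |m| <= Nmax.
def transverse_basis(Nmax):
    states = []
    for n in range(Nmax // 2 + 1):
        for m in range(-(Nmax - 2 * n), Nmax - 2 * n + 1):
            states.append((n, m))
    return states

basis = transverse_basis(4)
print(len(basis), "states:", basis)
```

Fock-state wave functions are then expanded in products of such single-particle states, and the truncation parameter is increased toward convergence.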
The extension of BLFQ to the time-dependent regime, namely time-dependent BLFQ (tBLFQ), is straightforward and is currently under active development. The goal of tBLFQ is to solve light-front field theory in real time (with or without time-dependent background fields). The typical application areas include intense lasers (see Light-front quantization#Intense lasers) and relativistic heavy-ion collisions.
Light-front coupled-cluster method
The light-front coupled-cluster (LFCC) method is a particular form of truncation for the infinite coupled system of integral equations for light-front wave functions. The system of equations that comes from the field-theoretic Schrödinger equation also requires regularization, to make the integral operators finite. The traditional Fock-space truncation of the system, where the allowed number of particles is limited, typically disrupts the regularization by removing infinite parts that would otherwise cancel against parts that are retained. Although there are ways to circumvent this, they are not completely satisfactory.
The LFCC method avoids these difficulties by truncating the set of equations in a very different way. Instead of truncating the number of particles, it truncates the way in which wave functions are related to each other; the wave functions of higher Fock states are determined by the lower-state wave functions and the exponentiation of an operator
The truncation made is a truncation of
Here
The truncation of
The mathematics of the LFCC method has its origin in the many-body coupled cluster method used in nuclear physics and quantum chemistry. The physics is, however, quite different. The many-body method works with a state of a large number of particles and uses the exponentiation of
The computation of physical observables from matrix elements of operators requires some care. Direct computation would require an infinite sum over Fock space. One can instead borrow from the many-body coupled cluster method a construction that computes expectation values from right and left eigenstates. This construction can be extended to include off-diagonal matrix elements and gauge projections. Physical quantities can then be computed from the right and left LFCC eigenstates.
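The use of distinct right and left eigenstates can be sketched with a small non-Hermitian matrix, standing in for the effective LFCC operator obtained after truncation (the entries and the observable below are purely illustrative). The left eigenvector is normalized against the right one, and the matrix element of an operator is taken between the two.

```python
import numpy as np

# Toy non-Hermitian effective operator with real spectrum
# (tridiagonal with positive off-diagonal products), standing in
# for the truncated LFCC eigenvalue problem.
Heff = np.array([[1.0, 0.3, 0.0],
                 [0.1, 2.0, 0.2],
                 [0.0, 0.4, 3.0]])

# Right eigenvectors of Heff; left eigenvectors are right
# eigenvectors of the transpose.
evals_r, R = np.linalg.eig(Heff)
evals_l, L = np.linalg.eig(Heff.T)

# Pick the lowest state in both sets and normalize <L|R> = 1.
r = R[:, np.argmin(evals_r.real)].real
l = L[:, np.argmin(evals_l.real)].real
l = l / (l @ r)

# Expectation value of a toy observable from left/right states.
O = np.diag([0.0, 1.0, 2.0])
expval = l @ O @ r
print(expval)
```

Because the truncation makes the effective problem non-Hermitian, using ⟨L|O|R⟩ rather than ⟨R|O|R⟩ is what avoids the infinite Fock-space sums mentioned above.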
Renormalization group
Renormalization concepts, especially the renormalization group methods in quantum theories and statistical mechanics, have a long history and a very broad scope. The concepts of renormalization that appear useful in theories quantized in the front form of dynamics are essentially of two types, as in other areas of theoretical physics. The two types of concepts are associated with two types of theoretical tasks involved in applications of a theory. One task is to calculate observables (values of operationally defined quantities) in a theory that is unambiguously defined. The other task is to define a theory unambiguously. This is explained below.
Since the front form of dynamics aims at explaining hadrons as bound states of quarks and gluons, and the binding mechanism is not describable using perturbation theory, the definition of a theory needed in this case cannot be limited to perturbative expansions. For example, it is not sufficient to construct a theory using regularization of loop integrals order-by-order and correspondingly redefining the masses, coupling constants, and field normalization constants also order-by-order. In other words, one needs to design the Minkowski space-time formulation of a relativistic theory that is not based on any a priori perturbative scheme. The front form of Hamiltonian dynamics is perceived by many researchers as the most suitable framework for this purpose among the known options.
The desired definition of a relativistic theory involves calculations of as many observables as one must use in order to fix all the parameters that appear in the theory. The relationship between the parameters and observables may depend on the number of degrees of freedom that are included in the theory.
For example, consider virtual particles in a candidate formulation of the theory. Formally, special relativity requires that the range of momenta of the particles is infinite, because one can change the momentum of a particle by an arbitrary amount through a change of frame of reference. If the formulation is not to distinguish any inertial frame of reference, the particles must be allowed to carry any value of momentum. Since the quantum field modes corresponding to particles with different momenta form different degrees of freedom, the requirement of including infinitely many values of momentum means that the theory must involve infinitely many degrees of freedom. But for mathematical reasons one is forced to use computers for sufficiently precise calculations, and computers can handle only a finite number of degrees of freedom. One must limit the momentum range by some cutoff.
Setting up a theory with a finite cutoff for mathematical reasons, one hopes that the cutoff can be made sufficiently large to avoid its appearance in observables of physical interest, but in local quantum field theories that are of interest in hadronic physics the situation is not that simple. Namely, particles of different momenta are coupled through the dynamics in a nontrivial way, and the calculations aiming at predicting observables yield results that depend on the cutoffs. Moreover, they do so in a diverging fashion.
There may be more cutoff parameters than just for momentum. For example, one may assume that the volume of space is limited, which would interfere with translation invariance of a theory, or assume that the number of virtual particles is limited, which would interfere with the assumption that every virtual particle may split into more virtual particles. All such restrictions lead to a set of cutoffs that becomes a part of a definition of a theory.
Consequently, every result of a calculation for any observable
However, experiments provide values of observables that characterize natural processes irrespective of the cutoffs in a theory used to explain them. If the cutoffs do not describe properties of nature and are introduced merely for making a theory computable, one needs to understand how the dependence on
The two types of concepts of renormalization mentioned above are associated with the following two questions:
The renormalization group concept associated with the first question predates the concept associated with the second question. Certainly, if one were in possession of a good answer to the second question, the first question could also be answered. In the absence of a good answer to the second question, one may wonder why any specific choice of parameters and their cutoff dependence could secure cutoff independence of all observables
The renormalization group concept associated with the first question above relies on the circumstance that some finite set
In this way of thinking, one can expect that in a theory with
The question remains, however, why fixing the cutoff dependence of
Typically, the set
The renormalization group concept associated with the second question above is conceived to explain how it may be so that the concept of renormalization group associated with the first question can make sense, instead of being at best a successful recipe to deal with divergences in perturbative calculations. Namely, to answer the second question, one designs a calculation (see below) that identifies the required set of parameters to define the theory, the starting point being some specific initial assumption, such as some local Lagrangian density which is a function of field variables and needs to be modified by including all the required parameters. Once the required set of parameters is known, one can establish a set of observables that are sufficient to define the cutoff dependence of the required set. The observables can have any finite scale
Thus one can understand not only how a renormalization group of the first type may exist, but one also finds alternative situations in which the set of required cutoff-dependent parameters need not be finite. The predictive power of the latter theories results from known relationships among the required parameters and from options to establish all the relevant ones.
The renormalization group concept of the second kind is associated with the nature of the mathematical computation used to discover the set of parameters
In summary, one obtains a trajectory of a point in a space of dimension equal to the number of required parameters, and motion along the trajectory is described by transformations that form a new kind of group. Different initial points might lead to different trajectories, but if the steps are self-similar and reduce to a multiple action of one and the same transformation, say
Suppose that
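The repeated action of one and the same transformation can be sketched by iterating a simple one-parameter map; the particular map below (a discretized flow with a nontrivial fixed point at g = 0.5) is an illustrative assumption, not the RG step of any specific theory.

```python
# One self-similar RG step T acting on a single coupling g.
# Illustrative toy map with fixed points at g = 0 and g = 0.5;
# the nontrivial fixed point g = 0.5 is attractive.
def T(g, a=0.3):
    return g + a * g * (0.5 - g)

g = 0.1
trajectory = [g]
for _ in range(200):
    g = T(g)
    trajectory.append(g)

print(trajectory[-1])   # the trajectory approaches the fixed point 0.5
```

Different starting couplings give different trajectories, but repeated application of the same T drives them all toward the same fixed point, which is the simplest version of the limiting behavior described above.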
Both concepts of the renormalization group can be considered in quantum theories constructed using the front form of dynamics. The first concept allows one to play with a small set of parameters and seek consistency, which is a useful strategy in perturbation theory if one knows from other approaches what to expect. In particular, one may study new perturbative features that appear in the front form of dynamics, since it differs from the instant form. The main difference is that the front variables
Similarity transformations
A glimpse of the difficulties of the procedure of reducing a cutoff
where
The first equation can be used to evaluate
This expression allows one to write an equation for
where
The equation for
In QCD, which is asymptotically free, one indeed has
In any case, when one reduces the cutoff
Fortunately, one can use instead a change of basis. Namely, it is possible to define a procedure in which the basis states are rotated in such a way that the matrix elements of
As a result, one obtains in the rotated basis an effective Hamiltonian matrix eigenvalue problem in which the dependence on cutoff
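The basis rotation that suppresses off-diagonal matrix elements can be sketched numerically with a Wegner-type similarity flow, dH/ds = [[Hd, H], H], where Hd is the diagonal part of H. This standard generator choice is used here as an illustrative stand-in for the procedure described above; the matrix entries are arbitrary, and a simple Euler integration replaces an exact unitary flow.

```python
import numpy as np

# One Euler step of the Wegner-type flow dH/ds = [[Hd, H], H].
# The flow is a continuous rotation of the basis that suppresses
# off-diagonal matrix elements while preserving the eigenvalues
# (up to the Euler discretization error).
def flow_step(H, ds):
    Hd = np.diag(np.diag(H))
    eta = Hd @ H - H @ Hd              # generator [Hd, H]
    return H + ds * (eta @ H - H @ eta)

H = np.array([[1.0, 0.5, 0.4],
              [0.5, 3.0, 0.6],
              [0.4, 0.6, 6.0]])
eigs_before = np.linalg.eigvalsh(H)

ds = 1e-4
for _ in range(100000):
    H = flow_step(H, ds)

off = np.abs(H - np.diag(np.diag(H))).max()
drift = np.abs(np.linalg.eigvalsh(H) - eigs_before).max()
print("largest off-diagonal element:", off)
print("eigenvalue drift (Euler error):", drift)
```

Matrix elements between states far apart in energy decay fastest under the flow, which is what renders the rotated Hamiltonian band-diagonal and makes a small-window diagonalization meaningful.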
In the case of the front-form Hamiltonian for QCD, a perturbative version of the similarity renormalization group procedure is outlined by Wilson et al. Further discussion of computational methods stemming from the similarity renormalization group concept is provided in the next section.
Renormalization group procedure for effective particles
The similarity renormalization group procedure, discussed in #Similarity transformations, can be applied to the problem of describing bound states of quarks and gluons using QCD according to the general computational scheme outlined by Wilson et al. and illustrated in a numerically soluble model by Glazek and Wilson. Since these works were completed, the method has been applied to various physical systems using a weak-coupling expansion. More recently, similarity has evolved into a computational tool called the renormalization group procedure for effective particles, or RGPEP. In principle, the RGPEP is now defined without a need to refer to some perturbative expansion. The most recent explanation of the RGPEP is given by Glazek in terms of an elementary and exactly solvable model for relativistic fermions that interact through a mass mixing term of arbitrary strength in their Hamiltonian.
The effective particles can be seen as resulting from a dynamical transformation akin to the Melosh transformation from current to constituent quarks. Namely, the RGPEP transformation changes the bare quanta in a canonical theory to the effective quanta in an equivalent effective theory with a Hamiltonian that has the energy bandwidth
The effective particles are introduced through a transformation
where
which means that the same dynamics is expressed in terms of different operators for different values of
In principle, if one had solved the RGPEP equation for the front form Hamiltonian of QCD exactly, the eigenvalue problem could be written using effective quarks and gluons corresponding to any
Bethe-Salpeter equation
The Bethe-Salpeter amplitude, which satisfies the Bethe-Salpeter equation (see the reviews by Nakanishi), when projected on the light-front plane, results in the light-front wave function. The meaning of the "light-front projection" is the following. In the coordinate space, the Bethe-Salpeter amplitude is a function of two four-dimensional coordinates
(the momentum space Bethe-Salpeter amplitude
In this way, we can find the light-front wave function
given in Light front quantization#Angular momentum.
The Bethe-Salpeter amplitude includes the propagators of the external particles, and, therefore, it is singular. It can be represented in the form of the Nakanishi integral through a non-singular function
where
It turns out that the masses of a two-body system, found from the Bethe-Salpeter equation for
Vacuum structure and zero modes
One of the advantages of light-front quantization is that the empty state, the so-called perturbative vacuum, is the physical vacuum. The massive states of a theory can then be built on this lowest state without having any contributions from vacuum structure, and the wave functions for these massive states do not contain vacuum contributions. This occurs because each
However, certain aspects of some theories are associated with vacuum structure. For example, the Higgs mechanism of the Standard Model relies on spontaneous symmetry breaking in the vacuum of the theory. The usual Higgs vacuum expectation value in the instant form is replaced by
Some aspects of vacuum structure in light-front quantization can be analyzed by studying properties of massive states. In particular, by studying the appearance of degeneracies among the lowest massive states, one can determine the critical coupling strength associated with spontaneous symmetry breaking. One can also use a limiting process, where the analysis begins in equal-time quantization but arrives in light-front coordinates as the limit of some chosen parameter. A much more direct approach is to include modes of zero longitudinal momentum (zero modes) in a calculation of a nontrivial light-front vacuum built from these modes; the Hamiltonian then contains effective interactions that determine the vacuum structure and provide for zero-mode exchange interactions between constituents of massive states.