Limited-memory BFGS

Limited-memory BFGS (L-BFGS or LM-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning.

Like the original BFGS, L-BFGS uses an estimate of the inverse Hessian matrix to steer its search through variable space, but where BFGS stores a dense n×n approximation to the inverse Hessian (n being the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with a large number of variables. Instead of the inverse Hessian $H_k$, L-BFGS maintains a history of the past m updates of the position x and gradient ∇f(x), where generally the history size m can be kept small (often m < 10). These updates are used to implicitly perform operations requiring the $H_k$-vector product.
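For a rough, illustrative sense of scale: with n = 10⁶ variables, a dense inverse-Hessian approximation needs n² = 10¹² entries (about 8 TB in double precision), while L-BFGS with m = 10 stores roughly 2mn = 2×10⁷ entries (about 160 MB).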

Algorithm

L-BFGS shares many features with other quasi-Newton algorithms, but differs in how the matrix-vector multiplication $d_k = -H_k g_k$ for finding the search direction is carried out, where $g_k$ is the current gradient and $H_k$ is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two-loop recursion."

We'll take as given $x_k$, the position at the k-th iteration, and $g_k \equiv \nabla f(x_k)$, where f is the function being minimized and all vectors are column vectors. We also assume that we have stored the last m updates of the form $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. We'll define $\rho_k = \frac{1}{y_k^\top s_k}$, and $H_k^0$ will be the 'initial' approximation of the inverse Hessian that our estimate at iteration k begins with. Then we can compute the (uphill) direction as follows:

$$
\begin{aligned}
& q = g_k \\
& \textbf{for } i = k-1, k-2, \ldots, k-m: \\
& \qquad \alpha_i = \rho_i \, s_i^\top q \\
& \qquad q = q - \alpha_i y_i \\
& H_k^0 = \frac{s_{k-1}^\top y_{k-1}}{y_{k-1}^\top y_{k-1}} \, I \\
& z = H_k^0 q \\
& \textbf{for } i = k-m, k-m+1, \ldots, k-1: \\
& \qquad \beta_i = \rho_i \, y_i^\top z \\
& \qquad z = z + s_i (\alpha_i - \beta_i) \\
& \textbf{stop with } H_k g_k = z
\end{aligned}
$$
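As a concrete illustration, here is a minimal NumPy sketch of the two-loop recursion above; variable names mirror the algorithm, and the stored update pairs are assumed to be kept oldest-first in Python lists.

```python
import numpy as np

def two_loop_recursion(g_k, s_hist, y_hist):
    """Compute z = H_k g_k from the last m update pairs (minimal sketch).

    g_k    : current gradient, shape (n,), float dtype
    s_hist : list of s_i = x_{i+1} - x_i, oldest first
    y_hist : list of y_i = g_{i+1} - g_i, oldest first
    """
    m = len(s_hist)
    rho = [1.0 / np.dot(y_hist[i], s_hist[i]) for i in range(m)]
    alpha = [0.0] * m

    q = g_k.copy()
    for i in range(m - 1, -1, -1):        # first loop: newest to oldest
        alpha[i] = rho[i] * np.dot(s_hist[i], q)
        q -= alpha[i] * y_hist[i]

    # H_k^0 = gamma_k * I with gamma_k = s_{k-1}^T y_{k-1} / y_{k-1}^T y_{k-1}
    gamma = np.dot(s_hist[-1], y_hist[-1]) / np.dot(y_hist[-1], y_hist[-1])
    z = gamma * q

    for i in range(m):                    # second loop: oldest to newest
        beta = rho[i] * np.dot(y_hist[i], z)
        z += (alpha[i] - beta) * s_hist[i]

    return z                              # z = H_k g_k; minimization steps along -z
```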

This formulation is valid whether we are minimizing or maximizing. Note that if we are minimizing, the search direction would be the negative of z (since z is "uphill"), and if we are maximizing, H k 0 should be negative definite rather than positive definite. We would typically do a backtracking line search in the search direction (any line search would be valid, but L-BFGS does not require exact line searches in order to converge).
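Since a backtracking line search is mentioned above, here is a minimal Armijo-style backtracking sketch; the constants c and tau are conventional choices, not prescribed by L-BFGS, and the safeguard threshold is an arbitrary placeholder.

```python
import numpy as np

def backtracking_line_search(f, x, g, d, alpha=1.0, c=1e-4, tau=0.5):
    """Shrink the step until the Armijo sufficient-decrease condition holds.

    f : objective, g : gradient at x, d : descent direction (e.g. d = -z).
    """
    fx = f(x)
    while f(x + alpha * d) > fx + c * alpha * np.dot(g, d):
        alpha *= tau
        if alpha < 1e-12:  # safeguard in case d is not a descent direction
            break
    return alpha
```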

Commonly, the initial inverse-Hessian approximation $H_k^0$ is represented as a diagonal matrix, so that initially setting z requires only an element-by-element multiplication.

This two-loop update only works for the inverse Hessian. Approaches to implementing L-BFGS using the direct approximate Hessian $B_k$ have also been developed, as have other means of approximating the inverse Hessian.

Applications

L-BFGS has been called "the algorithm of choice" for fitting log-linear (MaxEnt) models and conditional random fields with $\ell_2$-regularization.

Variants

Since BFGS (and hence L-BFGS) is designed to minimize smooth functions without constraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiable components or constraints. A popular class of modifications is called active-set methods, based on the concept of the active set. The idea is that, when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified.

L-BFGS-B

The L-BFGS-B algorithm extends L-BFGS to handle simple box constraints (also known as bound constraints) on variables; that is, constraints of the form $l_i \le x_i \le u_i$, where $l_i$ and $u_i$ are per-variable constant lower and upper bounds, respectively (for each $x_i$, either or both bounds may be omitted). The method works by identifying fixed and free variables at every step (using a simple gradient method), then applying the L-BFGS method to the free variables only to obtain higher accuracy, and then repeating the process.
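For illustration, here is a minimal example using SciPy's scipy.optimize.minimize, which wraps the reference L-BFGS-B implementation; the quadratic objective and the particular bounds are arbitrary placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: a simple convex quadratic (placeholder for a real problem).
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])

# Box constraints l_i <= x_i <= u_i; None means that bound is omitted.
bounds = [(0.0, 2.0), (None, 0.0)]

res = minimize(f, x0=np.zeros(2), jac=grad_f, method="L-BFGS-B", bounds=bounds)
print(res.x)  # minimizer clipped to the box: approximately [2.0, -1.0]
```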

OWL-QN

Orthant-wise limited-memory quasi-Newton (OWL-QN) is an L-BFGS variant for fitting $\ell_1$-regularized models, exploiting the inherent sparsity of such models. It minimizes functions of the form

$$f(x) = g(x) + C\,\|x\|_1$$

where g is a differentiable convex loss function. The method is an active-set type method: at each iterate, it estimates the sign of each component of the variable and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable $\|x\|_1$ term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process.
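A rough sketch of the two sign-handling pieces, the pseudo-gradient and the orthant projection, assuming NumPy; the function names and surrounding L-BFGS machinery are illustrative, not the designers' reference code.

```python
import numpy as np

def pseudo_gradient(x, grad_g, C):
    """Pseudo-gradient of f(x) = g(x) + C * ||x||_1 (illustrative sketch).

    Where x_i != 0 the l1 term is differentiable. At x_i == 0, take the
    one-sided derivative that decreases f; if neither does, use 0.
    """
    pg = grad_g + C * np.sign(x)          # valid wherever x_i != 0
    right = grad_g + C                    # directional derivative for x_i -> 0+
    left = grad_g - C                     # directional derivative for x_i -> 0-
    at_zero = (x == 0)
    pg[at_zero] = np.where(right[at_zero] < 0, right[at_zero],
                  np.where(left[at_zero] > 0, left[at_zero], 0.0))
    return pg

def project_onto_orthant(x_new, orthant):
    """Zero out components whose sign disagrees with the chosen orthant."""
    return np.where(x_new * orthant > 0, x_new, 0.0)
```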

O-LBFGS

Schraudolph et al. present an online approximation to both BFGS and L-BFGS. Similar to stochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration. It has been shown that O-LBFGS converges globally almost surely, while the online approximation of BFGS (O-BFGS) is not necessarily convergent.
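The subsetting step itself is simple; a minimal sketch, where grad_fn and data are placeholders for a problem-specific loss gradient and a NumPy dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_gradient(grad_fn, x, data, batch_size):
    """Gradient estimate from a random subset of the data, as used in
    online (stochastic) variants in place of the full-dataset gradient."""
    idx = rng.choice(len(data), size=batch_size, replace=False)
    return grad_fn(x, data[idx])
```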

Implementations

An early, open-source implementation of L-BFGS in Fortran is available from Netlib as a shar archive. Multiple other open-source implementations have been produced as translations of this Fortran code (e.g. in Java, and in Python via SciPy). Other implementations exist:

  • fmincon (Matlab optimization toolbox)
  • FMINLBFGS (for Matlab, BSD license)
  • minFunc (also for Matlab)
  • LBFGS-D (in the D programming language)
  • Frequently as part of generic optimization libraries (e.g. Mathematica, FuncLib C# library, and dlib C++ library)
  • libLBFGS, a C implementation
  • Maximization in Two-Class Logistic Regression (in Microsoft Azure ML)
Implementations of variants

The L-BFGS-B variant also exists as ACM TOMS algorithm 778. In February 2011, some of the authors of the original L-BFGS-B code posted a major update (version 3.0).

A reference implementation is available in Fortran 77 (and with a Fortran 90 interface) at the author's website. This version, as well as older versions, has been converted to many other languages, including:

  • a Java wrapper for v3.0
  • Matlab interfaces for v3.0, v2.4, and v2.1
  • a C++ interface for v2.1
  • a Python interface for v3.0, as part of scipy.optimize.minimize
  • an OCaml interface for v2.1 and v3.0
  • a C translation of v2.3 by f2c
  • R's general-purpose optim routine, which includes L-BFGS-B via method="L-BFGS-B"

There also exists a complete C++11 rewrite of the L-BFGS-B solver using Eigen3.

OWL-QN implementations are available in:

  • a C++ implementation by its designers, which includes the original ICML paper on the algorithm
  • The CRF toolkit Wapiti includes a C implementation
  • libLBFGS