Kernel methods are a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in a computationally efficient way and allow algorithms to easily swap functions of varying complexity.
In typical machine learning algorithms, these functions produce a scalar output. Recent development of kernel methods for functions with vector-valued output is due, at least in part, to interest in simultaneously solving related problems. Kernels which capture the relationship between the problems allow them to borrow strength from each other. Algorithms of this type include multi-task learning (also called multi-output learning or vector-valued learning), transfer learning, and co-kriging. Multi-label classification can be interpreted as mapping inputs to (binary) coding vectors with length equal to the number of classes.
In Gaussian processes, kernels are called covariance functions. Multiple-output functions correspond to considering multiple processes. See Bayesian interpretation of regularization for the connection between the two perspectives.
History
The history of learning vector-valued functions is closely linked to transfer learning, a broad term that refers to systems that learn by transferring knowledge between different domains. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on "Learning to Learn," which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. Research on transfer learning has attracted much attention since 1995 under different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning. Interest in learning vector-valued functions was particularly sparked by multitask learning, a framework which tries to learn multiple, possibly different tasks simultaneously.
Much of the initial research in multitask learning in the machine learning community was algorithmic in nature, and applied to methods such as neural networks, decision trees and k-nearest neighbors in the 1990s. The use of probabilistic models and Gaussian processes was pioneered and largely developed in the context of geostatistics, where prediction over vector-valued output data is known as cokriging. Geostatistical approaches to multivariate modeling are mostly formulated around the linear model of coregionalization (LMC), a generative approach for developing valid covariance functions that has been used for multivariate regression and in statistics for computer emulation of expensive multivariate computer codes. The regularization and kernel theory literature for vector-valued functions followed in the 2000s. While the Bayesian and regularization perspectives were developed independently, they are in fact closely related.
Notation
In this context, the supervised learning problem is to learn the function $f$ that best predicts vector-valued outputs $\mathbf{y}_i \in \mathbb{R}^D$ from inputs $\mathbf{x}_i \in \mathcal{X}$, given a training set $S = \{(\mathbf{x}_1, \mathbf{y}_1), \dots, (\mathbf{x}_N, \mathbf{y}_N)\}$.
In general, each component of the output vector could have its own input data, with a different number of samples and even a different input space.
Here, for simplicity of notation, we assume that the number of samples and the sample space are the same for every output.
Regularization perspective
From the regularization perspective, the problem is to learn $f_*$ in a reproducing kernel Hilbert space of vector-valued functions $\mathcal{H}$ by minimizing the regularized empirical risk
$$f_* = \operatorname*{argmin}_{f \in \mathcal{H}} \sum_{i=1}^{N} \|f(\mathbf{x}_i) - \mathbf{y}_i\|^2 + \lambda \|f\|_{\mathcal{H}}^2,$$
where $\lambda > 0$ is a regularization parameter. By the representer theorem, the minimizer has the form
$$f_*(\mathbf{x}) = \sum_{i=1}^{N} K(\mathbf{x}_i, \mathbf{x})\, \mathbf{c}_i,$$
where the coefficient vectors $\mathbf{c}_i \in \mathbb{R}^D$ solve the linear system $(\mathbf{K}(\mathbf{X},\mathbf{X}) + \lambda N \mathbf{I})\,\mathbf{c} = \mathbf{y}$, and $K$ is a matrix-valued reproducing kernel: $K(\mathbf{x},\mathbf{x}')$ is a symmetric, positive semidefinite $D \times D$ matrix for every pair of inputs. Note, the matrix-valued kernel $K$ can equivalently be defined by a scalar kernel on the augmented space $\mathcal{X} \times \{1, \dots, D\}$.
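As a rough illustration of this estimator (all helper names, data shapes, and the choice of an RBF input kernel are assumptions, not prescribed by the article), the linear system can be solved directly for a separable kernel $K(\mathbf{x},\mathbf{x}') = k(\mathbf{x},\mathbf{x}')\,\mathbf{B}$:

```python
import numpy as np

def rbf(X1, X2, length_scale=1.0):
    """Scalar RBF kernel matrix between two sets of inputs (assumed choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def fit_vector_valued(X, Y, B, lam=0.1):
    """Representer-theorem coefficients: solve (K(X,X) + lam*N*I) c = y
    for the separable kernel k(x,x')*B. Returns C as an N x D matrix."""
    N = X.shape[0]
    k = rbf(X, X)                      # N x N input-space Gram matrix
    K = np.kron(k, B)                  # ND x ND multi-output kernel matrix
    c = np.linalg.solve(K + lam * N * np.eye(K.shape[0]), Y.ravel())
    return c.reshape(N, -1)            # one coefficient vector per sample

def predict(Xs, X, C, B):
    """Evaluate f(x*) = sum_i k(x_i, x*) B c_i at test inputs Xs."""
    k_star = rbf(Xs, X)                # M x N cross-kernel
    return k_star @ C @ B.T            # M x D predicted outputs
```

For small problems this direct solve is fine; the Implementation section below discusses why it becomes expensive as $N$ and $D$ grow.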
Gaussian process perspective
The estimator of the vector-valued regularization framework can also be derived from a Bayesian viewpoint using Gaussian process methods in the case of a finite-dimensional reproducing kernel Hilbert space. The derivation parallels the scalar-valued case (see Bayesian interpretation of regularization). The vector-valued function $\mathbf{f}$ is assumed to be drawn from a Gaussian process,
$$\mathbf{f} \sim \mathcal{GP}(\mathbf{m}, \mathbf{K}),$$
where $\mathbf{m}$ is the mean function and $\mathbf{K}$ is the matrix-valued covariance function.
For a set of inputs $\mathbf{X}$, the prior over the function values is a multivariate Gaussian, and the observations are modeled with additive Gaussian noise, $\mathbf{y}_i = \mathbf{f}(\mathbf{x}_i) + \boldsymbol{\varepsilon}_i$ with $\boldsymbol{\varepsilon}_i \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$.
The posterior predictive distribution at a new input $\mathbf{x}_*$ is again Gaussian (assuming a zero-mean prior for simplicity), with mean and covariance
$$\mathbf{f}_*(\mathbf{x}_*) = \mathbf{K}(\mathbf{x}_*, \mathbf{X})\big(\mathbf{K}(\mathbf{X},\mathbf{X}) + \boldsymbol{\Sigma}\big)^{-1}\mathbf{y},$$
$$\mathbf{K}_*(\mathbf{x}_*, \mathbf{x}_*) = \mathbf{K}(\mathbf{x}_*,\mathbf{x}_*) - \mathbf{K}(\mathbf{x}_*,\mathbf{X})\big(\mathbf{K}(\mathbf{X},\mathbf{X}) + \boldsymbol{\Sigma}\big)^{-1}\mathbf{K}(\mathbf{X},\mathbf{x}_*),$$
where $\mathbf{K}(\mathbf{X},\mathbf{X})$ is the block matrix of kernel evaluations over the training inputs. The predictive mean coincides with the regularization estimator when $\boldsymbol{\Sigma} = \lambda N \mathbf{I}$, which makes the connection between the two perspectives explicit.
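The Gaussian process predictive equations can be sketched numerically as follows, again for a separable kernel with i.i.d. Gaussian noise; the helper names and data shapes are illustrative assumptions, not from the article:

```python
import numpy as np

def rbf(X1, X2, length_scale=1.0):
    """Scalar RBF kernel matrix (assumed choice of input kernel)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(Xs, X, Y, B, noise=1e-2):
    """Posterior mean and covariance of f(Xs) under a multi-output GP
    with separable kernel K(x,x') = k(x,x') * B and noise variance `noise`."""
    N, D = Y.shape
    K = np.kron(rbf(X, X), B) + noise * np.eye(N * D)   # K(X,X) + Sigma
    Ks = np.kron(rbf(Xs, X), B)                          # K(X*, X)
    Kss = np.kron(rbf(Xs, Xs), B)                        # K(X*, X*)
    alpha = np.linalg.solve(K, Y.ravel())
    mean = (Ks @ alpha).reshape(len(Xs), D)              # predictive mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)            # predictive covariance
    return mean, cov
```

The returned covariance quantifies predictive uncertainty jointly across outputs, which the regularization estimator does not provide.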
Separable
A simple, but broadly applicable, class of multi-output kernels can be separated into the product of a kernel on the input space and a kernel representing the correlations among the outputs:
$$(K(\mathbf{x}, \mathbf{x}'))_{d,d'} = k(\mathbf{x}, \mathbf{x}')\, k_T(d, d'),$$
where $k$ is a scalar kernel on $\mathcal{X} \times \mathcal{X}$ and $k_T$ is a scalar kernel on $\{1, \dots, D\} \times \{1, \dots, D\}$.
In matrix form: $\mathbf{K}(\mathbf{x}, \mathbf{x}') = k(\mathbf{x}, \mathbf{x}')\,\mathbf{B}$, where $\mathbf{B}$ is a $D \times D$ symmetric, positive semidefinite matrix.
For a slightly more general form, adding several of these kernels yields a sum of separable kernels (SoS kernels).
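As a minimal sketch of these constructions (function names and shapes are assumptions, not from the article), a separable kernel matrix over $N$ inputs and $D$ outputs can be assembled with a Kronecker product, and an SoS kernel by summing such products:

```python
import numpy as np

def sos_kernel_matrix(input_kernels, output_matrices):
    """Sum-of-separable kernel matrix: K = sum_q kron(k_q(X,X), B_q).
    input_kernels: list of N x N Gram matrices on the inputs.
    output_matrices: list of D x D symmetric PSD output matrices.
    Each term is PSD, so the sum is a valid multi-output kernel matrix."""
    return sum(np.kron(k, B) for k, B in zip(input_kernels, output_matrices))
```

A single (k, B) pair gives the plain separable case; each Kronecker block (i, j) holds the matrix $k(\mathbf{x}_i, \mathbf{x}_j)\,\mathbf{B}$ coupling the outputs at that pair of inputs.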
From regularization literature
Derived from a regularizer
One way of obtaining the output kernel $k_T$ (equivalently, the matrix $\mathbf{B}$) is to specify a regularizer that constrains the components of the estimator in a desired way and then derive the kernel it induces; for several natural regularizers the resulting kernel is separable.
Mixed-effect regularizer
This regularizer combines a term limiting the complexity of each component of the estimator with a term penalizing the deviation of each component from the mean of all the components. A mixing parameter $\omega \in [0,1]$ interpolates between two extremes: at one end all components are treated as independent, which is equivalent to solving the scalar problems separately; at the other end all components are forced toward a common mean function. The induced output matrix has the form $\mathbf{B} = \alpha\,\mathbf{1}\mathbf{1}^\top + \beta\,\mathbf{I}$ for scalars $\alpha, \beta$ depending on $\omega$.
Cluster-based regularizer
This regularizer divides the components into $r$ disjoint clusters and forces the components within each cluster to be similar, penalizing both the deviation of each component from its cluster mean and the complexity of the cluster means themselves. The induced output matrix $\mathbf{B}$ is block-structured according to the clusters. A limitation of this approach is that the cluster assignment must be known a priori.
Graph regularizer
$$R(\mathbf{f}) = \frac{1}{2} \sum_{l,q=1}^{D} M_{lq}\, \|f_l - f_q\|_k^2 + \sum_{l=1}^{D} M_{ll}\, \|f_l\|_k^2,$$
where $\mathbf{M}$ is a $D \times D$ matrix of weights encoding the similarities between the components. This regularizer forces components linked by large weights to be close to each other, while the second term limits the complexity of each component individually.
Note, the induced output matrix satisfies $\mathbf{B}^\dagger = \mathbf{L}$, where $\mathbf{L}$ is the graph Laplacian defined from $\mathbf{M}$.
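As a hedged illustration of this graph-based construction (the weight matrix and helper name are assumptions, not from the article), the output matrix $\mathbf{B}$ can be taken as the pseudoinverse of a graph Laplacian built from pairwise component weights:

```python
import numpy as np

def graph_output_kernel(M):
    """Output matrix for a graph regularizer over D components.
    M: D x D symmetric matrix of nonnegative similarity weights.
    Builds the graph Laplacian L = diag(M 1) - M and returns its
    pseudoinverse, so that the RKHS norm with this B penalizes
    differences between components connected in the graph."""
    L = np.diag(M.sum(axis=1)) - M      # graph Laplacian of the weights
    return np.linalg.pinv(L)            # B = L^+ (Moore-Penrose pseudoinverse)
```

The pseudoinverse is used because the Laplacian is singular (its rows sum to zero), so it has no ordinary inverse.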
Learned from data
Several approaches to learning the output matrix $\mathbf{B}$ from data have also been proposed. These include estimating $\mathbf{B}$ in a preliminary inference step from the training outputs, and optimizing $\mathbf{B}$ jointly with the estimator as part of the regularization problem itself.
From Bayesian literature
Linear model of coregionalization (LMC)
In the LMC, outputs are expressed as linear combinations of independent random functions, chosen such that the resulting covariance function (over all inputs and outputs) is a valid positive semidefinite function. Assuming $D$ outputs $f_d(\mathbf{x})$ with $\mathbf{x} \in \mathbb{R}^p$, each output is written as
$$f_d(\mathbf{x}) = \sum_{q=1}^{Q} \sum_{i=1}^{R_q} a_{d,q}^{i}\, u_q^{i}(\mathbf{x}),$$
where the coefficients $a_{d,q}^{i}$ are scalars and the functions $u_q^{i}$ are latent functions with zero mean, independent of one another, with those sharing the index $q$ having the same covariance $k_q(\mathbf{x}, \mathbf{x}')$. The induced cross-covariance between any two outputs is then
$$\operatorname{cov}\big(f_d(\mathbf{x}), f_{d'}(\mathbf{x}')\big) = \sum_{q=1}^{Q} b_{d,d'}^{q}\, k_q(\mathbf{x}, \mathbf{x}'),$$
where each $b_{d,d'}^{q} = \sum_{i=1}^{R_q} a_{d,q}^{i} a_{d',q}^{i}$. Collecting these coefficients into coregionalization matrices $\mathbf{B}_q$ (each of rank at most $R_q$), the multi-output kernel takes the sum-of-separable form $\mathbf{K}(\mathbf{x},\mathbf{x}') = \sum_{q=1}^{Q} \mathbf{B}_q\, k_q(\mathbf{x},\mathbf{x}')$.
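The generative view of the LMC can be illustrated with a short sketch: draw independent latent functions from Gaussian processes and mix them linearly into correlated outputs. All sizes, length-scales, and the helper `rbf` are illustrative assumptions:

```python
import numpy as np

def rbf(X1, X2, length_scale):
    """Scalar RBF covariance for the latent processes (assumed choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 50)[:, None]

Q, D = 2, 3                                      # latent functions, outputs
A = rng.normal(size=(D, Q))                      # mixing coefficients a_{d,q}
length_scales = [0.1, 0.5]                       # one covariance per latent GP
U = np.column_stack([
    rng.multivariate_normal(
        np.zeros(len(X)),
        rbf(X, X, ls) + 1e-8 * np.eye(len(X)))   # jitter for numerical PSD
    for ls in length_scales
])                                               # N x Q latent samples
F = U @ A.T                                      # N x D correlated outputs
```

Here each column of `F` is one output; the outputs are correlated because they share the same latent draws, exactly the mechanism the covariance $\sum_q \mathbf{B}_q k_q$ encodes.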
Intrinsic coregionalization model (ICM)
The ICM is a simplified version of the LMC, with $Q = 1$: all outputs share a single input covariance function $k(\mathbf{x}, \mathbf{x}')$,
$$\operatorname{cov}\big(f_d(\mathbf{x}), f_{d'}(\mathbf{x}')\big) = b_{d,d'}\, k(\mathbf{x}, \mathbf{x}'),$$
where $b_{d,d'}$ are the entries of a single coregionalization matrix $\mathbf{B}$.
In this case, the coefficients are
$$b_{d,d'} = \sum_{i=1}^{R} a_{d}^{i} a_{d'}^{i},$$
and the kernel matrix for multiple outputs becomes the separable form $\mathbf{K}(\mathbf{x}, \mathbf{x}') = k(\mathbf{x}, \mathbf{x}')\,\mathbf{B}$.
Semiparametric latent factor model (SLFM)
Another simplified version of the LMC is the semiparametric latent factor model (SLFM), which corresponds to setting $R_q = 1$: each latent covariance $k_q$ is associated with a single latent function, so every coregionalization matrix $\mathbf{B}_q$ has rank one, while $Q$ may still be greater than one.
Non-separable
While simple, the structure of separable kernels can be too limiting for some problems.
Notable examples of non-separable kernels in the regularization literature include matrix-valued kernels designed for learning vector fields with structural constraints, such as divergence-free and curl-free kernels.
In the Bayesian perspective, the LMC produces a separable kernel because the output functions evaluated at a point $\mathbf{x}$ depend only on the values of the latent functions at that same point. A non-trivial way to mix the latent functions is to convolve a base process with a smoothing kernel; if the base process is a Gaussian process, the convolved process is Gaussian as well, so convolutions can be used to construct covariance functions. This method of producing non-separable kernels is known as process convolution.
Implementation
When implementing an algorithm using any of the kernels above, two practical issues arise: tuning the kernel parameters and keeping the computation time and memory usage reasonable.
Regularization perspective
Approached from the regularization perspective, parameter tuning is similar to the scalar-valued case and can generally be accomplished with cross-validation. Solving the required linear system is typically expensive in memory and time: the naive cost for $N$ samples and $D$ outputs is $O(N^3 D^3)$ in time and $O(N^2 D^2)$ in memory. If the kernel is separable, a coordinate transform based on the eigendecomposition of $\mathbf{B}$ can convert the $ND \times ND$ system into $D$ independent systems of size $N \times N$, reducing the time cost to $O(D N^3)$ plus the (comparatively cheap) cost of diagonalizing $\mathbf{B}$.
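This decoupling trick can be sketched as follows, assuming a separable kernel $\mathbf{K} = k \otimes \mathbf{B}$ and the regularized system from the representer theorem; helper names and shapes are illustrative:

```python
import numpy as np

def solve_separable(k, B, Y, lam):
    """Solve (kron(k, B) + lam*N*I) c = vec(Y) via the eigensystem of B.
    Rotating the outputs into B's eigenbasis turns the ND x ND system
    into D independent N x N systems, one per eigenvalue of B."""
    N, D = Y.shape
    s, V = np.linalg.eigh(B)            # B = V diag(s) V^T
    Yt = Y @ V                          # outputs rotated into the eigenbasis
    Ct = np.empty_like(Yt, dtype=float)
    for j in range(D):                  # one scalar-sized problem per output
        Ct[:, j] = np.linalg.solve(s[j] * k + lam * N * np.eye(N), Yt[:, j])
    return Ct @ V.T                     # rotate coefficients back: C is N x D
```

Each inner solve costs $O(N^3)$, so the total is $O(D N^3)$ instead of $O(D^3 N^3)$ for the joint system.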
Bayesian perspective
There are many works related to parameter estimation for Gaussian processes. Some methods, such as maximization of the marginal likelihood (also known as evidence approximation, type II maximum likelihood, or empirical Bayes) and least squares, give point estimates of the kernel parameters. There are also works employing a full Bayesian treatment by assigning priors to the parameters and computing the posterior distribution through a sampling procedure or variational inference.
The main computational problem in the Bayesian viewpoint is the same as the one appearing in regularization theory: inverting the matrix $\mathbf{K}(\mathbf{X},\mathbf{X}) + \boldsymbol{\Sigma}$, at a cost that scales as $O(N^3 D^3)$ for $N$ training points and $D$ outputs.
This step is necessary for computing the marginal likelihood and the predictive distribution. For most proposed approximation methods, the computational savings are independent of the particular model (e.g. LMC, process convolution) used to construct the multi-output covariance matrix. A summary of different methods for reducing computational complexity in multi-output Gaussian processes has been presented in the literature.