In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.
Overview
LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and is related to neural gas and to the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.
An LVQ system is represented by prototypes $W = (\vec{w}_1, \ldots, \vec{w}_n)$ which are defined in the feature space of the observed data.
An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain. LVQ systems can be applied to multi-class classification problems in a natural way. It is used in a variety of practical applications. See http://liinwww.ira.uka.de/bibliography/Neural/SOM.LVQ.html for an extensive bibliography.
A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system; see, for example, Schneider, Biehl, and Hammer (2009) and the references therein.
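As a minimal sketch of what such a parameterized distance can look like, relevance-learning LVQ variants replace the plain Euclidean distance with a weighted form whose per-feature relevance factors are adapted during training. The function name and arguments below are illustrative, not from any particular library:

```python
import numpy as np

def weighted_sq_distance(x, w, lam):
    # Relevance-weighted squared Euclidean distance:
    #   d_lambda(x, w) = sum_j lam_j * (x_j - w_j)^2
    # In relevance-learning LVQ variants, the factors lam_j are
    # adapted alongside the prototypes during training.
    return np.sum(lam * (x - w) ** 2)
```

With all relevance factors equal, this reduces to the ordinary squared Euclidean distance.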
LVQ can also be helpful in classifying text documents.
Algorithm
Below follows an informal description.
The algorithm consists of three basic steps. The algorithm's input is:
- the number of neurons (prototypes) the system will have, $M$ (in the simplest case, equal to the number of classes)
- a weight vector $\vec{w}_i$ for each neuron $i$
- the class label $c_i$ corresponding to each neuron
- a learning rate $\eta$
- an input list $L$ containing all the vectors whose labels are already known (the training set).
The algorithm's flow is:
1. For the next input $\vec{x}$ in $L$, find the closest neuron $\vec{w}_m$, i.e. the one for which $d(\vec{x}, \vec{w}_m)$ attains its minimum value, where $d$ is the metric used (Euclidean, etc.).
2. Update $\vec{w}_m$: pull it closer to the input $\vec{x}$ if $\vec{x}$ and $\vec{w}_m$ share the same label, and push it further away if they do not:
$\vec{w}_m \gets \vec{w}_m + \eta \cdot (\vec{x} - \vec{w}_m)$ if the labels match,
$\vec{w}_m \gets \vec{w}_m - \eta \cdot (\vec{x} - \vec{w}_m)$ otherwise.
3. While there are vectors left in $L$, go to step 1; otherwise terminate.
Note: $\vec{w}_i$ and $\vec{x}$ are vectors in the feature space.
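The following is a minimal Python sketch of this training loop under the assumptions above (Euclidean metric, a fixed learning rate, one update per presented vector, repeated for several epochs); all function and variable names are illustrative rather than taken from any particular library:

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, eta=0.1, epochs=10):
    """Sketch of an LVQ1 training loop.

    X            : (N, d) array of training vectors (the list L)
    y            : (N,) array of their known labels
    prototypes   : (M, d) array of initial neuron weight vectors
    proto_labels : (M,) array of the label c_i assigned to each neuron
    eta          : learning rate
    """
    W = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            # Step 1: find the neuron closest to x under the Euclidean metric.
            m = np.argmin(np.linalg.norm(W - x, axis=1))
            # Step 2: attract the winner if the labels match, repel otherwise.
            if proto_labels[m] == label:
                W[m] += eta * (x - W[m])
            else:
                W[m] -= eta * (x - W[m])
    return W

def classify(x, W, proto_labels):
    """Assign x the label of its nearest prototype."""
    return proto_labels[np.argmin(np.linalg.norm(W - x, axis=1))]
```

In practice, the prototypes are often initialized to per-class means of random subsets of the training data, and the learning rate $\eta$ is decreased over time so that the prototypes settle.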
A more formal description can be found here: http://jsalatas.ictpro.gr/implementation-of-competitive-learning-networks-for-weka/