Random indexing

Random indexing is a dimensionality reduction method and computational framework for distributional semantics. It is based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics, provided the resulting dimensions are chosen appropriately. This is the original point of the random projection approach to dimension reduction, first formulated as the Johnson–Lindenstrauss lemma; locality-sensitive hashing shares some of the same starting points. Random indexing, as used in the representation of language, originates from Pentti Kanerva's work on sparse distributed memory, and can be described as an incremental formulation of a random projection.
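
As an illustration of the incremental scheme, the following Python sketch assigns each context (here, a whole document) a fixed sparse ternary index vector and lets every term accumulate the index vectors of the contexts it occurs in, so the dimensionality never grows as new terms appear. The function names and parameter choices (make_index_vector, dim=2000, nnz=10) are illustrative assumptions, not part of any standard implementation.

```python
import numpy as np

def make_index_vector(dim, nnz, rng):
    """Sparse ternary index vector: mostly zeros with a few random +1/-1 entries."""
    v = np.zeros(dim)
    positions = rng.choice(dim, size=nnz, replace=False)
    v[positions] = rng.choice([1.0, -1.0], size=nnz)
    return v

def random_indexing(docs, dim=2000, nnz=10, seed=0):
    """Accumulate term context vectors incrementally from document index vectors."""
    rng = np.random.default_rng(seed)
    term_vec = {}
    for tokens in docs:
        index_vec = make_index_vector(dim, nnz, rng)  # fixed random label for this document
        for tok in tokens:
            # New terms get a zero vector; the model's dimensionality never changes.
            term_vec.setdefault(tok, np.zeros(dim))
            term_vec[tok] += index_vec
    return term_vec

# Usage: cosine similarity between accumulated term vectors approximates
# similarity in the implicit term-document co-occurrence space.
docs = [["random", "projection", "vector"],
        ["random", "indexing", "vector"]]
vecs = random_indexing(docs)
a, b = vecs["random"], vecs["vector"]
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(round(float(cosine), 3))
```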

Random indexing can also be shown to be a random projection technique for the construction of Euclidean spaces, i.e. L2-normed vector spaces. In Euclidean spaces, random projections are elucidated by the Johnson–Lindenstrauss lemma.
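
For reference, the lemma guarantees that for any 0 < ε < 1 and any set of n points in a high-dimensional Euclidean space, there is a linear map f into roughly O(ε⁻² log n) dimensions (in practice realized by a random projection) such that, for every pair of points u and v,

    (1 − ε)‖u − v‖² ≤ ‖f(u) − f(v)‖² ≤ (1 + ε)‖u − v‖².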

TopSig extends the random indexing model to produce bit vectors that are compared with the Hamming distance similarity function. It has been used to improve the performance of information retrieval and document clustering. In a similar line of research, Random Manhattan Integer Indexing has been proposed to improve the performance of methods that employ the Manhattan distance between text units. Many random indexing methods primarily derive similarity from the co-occurrence of items in a corpus; Reflective Random Indexing derives similarity both from direct co-occurrence and from shared occurrence with other items.
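
The core idea behind bit-vector signatures can be sketched as follows; this is a minimal illustration of sign binarization and Hamming-distance comparison, not TopSig's actual implementation. Each real-valued context vector is reduced to one bit per dimension, and similar vectors then differ in few bit positions.

```python
import numpy as np

def to_signature(vec):
    """Binarize a real-valued context vector by sign: component >= 0 -> 1, else 0."""
    return (np.asarray(vec) >= 0).astype(np.uint8)

def hamming_distance(sig_a, sig_b):
    """Count the bit positions in which two signatures differ."""
    return int(np.count_nonzero(sig_a != sig_b))

rng = np.random.default_rng(0)
v1 = rng.normal(size=64)
v2 = v1 + rng.normal(scale=0.3, size=64)   # a small perturbation of v1
v3 = rng.normal(size=64)                   # an unrelated vector
s1, s2, s3 = (to_signature(v) for v in (v1, v2, v3))
# The nearby pair should differ in far fewer bits than the unrelated pair.
print(hamming_distance(s1, s2), hamming_distance(s1, s3))
```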
