Locality-sensitive hashing (LSH) reduces the dimensionality of high-dimensional data. LSH hashes input items so that similar items map to the same “buckets” with high probability (the number of buckets being much smaller than the universe of possible input items). LSH differs from conventional and cryptographic hash functions because it aims to maximize the probability of a “collision” for similar items. Locality-sensitive hashing has much in common with data clustering and nearest neighbor search.
Hashing-based approximate nearest neighbor search algorithms generally use one of two main categories of hashing methods: data-independent methods, such as locality-sensitive hashing (LSH), and data-dependent methods, such as locality-preserving hashing (LPH).
Definition
An LSH family $\mathcal{F}$ is defined for a metric space $\mathcal{M} = (M, d)$, a threshold $R > 0$, an approximation factor $c > 1$, and probabilities $P_1$ and $P_2$. The family $\mathcal{F}$ is a set of functions $h \colon M \to S$ that map elements of the metric space to buckets $s \in S$, and it must satisfy the following conditions for any two points $p, q \in M$ and a hash function $h$ chosen uniformly at random from $\mathcal{F}$:
- if $d(p, q) \le R$, then $h(p) = h(q)$ (i.e., p and q collide) with probability at least $P_1$;
- if $d(p, q) \ge cR$, then $h(p) = h(q)$ with probability at most $P_2$.
A family is interesting when $P_1 > P_2$. Such a family $\mathcal{F}$ is called $(R, cR, P_1, P_2)$-sensitive.
Alternatively, it is defined with respect to a universe of items U that have a similarity function $\phi \colon U \times U \to [0, 1]$. An LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function $h \in H$ chosen according to D satisfies $\Pr[h(a) = h(b)] = \phi(a, b)$ for each $a, b \in U$.
Amplification
Given a $(d_1, d_2, p_1, p_2)$-sensitive family $\mathcal{F}$, we can construct new families $\mathcal{G}$ by either the AND-construction or the OR-construction of $\mathcal{F}$.
To create an AND-construction, we define a new family $\mathcal{G}$ of hash functions g, where each function g is constructed from k random functions $h_1, \ldots, h_k$ from $\mathcal{F}$. We then say that $g(x) = g(y)$ if and only if $h_i(x) = h_i(y)$ for all $i = 1, 2, \ldots, k$. Since the members of $\mathcal{F}$ are independently chosen for any $g \in \mathcal{G}$, $\mathcal{G}$ is a $(d_1, d_2, p_1^k, p_2^k)$-sensitive family.
To create an OR-construction, we define a new family $\mathcal{G}$ of hash functions g, where each function g is constructed from k random functions $h_1, \ldots, h_k$ from $\mathcal{F}$. We then say that $g(x) = g(y)$ if and only if $h_i(x) = h_i(y)$ for one or more values of i. Since the members of $\mathcal{F}$ are independently chosen for any $g \in \mathcal{G}$, $\mathcal{G}$ is a $(d_1, d_2, 1 - (1 - p_1)^k, 1 - (1 - p_2)^k)$-sensitive family.
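To make the constructions concrete, here is a minimal Python sketch; the function names, and the representation of a family as a list of callables, are choices made here for illustration:

```python
import random

def and_construction(family, k, rng=random):
    """AND-construction: g(x) = (h_1(x), ..., h_k(x)); two points
    collide only if they collide under every h_i, so a base
    collision probability p becomes p**k."""
    hs = [rng.choice(family) for _ in range(k)]
    return lambda x: tuple(h(x) for h in hs)

def or_construction(family, k, rng=random):
    """OR-construction: draw k base hashes and declare a collision
    if x and y collide under at least one h_i, so a base collision
    probability p becomes 1 - (1 - p)**k."""
    hs = [rng.choice(family) for _ in range(k)]
    return lambda x, y: any(h(x) == h(y) for h in hs)

# How the constructions move the two probabilities (k = 4):
p1, p2, k = 0.8, 0.4, 4
print(p1**k, p2**k)                    # AND: 0.4096 0.0256
print(1 - (1-p1)**k, 1 - (1-p2)**k)    # OR:  0.9984 0.8704
```

Note how the AND-construction shrinks both probabilities while the OR-construction grows both; chaining the two lets one push $p_1$ toward 1 and $p_2$ toward 0 at the cost of evaluating more base hash functions.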
Applications
LSH has been applied to several problem domains, including:
- near-duplicate detection;
- hierarchical clustering;
- genome-wide association studies;
- image and audio similarity identification;
- audio and video fingerprinting;
- nearest neighbor search.
Bit sampling for Hamming distance
One of the easiest ways to construct an LSH family is by bit sampling. This approach works for the Hamming distance over d-dimensional vectors $\{0, 1\}^d$. Here, the family $\mathcal{F}$ of hash functions is simply the family of all projections of points onto one of the d coordinates, i.e., $\mathcal{F} = \{h \colon \{0, 1\}^d \to \{0, 1\} \mid h(x) = x_i, \, i \in \{1, \ldots, d\}\}$, where $x_i$ is the i-th coordinate of x. A random function h from $\mathcal{F}$ simply selects a random bit from the input point. This family has parameters $P_1 = 1 - R/d$ and $P_2 = 1 - cR/d$.
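A minimal Python sketch of bit sampling, representing vectors as tuples of 0/1 values (names are illustrative):

```python
import random

def bit_sampling_hash(d, rng=random):
    """Draw one function from the bit-sampling family: project a
    d-dimensional binary vector onto a single random coordinate."""
    i = rng.randrange(d)
    return lambda x: x[i]

# Two vectors at Hamming distance r agree on a randomly sampled
# coordinate with probability 1 - r/d, matching P1 = 1 - R/d above.
h = bit_sampling_hash(8)
print(h((0, 1, 1, 0, 1, 0, 0, 1)))  # the sampled bit, 0 or 1
```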
Min-wise independent permutations
Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If π is a permutation on the indices of S, then for $A \subseteq S$ let $h(A) = \min_{a \in A} \pi(a)$.
Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets $A, B \subseteq S$, the event that $h(A) = h(B)$ corresponds exactly to the event that the minimizer of π over $A \cup B$ lies inside $A \cap B$. As h was chosen uniformly at random, $\Pr[h(A) = h(B)] = J(A, B)$, and $(H, D)$ defines an LSH scheme for the Jaccard index.
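A minimal Python sketch of this scheme, assuming a small, explicitly enumerable ground set and using fully random permutations rather than a min-wise independent family:

```python
import random

def minhash_function(ground_set, rng=random):
    """Draw a random permutation pi of the ground set and return
    h(A) = min over a in A of pi(a)."""
    shuffled = rng.sample(list(ground_set), len(ground_set))
    pi = {x: rank for rank, x in enumerate(shuffled)}
    return lambda A: min(pi[a] for a in A)

# Pr[h(A) == h(B)] equals the Jaccard index |A & B| / |A | B|,
# estimated here by averaging over many independent draws.
S = range(100)
A, B = set(range(0, 60)), set(range(30, 90))
hs = [minhash_function(S) for _ in range(1000)]
est = sum(h(A) == h(B) for h in hs) / len(hs)
print(est, len(A & B) / len(A | B))  # both close to 1/3
```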
Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent": a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen π. It has been established that a min-wise independent family of permutations must have size at least $\operatorname{lcm}(1, 2, \ldots, n) \ge e^{n - o(n)}$, which is exponential in n.
Because min-wise independent families are too big for practical applications, two relaxed notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to sets of cardinality at most k. Approximate min-wise independence requires the defining property to hold only up to a fixed additive error ε.
Nilsimsa Hash
Nilsimsa is an anti-spam focused locality-sensitive hashing algorithm. The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that Nilsimsa satisfies three requirements:
- The digest identifying each message should not vary significantly for changes that can be produced automatically.
- The encoding must be robust against intentional attacks.
- The encoding should support an extremely low risk of false positives.
TLSH
TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications. The goal of TLSH is to generate a hash digest of a document such that, if two digests have a low distance between them, the documents are likely to be similar to each other.
Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate than other similarity digest schemes such as TLSH, Ssdeep and Sdhash.
An implementation of TLSH is available as open-source software.
Random projection
The random projection method of LSH due to Moses Charikar, called SimHash (also sometimes called arccos), is designed to approximate the cosine distance between vectors. The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.
Given an input vector v and a hyperplane defined by r, we let $h(v) = \operatorname{sgn}(v \cdot r)$. That is, $h(v) = \pm 1$ depending on which side of the hyperplane v lies on.
Each possible choice of r defines a single function. Let H be the set of all such functions and let D be the uniform distribution once again. It is not difficult to prove that, for two vectors u and v, $\Pr[h(u) = h(v)] = 1 - \frac{\theta(u, v)}{\pi}$, where $\theta(u, v)$ is the angle between u and v.
In this instance hashing produces only a single bit. Two vectors' bits match with probability $1 - \theta(u, v)/\pi$, a quantity closely related to the cosine of the angle between them.
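A minimal Python sketch of SimHash, drawing the hyperplane normal from a Gaussian, which makes its direction uniformly distributed (names are illustrative):

```python
import math
import random

def simhash_function(d, rng=random):
    """Draw a random hyperplane normal r and return the one-bit
    hash h(v) = sign(v . r)."""
    r = [rng.gauss(0.0, 1.0) for _ in range(d)]
    return lambda v: 1 if sum(ri * vi for ri, vi in zip(r, v)) >= 0 else -1

# Pr[h(u) == h(v)] = 1 - theta(u, v)/pi, estimated empirically.
u, v = [1.0, 0.0], [1.0, 1.0]  # 45 degrees apart
hs = [simhash_function(2) for _ in range(2000)]
est = sum(h(u) == h(v) for h in hs) / len(hs)
print(est, 1 - (math.pi / 4) / math.pi)  # both close to 0.75
```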
Stable distributions
The hash function $h_{\mathbf{a}, b}(v) = \left\lfloor \frac{\mathbf{a} \cdot v + b}{r} \right\rfloor$ maps a d-dimensional vector v onto the set of integers. Each hash function in the family is indexed by a choice of random $\mathbf{a}$ and b, where $\mathbf{a}$ is a d-dimensional vector with entries chosen independently from a p-stable distribution and b is a real number chosen uniformly from the range $[0, r]$.
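A minimal Python sketch of one such hash function for Euclidean distance, using Gaussian entries (the Gaussian is 2-stable; a Cauchy distribution would be the 1-stable choice for $\ell_1$ distance):

```python
import math
import random

def pstable_hash(d, r, rng=random):
    """Draw one hash from the 2-stable family: a has i.i.d. Gaussian
    entries, b is uniform on [0, r], and h(v) = floor((a.v + b) / r)."""
    a = [rng.gauss(0.0, 1.0) for _ in range(d)]
    b = rng.uniform(0.0, r)
    return lambda v: math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / r)

h = pstable_hash(d=3, r=4.0)
print(h([1.0, 2.0, 0.5]), h([1.1, 2.0, 0.4]))  # nearby points often collide
```

The bucket width r trades off the two collision probabilities: a larger r makes nearby points collide more often, but also lets more distant points fall into the same bucket.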
Other construction methods for hash functions have been proposed to better fit the data. In particular, k-means hash functions perform better in practice than projection-based hash functions, but without any theoretical guarantee.
LSH algorithm for nearest neighbor search
One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family $\mathcal{F}$. The algorithm has two main parameters: the width parameter k and the number of hash tables L.
In the first step, we define a new family $\mathcal{G}$ of hash functions g, where each function g is obtained by concatenating k functions $h_1, \ldots, h_k$ from $\mathcal{F}$, i.e., $g(p) = [h_1(p), \ldots, h_k(p)]$. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from $\mathcal{F}$. The algorithm then constructs L hash tables, each corresponding to a different such randomly chosen hash function g.
In the preprocessing step we hash all n points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to $O(n)$ using standard hash functions.
Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance $cR$ from q is found.
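A minimal Python sketch of this index; the class name and interface are illustrative, not a standard API:

```python
import random
from collections import defaultdict

class LSHIndex:
    """L hash tables, each keyed by a function g that concatenates
    k base hashes drawn from a family (a list of callables)."""

    def __init__(self, family, k, L, rng=random):
        # Each g_j = (h_1, ..., h_k) with the h_i drawn from the family.
        self.gs = [[rng.choice(family) for _ in range(k)] for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _key(self, g, p):
        return tuple(h(p) for h in g)

    def index(self, points):
        # Preprocessing: hash every point into each of the L tables.
        for p in points:
            for g, table in zip(self.gs, self.tables):
                table[self._key(g, p)].append(p)

    def query(self, q, dist, c_R):
        # Scan the buckets q falls into; stop at the first candidate
        # within distance c*R of q (passed in as c_R).
        for g, table in zip(self.gs, self.tables):
            for p in table.get(self._key(g, q), ()):
                if dist(p, q) <= c_R:
                    return p
        return None
```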
Given the parameters k and L, the algorithm has the following performance guarantees:
- preprocessing time: $O(nLkt)$, where t is the time to evaluate a function $h \in \mathcal{F}$ on an input point;
- space: $O(nL)$, plus the space for storing data points;
- query time: $O(L(kt + dnP_2^k))$;
- the algorithm succeeds in finding a point within distance $cR$ from q (if a point within distance R exists) with probability at least $1 - (1 - P_1^k)^L$.
For a fixed approximation ratio $c = 1 + \epsilon$ and probabilities $P_1$ and $P_2$, one can set $k = \left\lceil \frac{\log n}{\log 1/P_2} \right\rceil$ and $L = \lceil n^{\rho} \rceil$, where $\rho = \frac{\log P_1}{\log P_2}$. One then obtains the following performance guarantees:
- preprocessing time: $O(n^{1 + \rho} kt)$;
- space: $O(n^{1 + \rho})$, plus the space for storing data points;
- query time: $O(n^{\rho}(kt + d))$.
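As a quick numeric check of these formulas, with made-up values for $P_1$, $P_2$, and n:

```python
import math

# Illustrative numbers only: P1 = 0.8, P2 = 0.4, n = 1,000,000 points.
P1, P2, n = 0.8, 0.4, 1_000_000
rho = math.log(P1) / math.log(P2)              # ~0.2435
k = math.ceil(math.log(n) / math.log(1 / P2))  # 16
L = math.ceil(n ** rho)                        # 29
print(rho, k, L)
```

With these values the index uses only about 29 tables of 16-wide keys for a million points, illustrating the sublinear $n^{\rho}$ dependence when $P_1 > P_2$.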