Lattice problem

In computer science, lattice problems are a class of optimization problems on lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems. For applications in such cryptosystems, lattices over vector spaces (often $\mathbb{Q}^n$) or free modules (often $\mathbb{Z}^n$) are generally considered.

For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm $L^2$, but other norms (such as $L^p$) are also considered and show up in a variety of results. Let $\lambda(L)$ denote the length of the shortest non-zero vector in the lattice L, that is,

$\lambda(L) = \min_{v \in L \setminus \{0\}} \|v\|_N.$

Shortest vector problem (SVP)

In SVP, a basis of a vector space V and a norm N (often $L^2$) are given for a lattice L, and one must find the shortest non-zero vector in L, as measured by N. In other words, the algorithm should output a non-zero vector $v$ such that $\|v\|_N = \lambda(L)$.

In the $\gamma$-approximation version $\mathrm{SVP}_\gamma$, one must find a non-zero lattice vector of length at most $\gamma \cdot \lambda(L)$.
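To make the definition concrete, here is a small brute-force sketch in Python (with NumPy; the basis in the example is a hypothetical one chosen for illustration) that finds a shortest non-zero vector of a low-dimensional integer lattice by enumerating coefficient vectors in a bounded box. None of the algorithms discussed below work this way; the search is exponential in the dimension, and the bound must be large enough for the answer to be correct.

```python
import itertools

import numpy as np


def shortest_vector_bruteforce(B, bound=5):
    """Return (v, ||v||) for a shortest non-zero vector v of the lattice
    spanned by the rows of B, found by trying every integer coefficient
    vector in [-bound, bound]^n. Only feasible in very small dimensions;
    `bound` must be large enough when the basis is far from reduced."""
    n = B.shape[0]
    best_vec, best_len = None, np.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue  # skip the zero vector, which is always in the lattice
        v = np.array(coeffs) @ B      # integer combination of the basis rows
        length = np.linalg.norm(v)    # Euclidean (L2) norm
        if length < best_len:
            best_vec, best_len = v, length
    return best_vec, best_len


# A "bad" basis of Z^2: lambda(L) is still 1, e.g. the vector (0, 1).
B = np.array([[1, 2], [3, 7]])
print(shortest_vector_bruteforce(B))  # -> (array([0, 1]), 1.0)
```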

Known results

The exact version of the problem is only known to be NP-hard under randomized reductions.

By contrast, the equivalent problem with respect to the uniform norm ($L^\infty$) is known to be NP-hard.

Approach techniques: the Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm produces a "relatively short vector" in polynomial time, but does not solve the problem exactly. Kannan's HKZ basis reduction algorithm solves the problem in $n^{n/(2e) + o(n)}$ time, where $n$ is the dimension. Lastly, Schnorr presented a technique that interpolates between LLL and HKZ, called block reduction. Block reduction works with HKZ bases, and if the number of blocks is chosen to be larger than the dimension, the resulting algorithm is Kannan's full HKZ basis reduction.
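For illustration, below is a compact, unoptimized floating-point sketch of LLL reduction (with the standard parameter $\delta = 3/4$) acting on the rows of an integer basis matrix. Serious uses would rely on an established implementation (e.g., the fplll/fpylll libraries) rather than this toy version.

```python
import numpy as np


def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B.
    Returns (B*, mu) with mu[i, j] = <b_i, b*_j> / <b*_j, b*_j>."""
    n = B.shape[0]
    Bstar = np.zeros(B.shape, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu[i, j] * Bstar[j]
    return Bstar, mu


def lll_reduce(B, delta=0.75):
    """Toy LLL reduction of the rows of an integer basis matrix B.
    Returns a basis of the same lattice consisting of relatively short,
    nearly orthogonal vectors. Recomputing Gram-Schmidt after every
    update keeps the code short but is far from efficient."""
    B = B.astype(np.int64).copy()
    n = B.shape[0]
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):     # size reduction: |mu[k, j]| <= 1/2
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                Bstar, mu = gram_schmidt(B)
        # Lovász condition between rows k-1 and k.
        if np.dot(Bstar[k], Bstar[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]  # swap the two rows
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B


# The first row of the reduced basis is a fairly short lattice vector.
print(lll_reduce(np.array([[1, 2], [3, 7]])))
```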

GapSVP

The problem $\mathrm{GapSVP}_\beta$ consists of distinguishing between the instances of SVP in which the answer is at most 1 or larger than $\beta$, where $\beta$ can be a fixed function of the dimension $n$. Given a basis for the lattice, the algorithm must decide whether $\lambda(L) \le 1$ or $\lambda(L) > \beta$. Like other promise problems, the algorithm is allowed to err on all other cases.

Yet another version of the problem is $\mathrm{GapSVP}_{\zeta, \gamma}$ for some functions $\zeta, \gamma$. The input to the algorithm is a basis $B$ and a number $d$. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that $\lambda(L(B)) \le \zeta(n)$, and that $1 \le d \le \zeta(n) / \gamma(n)$, where $n$ is the dimension. The algorithm must accept if $\lambda(L(B)) \le d$, and reject if $\lambda(L(B)) \ge \gamma(n) \cdot d$. For large $\zeta$ (i.e., $\zeta(n) > 2^{n/2}$), the problem is equivalent to $\mathrm{GapSVP}_\gamma$, because preprocessing with the LLL algorithm makes the second condition (and hence $\zeta$) redundant.

Closest vector problem (CVP)

In CVP, a basis of a vector space V and a metric M (often $L^2$) are given for a lattice L, as well as a vector $v$ in V but not necessarily in L. One must find the vector in L closest to $v$, as measured by M. In the $\gamma$-approximation version $\mathrm{CVP}_\gamma$, one must find a lattice vector at distance at most $\gamma$ times the distance from $v$ to the lattice.
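A simple way to approximate CVP when a reasonably good basis is available is Babai's rounding-off heuristic: write the target in the given basis, round each coordinate to the nearest integer, and map back into the lattice. A minimal sketch follows; the basis and target are hypothetical examples, and the quality of the answer depends heavily on how reduced the supplied basis is.

```python
import numpy as np


def babai_rounding(B, t):
    """Approximate the vector of the lattice spanned by the rows of B that
    is closest to the target t, using Babai's rounding-off heuristic.
    The answer is only guaranteed to be good when B is a reduced basis."""
    coeffs = np.linalg.solve(B.T.astype(float), t.astype(float))  # real coordinates of t in basis B
    return np.rint(coeffs).astype(int) @ B                        # round and map back into the lattice


B = np.array([[1, 0], [0, 1]])   # a (trivially) good basis of Z^2
t = np.array([2.4, -0.7])
print(babai_rounding(B, t))      # -> [ 2 -1]
```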

Relationship with SVP

The closest vector problem is a generalization of the shortest vector problem. It is easy to show that, given an oracle for $\mathrm{CVP}_\gamma$ (defined above), one can solve $\mathrm{SVP}_\gamma$ by making some queries to the oracle. The naive method of finding the shortest vector by calling the $\mathrm{CVP}_\gamma$ oracle to find the vector closest to 0 does not work, because 0 is itself a lattice vector and the algorithm could simply output 0.

The reduction from $\mathrm{SVP}_\gamma$ to $\mathrm{CVP}_\gamma$ is as follows: Suppose that the input to the $\mathrm{SVP}_\gamma$ problem is a basis $B = [b_1, b_2, \ldots, b_n]$ of the lattice. Consider the basis $B^i = [b_1, \ldots, 2 b_i, \ldots, b_n]$ and let $x_i$ be the vector returned by $\mathrm{CVP}_\gamma(B^i, b_i)$. The claim is that the shortest vector in the set $\{x_i - b_i\}$ is a shortest vector in the given lattice.
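A sketch of this reduction in code, using a brute-force search as a stand-in for the $\mathrm{CVP}_\gamma$ oracle (the helper names are illustrative, not standard):

```python
import itertools

import numpy as np


def cvp_bruteforce(B, t, bound=5):
    """Toy stand-in for a CVP oracle: the vector of the lattice spanned by
    the rows of B that is closest to t, found by exhaustive search over
    integer coefficients in [-bound, bound]^n."""
    n = B.shape[0]
    best, best_dist = None, np.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        v = np.array(coeffs) @ B
        d = np.linalg.norm(v - t)
        if d < best_dist:
            best, best_dist = v, d
    return best


def svp_via_cvp(B):
    """Solve SVP with n calls to the CVP oracle: for each i, double b_i to
    get the sublattice basis B^i, ask for the vector of that sublattice
    closest to b_i, and keep the shortest difference x_i - b_i."""
    n = B.shape[0]
    best, best_len = None, np.inf
    for i in range(n):
        Bi = B.copy()
        Bi[i] = 2 * B[i]               # b_i is not in the sublattice L(B^i)
        xi = cvp_bruteforce(Bi, B[i])  # closest vector of L(B^i) to b_i
        cand = xi - B[i]               # a non-zero vector of the original lattice
        if np.linalg.norm(cand) < best_len:
            best, best_len = cand, np.linalg.norm(cand)
    return best


print(svp_via_cvp(np.array([[1, 2], [3, 7]])))  # a vector of length 1
```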

Known results

Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP. Using PCP tools, Arora et al. showed that CVP is hard to approximate within a factor of $2^{\log^{1-\epsilon} n}$ unless $\mathrm{NP} \subseteq \mathrm{DTIME}(2^{\mathrm{poly}(\log n)})$. Dinur et al. strengthened this by giving an NP-hardness result with $\epsilon = (\log \log n)^{-c}$ for $c < 1/2$.

Sphere decoding

Algorithms for CVP, especially the Fincke and Pohst variant, have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals). In this context it is called sphere decoding, due to the sphere radius used internally by many CVP solutions.
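As a toy illustration of the idea for an uncoded MIMO link with BPSK symbols (the channel matrix, transmitted vector, and radius below are hypothetical), the decoder searches the finite set of transmit vectors but discards every candidate outside a sphere of the chosen radius around the received point. Practical sphere decoders prune an enumeration tree rather than checking every candidate as done here.

```python
import itertools

import numpy as np


def sphere_decode_bpsk(H, y, radius):
    """Toy sphere decoder for y = H x + noise with x in {-1, +1}^n.
    Keeps only candidates whose residual ||y - H x|| lies inside the
    given search radius; returns None if the sphere is empty."""
    n = H.shape[1]
    best, best_dist = None, radius
    for bits in itertools.product((-1, 1), repeat=n):
        x = np.array(bits)
        d = np.linalg.norm(y - H @ x)
        if d <= best_dist:
            best, best_dist = x, d
    return best


H = np.array([[1.0, 0.4], [0.3, 1.1]])      # hypothetical 2x2 channel
x_sent = np.array([1, -1])
y = H @ x_sent + np.array([0.05, -0.02])    # received signal with small noise
print(sphere_decode_bpsk(H, y, radius=1.0)) # -> [ 1 -1]
```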

It has also been applied in the field of integer ambiguity resolution for carrier-phase GNSS (GPS), where it is known as the LAMBDA method.

GapCVP

This problem is similar to the GapSVP problem. For $\mathrm{GapCVP}_\beta$, the input consists of a lattice basis and a vector $v$, and the algorithm must answer whether

  • there is a lattice vector such that the distance between it and $v$ is at most 1, or
  • every lattice vector is at a distance greater than $\beta$ away from $v$.

Known results

The problem is trivially contained in NP for any approximation factor.

Schnorr, in 1987, showed that deterministic polynomial time algorithms can solve the problem for $\beta = 2^{O(n (\log \log n)^2 / \log n)}$. Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of $\beta = 2^{O(n \log \log n / \log n)}$.

In 1993, Banaszczyk showed that $\mathrm{GapCVP}_n$ is in $\mathrm{NP} \cap \mathrm{coNP}$. In 2000, Goldreich and Goldwasser showed that $\beta = \sqrt{n / \log n}$ puts the problem in both NP and coAM. In 2005, Aharonov and Regev showed that for some constant $c$, the problem with $\beta = c \sqrt{n}$ is in $\mathrm{NP} \cap \mathrm{coNP}$.

For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for $\beta = n^{o(1 / \log \log n)}$.

Shortest independent vectors problem (SIVP)

Given a lattice L of dimension n, the algorithm must output n linearly independent vectors $v_1, v_2, \ldots, v_n$ such that $\max_i \|v_i\| \le \min_B \max_i \|b_i\|$, where the minimum on the right-hand side is taken over all bases $B = \{b_1, \ldots, b_n\}$ of the lattice.

In the $\gamma$-approximate version, given a lattice L with dimension n, one must find n linearly independent vectors $v_1, v_2, \ldots, v_n$ with $\max_i \|v_i\| \le \gamma \cdot \lambda_n(L)$, where $\lambda_n(L)$ is the $n$-th successive minimum of $L$.

Bounded distance decoding

This problem is similar to CVP. Given a vector whose distance from the lattice is at most $\lambda(L) / 2$, the algorithm must output the closest lattice vector to it.
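Babai's nearest-plane algorithm is a classic tool in this regime: with a sufficiently good (e.g., LLL-reduced) basis and a target sufficiently close to the lattice (closer than half the length of the shortest Gram–Schmidt vector), it returns the closest lattice point. A sketch, repeating the Gram–Schmidt helper from the LLL sketch above:

```python
import numpy as np


def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (same helper as in
    the LLL sketch above); returns the orthogonalized rows and the mu's."""
    n = B.shape[0]
    Bstar = np.zeros(B.shape, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu[i, j] * Bstar[j]
    return Bstar, mu


def nearest_plane(B, t):
    """Babai's nearest-plane algorithm on the rows of B: walk from the last
    basis vector to the first, each time rounding the coefficient of the
    remainder along the corresponding Gram-Schmidt direction. With a good
    basis and a target close enough to the lattice, the result is the
    closest lattice vector."""
    Bstar, _ = gram_schmidt(B)
    remainder = t.astype(float).copy()
    v = np.zeros(B.shape[1])
    for i in range(B.shape[0] - 1, -1, -1):
        c = round(np.dot(remainder, Bstar[i]) / np.dot(Bstar[i], Bstar[i]))
        v += c * B[i]
        remainder -= c * B[i]
    return v


print(nearest_plane(np.array([[1, 0], [0, 1]]), np.array([2.4, -0.7])))  # -> [ 2. -1.]
```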

Covering radius problem

Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice.

Shortest basis problem

Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis $B$, output an equivalent basis $B'$ such that the length of the longest vector in $B'$ is as short as possible.

The approximation version $\mathrm{SBP}_\gamma$ consists of finding a basis whose longest vector is at most $\gamma$ times longer than the longest vector in the shortest basis.

Use in cryptography

Average-case hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems on which to base cryptographic schemes. Moreover, the worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.

The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem, able to output an almost reduced lattice basis in polynomial time. This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice. However, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai.

In his seminal papers, Ajtai showed that the SVP problem was NP-hard (under randomized reductions) and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP, making it the first result to use worst-case hardness to create secure systems.
