In coding theory, folded Reed–Solomon codes are like Reed–Solomon codes, but are obtained by mapping $m$ consecutive symbols of a Reed–Solomon codeword to a single symbol over the larger alphabet $\mathbb{F}_q^m$.
Contents
- History
- Definition
- Graphic description
- Folded Reed–Solomon codes and the Singleton bound
- Why folding might help
- How folded Reed–Solomon (FRS) codes and Parvaresh–Vardy (PV) codes are related
- Brief overview of list-decoding folded Reed–Solomon codes
- Linear-algebraic list decoding algorithm
- Step 1: The interpolation step
- Step 2: The root-finding step
- Step 3: The prune step
- Summary
- References
Folded Reed–Solomon codes are also a special case of Parvaresh–Vardy codes.
Using optimal parameters, for any $\varepsilon > 0$, one can list decode a folded Reed–Solomon code of rate $R$ up to a radius of $1 - R - \varepsilon$.
The term "folded Reed–Solomon codes" was coined in a paper by V. Y. Krachkovsky, which gave an algorithm for decoding Reed–Solomon codes with many random "phased burst" errors [1]. The list-decoding algorithm for folded RS codes corrects beyond the $1 - \sqrt{R}$ bound achieved for ordinary Reed–Solomon codes by the Guruswami–Sudan algorithm.
History
One of the ongoing challenges in coding theory is to have error-correcting codes achieve an optimal trade-off between rate and error-correction radius. Though an exactly optimal trade-off may not be achievable in practice (because of the limits established by noisy-channel coding theory), quasi-optimal trade-offs can be achieved theoretically.
Prior to the discovery of folded Reed–Solomon codes, the best error-correction radius achieved was $1 - \sqrt{R}$, by the Guruswami–Sudan list-decoding algorithm for Reed–Solomon codes.

An improvement upon this $1 - \sqrt{R}$ bound was made by Parvaresh and Vardy for low-rate codes.

For rates $R \to 0$, the Parvaresh–Vardy algorithm can decode a fraction of errors approaching $1 - O(R \log(1/R))$.

Folded Reed–Solomon codes improve on these previous constructions, and can be list decoded in polynomial time for a fraction $1 - R - \varepsilon$ of errors, for any constant $\varepsilon > 0$.
Definition
Consider a Reed–Solomon code of block length $n = q - 1$ over the field $\mathbb{F}_q$, with evaluation points $1, \gamma, \gamma^2, \ldots, \gamma^{n-1}$ for a primitive element (generator) $\gamma$ of $\mathbb{F}_q^*$.

The mapping for Reed–Solomon codes is then

$f \mapsto (f(1), f(\gamma), f(\gamma^2), \ldots, f(\gamma^{n-1}))$,

where $f$ is a polynomial of degree at most $k - 1$ over $\mathbb{F}_q$.

The $m$-folded version of this Reed–Solomon code, for a folding parameter $m$ dividing $n$, is the code of block length $N = n/m$ over the alphabet $\mathbb{F}_q^m$ whose $i$-th symbol, for $0 \le i < N$, is the $m$-tuple $(f(\gamma^{im}), f(\gamma^{im+1}), \ldots, f(\gamma^{im+m-1}))$.
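As a concrete illustration of this definition, the following sketch encodes and folds a message over a small prime field. The parameters ($p = 13$, $\gamma = 2$, $m = 3$, $k = 4$) are chosen for readability and are assumptions of this example, not values from the text.

```python
from fractions import Fraction

p = 13          # field size (prime, so F_p arithmetic is just mod p)
gamma = 2       # a primitive element of F_13 (2 generates the multiplicative group)
n = p - 1       # Reed–Solomon length: evaluate at 1, gamma, ..., gamma^(n-1)
m = 3           # folding parameter; must divide n
k = 4           # message length (polynomial degree < k); rate R = k/n

def rs_encode(msg):
    """Evaluate the message polynomial at 1, gamma, gamma^2, ..., gamma^(n-1)."""
    pts = [pow(gamma, i, p) for i in range(n)]
    return [sum(c * pow(x, j, p) for j, c in enumerate(msg)) % p for x in pts]

def fold(codeword):
    """Group m consecutive Reed–Solomon symbols into one symbol of F_p^m."""
    return [tuple(codeword[i * m:(i + 1) * m]) for i in range(n // m)]

msg = [1, 0, 5, 2]                  # f(X) = 1 + 5X^2 + 2X^3
folded = fold(rs_encode(msg))
print(folded)                       # N = n/m = 4 symbols, each a triple over F_13
# Folding does not change the rate: k field symbols are carried by n
# field symbols either way.
print(Fraction(k, n) == Fraction(k, m) / Fraction(n, m))  # True
```

The folded codeword carries exactly the same field elements as the unfolded one, only regrouped, which is the point made in the rate discussion below.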
Graphic description
The above definition is made clearer by taking the folding parameter to be $m = 3$.

The message is denoted by $f(X)$, which when encoded using Reed–Solomon encoding consists of the evaluations of $f$ at the $n$ points $1, \gamma, \ldots, \gamma^{n-1}$.

Then bundling is performed in groups of 3 elements, to give a codeword of length $n/3$ over the alphabet $\mathbb{F}_q^3$.
Something to be observed here is that the folding operation demonstrated does not change the rate $R$ of the original Reed–Solomon code.

To prove this, consider a linear $[n, k, d]_q$ code of rate $R = k/n$. After folding with parameter $m$, the code has length $n/m$ over the alphabet $\mathbb{F}_q^m$, and each codeword still carries the same $k$ message symbols, now amounting to $k/m$ alphabet symbols, so the rate remains $(k/m)/(n/m) = k/n = R$.
Folded Reed–Solomon codes and the Singleton bound
According to the asymptotic version of the Singleton bound, the relative distance $\delta$ of a code of rate $R$ must satisfy $\delta \le 1 - R$. Consequently, no code can be decoded, even in the list-decoding model, beyond an error fraction of $1 - R$: if fewer than $Rn$ symbols are correct, the received word carries too little information to determine the message. Folded Reed–Solomon codes, with a decoding radius of $1 - R - \varepsilon$, therefore come arbitrarily close to this bound.
Why folding might help
Folded Reed–Solomon codes are basically the same as Reed–Solomon codes, just viewed over a larger alphabet. To show how this might help, consider a folded Reed–Solomon code with folding parameter $m = 3$. An error in the folded code must corrupt a whole column of $m$ consecutive evaluations to corrupt a single folded symbol, so the decoder only has to handle this restricted class of error patterns rather than arbitrary ones.
Also, decoding a folded Reed–Solomon code is an easier task. Suppose we want to correct a third of errors. The decoding algorithm chosen must then handle any error pattern that corrupts every third symbol of the Reed–Solomon encoding. But after folding with $m = 3$, such an error pattern corrupts every symbol over the alphabet $\mathbb{F}_q^3$, and therefore exceeds the one-third error budget of the folded code, so the folded decoder never has to correct it.
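This error-pattern argument can be checked numerically. The toy sketch below (stand-in integer symbols rather than a real Reed–Solomon codeword; parameters $n = 12$, $m = 3$ are assumptions of the example) shows that a $1/3$ fraction of unfolded errors hitting every third position corrupts every folded column.

```python
n, m = 12, 3
codeword = list(range(n))                    # stand-in codeword symbols
received = [(c + 1) if i % 3 == 0 else c     # corrupt positions 0, 3, 6, 9:
            for i, c in enumerate(codeword)] # a 1/3 fraction of unfolded errors

fold = lambda w: [tuple(w[i * m:(i + 1) * m]) for i in range(n // m)]
bad_columns = sum(a != b for a, b in zip(fold(codeword), fold(received)))
print(bad_columns / (n // m))   # 1.0 -- every folded symbol differs
# A decoder for the folded code that corrects a 1/3 fraction of *folded*
# errors never has to handle this pattern, since here all columns are bad.
```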
How folded Reed–Solomon (FRS) codes and Parvaresh–Vardy (PV) codes are related
We can relate folded Reed–Solomon codes with Parvaresh–Vardy codes, which encode a polynomial $f$ of degree $k - 1$ by evaluating not just $f$ but also one or more correlated polynomials $f_1(X), \ldots, f_{s-1}(X)$ at the evaluation points. The folded Reed–Solomon code arises from the special correlation $f_1(X) = f(\gamma X)$, which is legitimate in the Parvaresh–Vardy framework because $f(X)^q \equiv f(\gamma X) \pmod{X^{q-1} - \gamma}$.
If we compare the folded RS code with $m = 4$ to a PV code of order 2 for the set of evaluation points $\{\gamma^j : 0 \le j < n,\ j \not\equiv 3 \pmod 4\}$,

we can see that in the PV encoding of $f$, each of the values $f(\gamma^{4i+1})$ and $f(\gamma^{4i+2})$ appears twice (once as a value of $f$ and once as a value of $f_1(X) = f(\gamma X)$ at the preceding point),

unlike in the folded RS encoding, in which every value appears only once. Thus, the PV and folded RS codes carry the same information, but the rate of the FRS code is bigger by a factor of $2(m-1)/m$ (here $3/2$).
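The duplication count and the rate factor can be verified with a small script. The parameters below ($n = 16$, $m = 4$; exponents of $\gamma$ only, no field arithmetic) are assumptions chosen to mirror the comparison above.

```python
from collections import Counter

n, m = 16, 4
frs_symbols = n                                     # each f(gamma^i) appears once
pv_points = [j for j in range(n) if j % m != m - 1] # exponents j with j mod 4 != 3
pv_pairs = [(j, j + 1) for j in pv_points]          # point gamma^j carries
                                                    # f(gamma^j) and f(gamma^(j+1))
pv_symbols = 2 * len(pv_points)

counts = Counter(e for pair in pv_pairs for e in pair)
print(sorted(j for j, c in counts.items() if c == 2))  # exponents seen twice
print(pv_symbols / frs_symbols)   # 1.5 = 2*(m-1)/m: the PV encoding spends
                                  # 3/2 as many field symbols on the same message
```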
Brief overview of list-decoding folded Reed–Solomon codes
A list-decoding algorithm which runs in quadratic time to decode FRS codes up to a radius of $1 - R - \varepsilon$ was devised by Guruswami. The algorithm follows the interpolation and root-finding paradigm: a low-degree multivariate polynomial $Q$ is first interpolated through the received values,

after which all the polynomials $f$ of degree at most $k - 1$ satisfying $Q(X, f(X), f(\gamma X), \ldots) = 0$ are retrieved; whenever the number of errors is within the decoding radius, the transmitted message is guaranteed to be among them.
Linear-algebraic list decoding algorithm
Guruswami presents a linear-algebraic list-decoding algorithm for folded Reed–Solomon codes that runs in quadratic time. It consists of three steps: interpolation, root finding, and pruning.
Step 1: The interpolation step
It is a Welch–Berlekamp-style interpolation (because it can be viewed as the higher-dimensional generalization of the Welch–Berlekamp algorithm). Suppose we received a word $y$ of the folded Reed–Solomon code; unfolding it gives values $y_0, y_1, \ldots, y_{n-1}$, where $y_{im+j}$ is the $j$-th component of the $i$-th folded symbol.
We interpolate the nonzero polynomial

$Q(X, Y_1, Y_2, \ldots, Y_s) = A_0(X) + A_1(X)Y_1 + A_2(X)Y_2 + \cdots + A_s(X)Y_s$

by using a carefully chosen degree parameter $D$, with $\deg A_i \le D$ for $1 \le i \le s$ and $\deg A_0 \le D + k - 1$.

So the interpolation requirements will be

$Q(\gamma^{im+j}, y_{im+j}, y_{im+j+1}, \ldots, y_{im+j+s-1}) = 0$ for $0 \le i < N$ and $0 \le j \le m - s$,

giving $N(m - s + 1)$ homogeneous linear conditions on the coefficients of $Q$.

Then the number of monomials in $Q$, that is, the number of unknown coefficients, is $(s+1)(D+1) + k - 1$.

Because the number of monomials in $Q$ exceeds the number of interpolation conditions when $D = \left\lfloor \frac{N(m-s+1) - k + 1}{s+1} \right\rfloor$, a nonzero $Q$ satisfying all the conditions exists, and it can be found by solving a homogeneous linear system.
Lemma 1. The interpolation step, i.e., finding a nonzero $Q$ satisfying all the conditions, can be done in near-linear time.
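The interpolation step can be made concrete with toy parameters. The sketch below assumes $Q(X, Y_1, Y_2) = A_0(X) + A_1(X)Y_1 + A_2(X)Y_2$ with $\deg A_0 \le D + k - 1$ and $\deg A_1, \deg A_2 \le D$, and hand-picked values $p = 13$, $\gamma = 2$, $m = 3$, $s = 2$, $k = 2$, $D = 2$ (chosen only so the monomial count, 10, exceeds the condition count, 8; these are not Guruswami's actual parameter choices). It sets up the homogeneous system and finds a kernel vector by plain Gaussian elimination, not the near-linear-time method of the lemma.

```python
p, gamma, m, s, k, D = 13, 2, 3, 2, 2, 2
n = p - 1                          # unfolded block length
f = [3, 1]                         # message polynomial f(X) = 3 + X
y = [sum(c * pow(pow(gamma, i, p), j, p) for j, c in enumerate(f)) % p
     for i in range(n)]            # received word: here an error-free encoding

# Unknowns: coefficients of A0 (D+k of them), then A1 and A2 (D+1 each).
cols = (D + k) + 2 * (D + 1)
rows = []
for i in range(n // m):            # one block of conditions per folded column
    for j in range(m - s + 1):     # sliding window of s values inside a column
        t = m * i + j
        x = pow(gamma, t, p)
        row = [pow(x, e, p) for e in range(D + k)]                  # A0 part
        row += [y[t] * pow(x, e, p) % p for e in range(D + 1)]      # A1*Y1 part
        row += [y[t + 1] * pow(x, e, p) % p for e in range(D + 1)]  # A2*Y2 part
        rows.append(row)

def nullspace_vector(M, cols, p):
    """Gaussian elimination mod p; return one nonzero kernel vector."""
    M = [r[:] for r in M]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)              # invert the pivot entry
        M[r] = [v * inv % p for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(cols) if c not in pivots)  # exists: cols > rank
    v = [0] * cols
    v[free] = 1
    for row_i, c in enumerate(pivots):
        v[c] = (-M[row_i][free]) % p
    return v

Q = nullspace_vector(rows, cols, p)
print(any(Q))   # True: a nonzero Q meeting all interpolation conditions exists
```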
For now, we have talked about everything we need for the multivariate polynomial $Q$. The remaining task is to relate $Q$ to the message: any candidate polynomial $f$ whose folded encoding agrees with the received word on sufficiently many columns must satisfy $Q(X, f(X), f(\gamma X), \ldots, f(\gamma^{s-1}X)) = 0$.

Here "agree" means that all the $m$ values inside a column match the corresponding components of the received symbol.
Lemma 2. Any polynomial $f$ of degree at most $k - 1$ whose folded encoding agrees with the received word in at least $t$ columns, where $t(m - s + 1) > D + k - 1$, satisfies $Q(X, f(X), f(\gamma X), \ldots, f(\gamma^{s-1}X)) \equiv 0$.

This holds because each agreeing column contributes $m - s + 1$ distinct roots of this univariate polynomial, whose degree is at most $D + k - 1$; a nonzero polynomial cannot have more roots than its degree, so it must vanish identically.
Combining Lemma 2 and the parameter choice $D = \left\lfloor \frac{N(m-s+1) - k + 1}{s+1} \right\rfloor$, the algorithm succeeds whenever the number of agreeing columns is at least $N\left(\frac{1}{s+1} + \frac{s}{s+1}\cdot\frac{mR}{m-s+1}\right)$, up to lower-order terms.

Further, we can get the decoding bound: the fraction of errors that can be corrected is $\frac{s}{s+1}\left(1 - \frac{mR}{m-s+1}\right)$.

We notice that the fractional agreement required tends to $R$ as $s$ grows and $m$ grows faster than $s$; choosing $s \approx 1/\varepsilon$ and $m \approx 1/\varepsilon^2$ gives a decoding radius of $1 - R - \varepsilon$.
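A quick numeric check of this trade-off, assuming the decoding-radius expression $\rho(s, m) = \frac{s}{s+1}\left(1 - \frac{mR}{m-s+1}\right)$ from Guruswami's analysis (the parameter pairs below are arbitrary illustrations):

```python
R = 0.5   # code rate, so list-decoding capacity is 1 - R = 0.5

# rho(s, m): fraction of errors handled with s interpolation variables
# and folding parameter m (m > s required)
rho = lambda s, m: s / (s + 1) * (1 - m * R / (m - s + 1))

for s, m in [(1, 4), (3, 16), (9, 100), (31, 1000)]:
    print(s, m, round(rho(s, m), 4))
# The radius increases toward 1 - R = 0.5 as s grows with m >> s.
```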
Step 2: The root-finding step
During this step, our task is to find all polynomials $f$ of degree at most $k - 1$ such that $Q(X, f(X), f(\gamma X), \ldots, f(\gamma^{s-1}X)) = 0$. Writing $f(X) = f_0 + f_1 X + \cdots + f_{k-1}X^{k-1}$, this identity is linear in the unknown coefficients $f_0, \ldots, f_{k-1}$.
Since the above equation forms a system of linear equations over $\mathbb{F}_q$ in the coefficients $f_0, f_1, \ldots, f_{k-1}$ of $f(X)$,

the solutions to the above equation form an affine subspace of $\mathbb{F}_q^k$.
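To illustrate that the root-finding condition is linear in the coefficients of $f$, the following sketch uses a hand-built $Q$ with $s = 2$ and constant $A_1, A_2$ (an assumption made for readability, not an interpolation output), and recovers the solution set by brute force over $\mathbb{F}_p^k$ rather than by solving the linear system directly.

```python
p, gamma, k = 13, 2, 2
# Q(X, Y1, Y2) = A0(X) + Y1 + Y2 with A0(X) = 7 + 10X, built by hand so
# that f(X) = 3 + X satisfies A0(X) + f(X) + f(gamma*X) = 0 over F_13.
A0 = [7, 10]

def is_root(f):
    # Coefficient of X^d in A0(X) + f(X) + f(gamma X) is
    # A0[d] + f[d] + f[d]*gamma^d: linear in the unknown f[d].
    return all((A0[d] + f[d] + f[d] * pow(gamma, d, p)) % p == 0
               for d in range(k))

solutions = [(f0, f1) for f0 in range(p) for f1 in range(p) if is_root((f0, f1))]
print(solutions)   # [(3, 1)] -- here the affine solution space is a single point
```

In general the solution set is an affine subspace; in this tiny instance it happens to be zero-dimensional.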
It is natural to ask how large the dimension of the solution space is, and whether there is any upper bound on it. Having an upper bound is very important in constructing an efficient list-decoding algorithm, because it bounds the number of candidate codewords the algorithm must output for any given decoding problem.
Actually, it indeed has an upper bound, as the following lemma argues.

Lemma 3. If the order of $\gamma$ is at least $k$ (which holds when $\gamma$ is primitive, since $k < q - 1$), then the solution space of the above linear system has dimension at most $s - 1$.

This lemma gives the upper bound on the dimension of the solution space; consequently all candidate messages lie in a subspace containing at most $q^{s-1}$ elements.
Finally, based on the above analysis, we have the following theorem.

Theorem 1. For the folded Reed–Solomon code of block length $N$ and rate $R$, the above algorithm finds a basis for a subspace of dimension at most $s - 1$ that contains every message polynomial $f$ whose folded encoding agrees with the received word in at least $N\left(\frac{1}{s+1} + \frac{s}{s+1}\cdot\frac{mR}{m-s+1}\right)$ columns, and it runs in quadratic time.

When $s \approx 1/\varepsilon$ and $m \approx 1/\varepsilon^2$, the fraction of errors handled approaches $1 - R - \varepsilon$.
Theorem 1 tells us exactly how large the error radius would be.
Now we finally get the solution subspace. However, there is still one problem standing. The list size in the worst case is $q^{s-1}$, which is far larger than polynomial in the block length.
Things get better if we change the code by carefully choosing only a subset of all possible degree-$(k-1)$ polynomials as messages, namely those whose coefficient vectors lie in a subspace-evasive set.
Step 3: The prune step
By converting the problem of decoding a folded Reed–Solomon code into two linear systems (one used for the interpolation step and another used to find the candidate solution subspace), the complexity of the decoding problem is successfully reduced to quadratic. However, in the worst case, the bound on the output list size is still poor.
It was mentioned in Step 2 that, if one carefully chooses only a subset of all possible degree-$(k-1)$ polynomials as messages, the worst-case list size can be reduced.
To achieve this goal, the idea is to limit the coefficient vector $(f_0, f_1, \ldots, f_{k-1})$ to a subspace-evasive subset of $\mathbb{F}_q^k$, that is, a subset that has small intersection with every affine subspace of dimension at most $s - 1$.

This is done while making sure that the rate is reduced by a factor of at most $(1 - \varepsilon)$.

The bound for the list size in the worst case then becomes a constant depending only on $\varepsilon$.
During this step, as it has to check each element of the solution subspace that we get from Step 2 against the restricted message set, it can dominate the running time, which is why Step 3 is excluded from the quadratic-time claim.
Dvir and Lovett improved the result, building on the work of Guruswami, by constructing explicit subspace-evasive sets that reduce the list size to a constant.
Only the idea used to prune the solution subspace is presented here. For the details of the pruning process, please refer to the papers by Guruswami, and by Dvir and Lovett, listed in the references.
Summary
If Step 3 is not considered, this algorithm runs in quadratic time. A summary of the algorithm is listed below.

- Step 1 (interpolation): solve a homogeneous linear system to find a nonzero polynomial $Q(X, Y_1, \ldots, Y_s)$ vanishing on the received values.
- Step 2 (root finding): solve the linear system $Q(X, f(X), f(\gamma X), \ldots, f(\gamma^{s-1}X)) = 0$ for the coefficients of $f$, obtaining an affine subspace of dimension at most $s - 1$ that contains all candidate messages.
- Step 3 (pruning): intersect that subspace with the restricted (subspace-evasive) message set to output the final list.