
List decoding

In coding theory, list decoding is an alternative to unique decoding of error-correcting codes in the presence of large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities, one of which is correct. This allows for handling a greater number of errors than unique decoding allows.

The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, cannot tolerate a greater fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs a list of codewords for worst-case pathological error patterns, with the actual transmitted codeword included in the output list. For typical error patterns, though, the decoder outputs a unique single codeword, given a received word, and this is almost always the case (however, this is not known to be true for all codes). The improvement here is significant in that the error-correction performance doubles, because the decoder is no longer confined by the half-the-minimum-distance barrier. This model is very appealing because having a list of codewords is certainly better than just giving up. The notion of list decoding has many interesting applications in complexity theory.

The way the channel noise is modeled plays a crucial role in that it governs the rate at which reliable communication is possible. There are two main schools of thought in modeling the channel behavior:

  • Probabilistic noise model, studied by Shannon, in which the channel noise is modeled precisely, in the sense that the probabilistic behavior of the channel is well known and the probability of too many or too few errors occurring is low.
  • Worst-case or adversarial noise model, considered by Hamming, in which the channel acts as an adversary that arbitrarily corrupts the codeword, subject to a bound on the total number of errors.

The highlight of list decoding is that even under adversarial noise conditions, it is possible to achieve the information-theoretic optimal trade-off between the rate and the fraction of errors that can be corrected. Hence, in a sense, this is like improving the error-correction performance to that possible under a weaker, stochastic noise model.

Mathematical formulation

Let $\mathcal{C}$ be an $(n, k, d)_q$ error-correcting code; in other words, $\mathcal{C}$ is a code of length $n$, dimension $k$ and minimum distance $d$ over an alphabet $\Sigma$ of size $q$. The list-decoding problem can now be formulated as follows:

Input: Received word $x \in \Sigma^n$, error bound $e$

Output: A list of all codewords $x_1, x_2, \ldots, x_m \in \mathcal{C}$ whose Hamming distance from $x$ is at most $e$.
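
Operationally, the formulation above just says "report every codeword within distance $e$ of the received word". The following is a direct, illustrative rendering of this Input/Output specification (exhaustive search over an explicitly listed code; function names are ours, not from the source):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def list_decode(code, received, e):
    """Return every codeword within Hamming distance e of the received word.

    `code` is an explicit list of codewords; real decoders avoid this
    exhaustive scan, whose cost is proportional to the code size."""
    return [c for c in code if hamming_distance(c, received) <= e]

# Single parity-check code of length 3 (minimum distance d = 2).
code = ["000", "011", "101", "110"]
print(list_decode(code, "000", 0))  # ['000'] -- unique decoding regime
print(list_decode(code, "001", 1))  # ['000', '011', '101'] -- a genuine list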

Motivation for list decoding

Given a received word $y$, which is a noisy version of some transmitted codeword $c$, the decoder tries to output the transmitted codeword by placing its bet on the codeword "nearest" to the received word, where nearness is measured by Hamming distance. If $d$ is the minimum Hamming distance of a code $\mathcal{C}$, then there exist two codewords $c_1$ and $c_2$ that differ in exactly $d$ positions. Now, in the case where the received word $y$ is equidistant from the codewords $c_1$ and $c_2$, unambiguous decoding becomes impossible: the decoder cannot decide which of $c_1$ and $c_2$ to output as the original transmitted codeword. As a result, half the minimum distance acts as a combinatorial barrier beyond which unambiguous error-correction is impossible, if we only insist on unique decoding. However, received words such as $y$ above occur only in the worst case, and if one looks at the way Hamming balls are packed in high-dimensional space, then even for error patterns $e$ beyond half the minimum distance there is typically only a single codeword $c$ within Hamming distance $e$ from the received word. This claim has been shown to hold with high probability for a random code picked from a natural ensemble, and more so for Reed–Solomon codes, which are well studied and quite ubiquitous in real-world applications. In fact, Shannon's proof of the capacity theorem for $q$-ary symmetric channels can be viewed in light of the above claim for random codes.
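
As a tiny worked example of this barrier (our own illustration): consider the binary repetition code $\{0000, 1111\}$ with $d = 4$. The received word $y = 0011$ satisfies

$$\Delta(0011, 0000) = \Delta(0011, 1111) = 2 = \frac{d}{2},$$

so no unique decoder can safely decode beyond $\lfloor (d-1)/2 \rfloor = 1$ error.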

Under the mandate of list decoding, for worst-case errors, the decoder is allowed to output a small list of codewords. With some context-specific or side information, it may be possible to prune the list and recover the original transmitted codeword. Hence, in general, this seems to be a stronger error-recovery model than unique decoding.

List-decoding potential

For a polynomial-time list-decoding algorithm to exist, we need the combinatorial guarantee that any Hamming ball of radius $pn$ around a received word $r$ (where $p$ is the fraction of errors in terms of the block length $n$) has a small number of codewords. This is because the list size itself is clearly a lower bound on the running time of the algorithm. Hence, we require the list size to be a polynomial in the block length $n$ of the code. A combinatorial consequence of this requirement is that it imposes an upper bound on the rate of a code. List decoding promises to meet this upper bound. It has been shown non-constructively that codes of rate $R$ exist that can be list decoded up to a fraction of errors approaching $1 - R$. The quantity $1 - R$ is referred to in the literature as the list-decoding capacity. This is a substantial gain compared to the unique decoding model, as we now have the potential to correct twice as many errors. Naturally, we need at least a fraction $R$ of the transmitted symbols to be correct in order to recover the message. This is an information-theoretic lower bound on the number of correct symbols required to perform decoding, and with list decoding, we can potentially achieve this information-theoretic limit. However, to realize this potential, we need explicit codes (codes that can be constructed in polynomial time) and efficient algorithms to perform encoding and decoding.
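
As a concrete comparison (for codes meeting the Singleton bound, so that the relative distance is $\delta = 1 - R$): unique decoding can correct at most half the relative distance, a fraction

$$\frac{1 - R}{2},$$

whereas list decoding can approach $1 - R$. For example, at rate $R = \tfrac{1}{2}$ this improves the correctable fraction from $\tfrac{1}{4}$ to $\tfrac{1}{2}$, the doubling referred to above.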

(p, L)-list-decodability

For any error fraction $0 \leq p \leq 1$ and an integer $L \geq 1$, a code $\mathcal{C} \subseteq \Sigma^n$ is said to be list decodable up to a fraction $p$ of errors with list size at most $L$, or $(p, L)$-list-decodable, if for every $y \in \Sigma^n$, the number of codewords $c \in \mathcal{C}$ within Hamming distance $pn$ from $y$ is at most $L$.
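
The definition can be checked directly for small codes. A brute-force (exponential-time, illustrative only) test of $(p, L)$-list-decodability, written for this article with helper names of our own choosing:

```python
from itertools import product

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_pL_list_decodable(code, alphabet, p, L):
    """Check (p, L)-list-decodability by examining every center y:
    no Hamming ball of radius p*n may contain more than L codewords."""
    n = len(code[0])
    radius = int(p * n)
    for y in product(alphabet, repeat=n):
        if sum(hamming_distance(c, y) <= radius for c in code) > L:
            return False
    return True

code = ["000", "011", "101", "110"]
print(is_pL_list_decodable(code, "01", p=1/3, L=3))  # True
print(is_pL_list_decodable(code, "01", p=1/3, L=2))  # False (e.g. y = 001)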

Combinatorics of list decoding

The relation between the list decodability of a code and other fundamental parameters such as minimum distance and rate has been fairly well studied. It has been shown that every code can be list decoded using small lists beyond half the minimum distance, up to a bound called the Johnson radius. This is quite significant because it proves the existence of $(p, L)$-list-decodable codes of good rate with a list-decoding radius much larger than $\tfrac{d}{2}$. In other words, the Johnson bound rules out the possibility of having a large number of codewords in a Hamming ball of radius slightly greater than $\tfrac{d}{2}$, which means that it is possible to correct far more errors with list decoding.
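
For reference, one common form of the Johnson bound (stated here without proof; the precise list-size bounds vary by source) says that any $q$-ary code of relative distance $\delta$ is list-decodable with polynomial list sizes up to the radius

$$J_q(\delta) = \left(1 - \frac{1}{q}\right)\left(1 - \sqrt{1 - \frac{q\delta}{q-1}}\right),$$

which tends to $1 - \sqrt{1 - \delta}$ for large alphabets and exceeds $\delta/2$ for all $0 < \delta < 1 - \tfrac{1}{q}$.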

List-decoding capacity

Theorem (List-Decoding Capacity). Let $q \geq 2$, $0 \leq p \leq 1 - \tfrac{1}{q}$, and $\epsilon \geq 0$. The following two statements hold for large enough block length $n$.
i) If $R \leq 1 - H_q(p) - \epsilon$, then there exists a $(p, O(1/\epsilon))$-list-decodable code.
ii) If $R \geq 1 - H_q(p) + \epsilon$, then every $(p, L)$-list-decodable code has $L = q^{\Omega(n)}$.
Here

$$H_q(p) = p\log_q(q-1) - p\log_q p - (1-p)\log_q(1-p)$$

is the $q$-ary entropy function, defined for $p \in (0, 1)$ and extended by continuity to $[0, 1]$.
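
The rate bound in the theorem is easy to evaluate numerically. A small illustrative helper (ours, not from the source) computing $H_q(p)$ and the capacity $1 - H_q(p)$:

```python
import math

def entropy_q(p: float, q: int) -> float:
    """q-ary entropy H_q(p) for 0 <= p < 1, with H_q(0) = 0."""
    if p == 0:
        return 0.0
    return (p * math.log(q - 1, q)
            - p * math.log(p, q)
            - (1 - p) * math.log(1 - p, q))

def list_decoding_capacity(p: float, q: int) -> float:
    """Largest achievable rate 1 - H_q(p) for error fraction p (theorem above)."""
    return 1 - entropy_q(p, q)

# Binary case: correcting a fraction p = 0.1 of adversarial errors.
print(list_decoding_capacity(0.1, 2))  # ~0.531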

What this means is that for rates approaching the channel capacity, there exist list-decodable codes with polynomial-sized lists, enabling efficient decoding algorithms, whereas for rates exceeding the channel capacity, the list size becomes exponential, which rules out the existence of efficient decoding algorithms.

The proof for list-decoding capacity is a significant one in that it exactly matches the capacity of a $q$-ary symmetric channel $qSC_p$. In fact, the term "list-decoding capacity" should actually be read as the capacity of an adversarial channel under list decoding. Also, the proof for list-decoding capacity is an important result that pinpoints the optimal trade-off between the rate of a code and the fraction of errors that can be corrected under list decoding.

Sketch of proof

The idea behind the proof is similar to that of Shannon's proof for the capacity of the binary symmetric channel $BSC_p$: a random code is picked and shown to be $(p, L)$-list-decodable with high probability as long as the rate $R \leq 1 - H_q(p) - \tfrac{1}{L}$. For rates exceeding this quantity, it can be shown that the list size $L$ becomes super-polynomially large.

A "bad" event is defined as one in which, given a received word   and   messages   it so happens that  , for every   where   is the fraction of errors that we wish to correct and   is the Hamming ball of radius   with the received word   as the center.

Now, the probability that a codeword $\mathcal{C}(m_i)$ associated with a fixed message $m_i \in [q]^k$ lies in a Hamming ball $B(y, pn)$ is given by

$$\Pr\left[\mathcal{C}(m_i) \in B(y, pn)\right] = \frac{\mathrm{Vol}_q(y, pn)}{q^n} \leq q^{-n(1 - H_q(p))},$$

where the quantity $\mathrm{Vol}_q(y, pn)$ is the volume of a Hamming ball of radius $pn$ with the received word $y$ as the center. The inequality above follows from the upper bound on the volume of a Hamming ball: the quantity $q^{H_q(p)n}$ gives a very good estimate on the volume of a Hamming ball of radius $pn$ centered on any word in $[q]^n$. Put another way, the volume of a Hamming ball is translation invariant. To continue with the proof sketch, we invoke the union bound in probability theory, which tells us that the probability of a bad event happening for a given $(y, m_0, \ldots, m_L)$ is upper bounded by $q^{-n(L+1)(1 - H_q(p))}$.

With the above in mind, the probability of "any" bad event happening can be shown to be less than $1$. To show this, we work our way over all possible received words $y \in [q]^n$ and every possible choice of $L + 1$ messages in $[q]^k$.
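
For concreteness, the counting behind this step runs as follows (a standard calculation, spelled out here for convenience): there are at most $q^n$ choices for $y$ and at most $q^{k(L+1)}$ choices for the $L + 1$ messages, so with $k = Rn$ the union bound gives

$$\Pr[\text{some bad event}] \leq q^{n} \cdot q^{Rn(L+1)} \cdot q^{-n(L+1)(1 - H_q(p))} = q^{n(L+1)\left(\frac{1}{L+1} + R - (1 - H_q(p))\right)},$$

which is less than $1$ (indeed exponentially small in $n$) whenever $R < 1 - H_q(p) - \tfrac{1}{L+1}$; in particular, the rate bound $R \leq 1 - H_q(p) - \tfrac{1}{L}$ quoted above suffices.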

Now turning to the proof of part (ii), we need to show that there are super-polynomially many codewords around every $y \in [q]^n$ when the rate exceeds the list-decoding capacity; that is, $|\mathcal{C} \cap B(y, pn)|$ is super-polynomially large whenever $R \geq 1 - H_q(p) + \epsilon$. Fix a codeword $c \in \mathcal{C}$. Now, for every $y \in [q]^n$ picked at random, we have

$$\Pr[c \in B(y, pn)] = \Pr[y \in B(c, pn)],$$

since Hamming balls are translation invariant. From the definition of the volume of a Hamming ball and the fact that $y$ is chosen uniformly at random from $[q]^n$, we also have

$$\Pr[c \in B(y, pn)] = \Pr[y \in B(c, pn)] = \frac{\mathrm{Vol}_q(y, pn)}{q^n} \geq q^{-n(1 - H_q(p)) - o(n)}.$$

Let us now define an indicator variable $X_c$ such that

$$X_c = \begin{cases} 1 & \text{if } c \in B(y, pn), \\ 0 & \text{otherwise.} \end{cases}$$

Taking the expectation of the number of codewords in the ball, we have

$$\begin{aligned} E\left[|\mathcal{C} \cap B(y, pn)|\right] &= \sum_{c \in \mathcal{C}} E[X_c] \\ &= \sum_{c \in \mathcal{C}} \Pr[X_c = 1] \\ &\geq \sum_{c \in \mathcal{C}} q^{-n(1 - H_q(p)) - o(n)} \\ &= q^{n(R - 1 + H_q(p) - o(1))} \\ &\geq q^{\Omega(n)}. \end{aligned}$$

The last inequality uses $R \geq 1 - H_q(p) + \epsilon$, so the exponent is at least $\epsilon n - o(n)$. Therefore, by the probabilistic method, we have shown that if the rate exceeds the list-decoding capacity, then the list size becomes super-polynomially large. This completes the proof sketch for the list-decoding capacity.

List decodability of Reed–Solomon codes

In 2023, building upon three seminal works,[1][2][3] coding theorists showed that, with high probability, Reed–Solomon codes defined over random evaluation points are list decodable up to the list-decoding capacity over linear-size alphabets.

List-decoding algorithms

In the period from 1995 to 2007, the coding theory community developed progressively more efficient list-decoding algorithms. Algorithms are known that decode Reed–Solomon codes up to the Johnson radius, which is $1 - \sqrt{1 - \delta}$, where $\delta$ is the normalised distance or relative distance. For Reed–Solomon codes, $\delta = 1 - R$, which means a fraction $1 - \sqrt{R}$ of errors can be corrected. Some of the most prominent list-decoding algorithms are the following (a numerical comparison of these decoding radii follows the list):

  • Sudan '95 – The first known non-trivial list-decoding algorithm for Reed–Solomon codes, developed by Madhu Sudan, which achieved efficient list decoding up to a fraction $1 - \sqrt{2R}$ of errors.
  • Guruswami–Sudan '98 – An improvement on the above algorithm, by Madhu Sudan and his then doctoral student Venkatesan Guruswami, for list decoding Reed–Solomon codes up to $1 - \sqrt{R}$ errors.
  • Parvaresh–Vardy '05 – In a breakthrough paper, Farzad Parvaresh and Alexander Vardy presented codes that can be list decoded beyond the $1 - \sqrt{R}$ radius for low rates $R$. Their codes are variants of Reed–Solomon codes obtained by evaluating $m \geq 1$ correlated polynomials instead of just one, as in the case of usual Reed–Solomon codes.
  • Guruswami–Rudra '06 – In yet another breakthrough, Venkatesan Guruswami and Atri Rudra gave explicit codes that achieve list-decoding capacity, that is, they can be list decoded up to the radius $1 - R - \epsilon$ for any $\epsilon > 0$. In other words, this is error correction with optimal redundancy. This answered a question that had been open for about 50 years. The work was invited to the Research Highlights section of the Communications of the ACM (which is "devoted to the most important research results published in Computer Science in recent years") and was mentioned in an article titled "Coding and Computing Join Forces" in the Sep 21, 2007 issue of Science. The codes they give are called folded Reed–Solomon codes, which are nothing but plain Reed–Solomon codes viewed as a code over a larger alphabet by careful bundling of codeword symbols.
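
The decoding radii above are easy to compare numerically. A small illustrative script (function and label names are ours) tabulating the unique-decoding radius $(1-R)/2$ alongside the list-decoding radii named in the list:

```python
import math

def radii(R):
    """Correctable error fractions for a rate-R Reed-Solomon-like code
    under the decoding regimes discussed above."""
    return {
        "unique decoding (1-R)/2": (1 - R) / 2,
        "Sudan '95: 1-sqrt(2R)": max(0.0, 1 - math.sqrt(2 * R)),
        "Guruswami-Sudan '98: 1-sqrt(R)": 1 - math.sqrt(R),
        "capacity: 1-R": 1 - R,
    }

for R in (0.1, 0.25, 0.5):
    print(f"R = {R}")
    for name, frac in radii(R).items():
        print(f"  {name:32} {frac:.3f}")
```

Running this shows, for example, that Sudan's radius only beats unique decoding at low rates, while the Guruswami–Sudan radius dominates unique decoding at every rate.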

Because of their ubiquity and the nice algebraic properties they possess, list-decoding algorithms for Reed–Solomon codes were a main focus of researchers. The list-decoding problem for Reed–Solomon codes can be formulated as follows:

Input: For an $[n, k+1]_q$ Reed–Solomon code, we are given the pairs $(\alpha_i, y_i)$ for $1 \leq i \leq n$, where $y_i$ is the $i$th symbol of the received word, the $\alpha_i$ are distinct points in the finite field $\mathbb{F}_q$, and an error parameter $e = n - t$.

Output: The goal is to find all polynomials $P(X) \in \mathbb{F}_q[X]$ of degree at most $k$ (so the message length is $k + 1$) such that $P(\alpha_i) = y_i$ for at least $t$ values of $i$. Here, we would like $t$ to be as small as possible so that a greater number of errors can be tolerated.
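
For intuition, this problem statement can be solved directly by exhaustive search, though only for toy parameters, since the search space has size $q^{k+1}$. A minimal sketch (our own illustrative code, assuming $q$ is prime so that arithmetic mod $q$ realizes $\mathbb{F}_q$):

```python
from itertools import product

def rs_list_decode_bruteforce(pairs, k, t, q):
    """Return all polynomials of degree <= k over F_q (q prime), given as
    coefficient tuples (c0, ..., ck), that agree with the received word
    (alpha_i, y_i) in at least t positions."""
    matches = []
    for coeffs in product(range(q), repeat=k + 1):
        agreements = sum(
            1 for alpha, y in pairs
            if sum(c * pow(alpha, j, q) for j, c in enumerate(coeffs)) % q == y
        )
        if agreements >= t:
            matches.append(coeffs)
    return matches

# Toy example over F_5: codeword from P(X) = 1 + 2X with one corrupted position.
received = [(0, 1), (1, 3), (2, 1), (3, 2), (4, 4)]  # y at alpha=2 corrupted (true value 0)
print(rs_list_decode_bruteforce(received, k=1, t=4, q=5))  # [(1, 2)], i.e. P(X) = 1 + 2X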

With the above formulation, the general structure of list-decoding algorithms for Reed-Solomon codes is as follows:

Step 1: (Interpolation) Find a non-zero bivariate polynomial $Q(X, Y)$ such that $Q(\alpha_i, y_i) = 0$ for $1 \leq i \leq n$.

Step 2: (Root finding/Factorization) Output all degree-$k$ polynomials $p(X)$ such that $Y - p(X)$ is a factor of $Q(X, Y)$, i.e. $Q(X, p(X)) = 0$. For each of these polynomials, check if $p(\alpha_i) = y_i$ for at least $t$ values of $i \in [n]$. If so, include such a polynomial $p(X)$ in the output list.

Given the fact that bivariate polynomials can be factored efficiently, the above algorithm runs in polynomial time.
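
To see why Step 1 is always feasible and where the decoding radius comes from, here is the standard degree-counting sketch behind Sudan's algorithm (constants suppressed; this is our summary, not a full analysis). Restrict $Q$ to $(1, k)$-weighted degree at most $D$, writing $Q(X, Y) = \sum_j Q_j(X)\,Y^j$ with $\deg Q_j \leq D - jk$. The number of free coefficients is then

$$\sum_{j=0}^{\lfloor D/k \rfloor} (D - jk + 1) \approx \frac{D^2}{2k},$$

so a non-zero interpolating $Q$ exists (as a solution of a homogeneous linear system) once $D^2/(2k) > n$, i.e. $D \approx \sqrt{2kn}$. If a polynomial $p(X)$ of degree at most $k$ satisfies $p(\alpha_i) = y_i$ for $t$ values of $i$, then the univariate polynomial $Q(X, p(X))$ has degree at most $D$ but at least $t$ roots, so it vanishes identically whenever $t > D$, forcing $Y - p(X)$ to divide $Q(X, Y)$. With $k \approx Rn$, the condition $t > \sqrt{2kn}$ translates to an agreement fraction $t/n > \sqrt{2R}$, i.e. a correctable error fraction of $1 - \sqrt{2R}$, matching the Sudan '95 entry above.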

Applications in complexity theory and cryptography

Algorithms developed for list decoding of several interesting code families have found interesting applications in computational complexity and the field of cryptography. The following is a sample list of applications outside of coding theory:

  • Construction of hard-core predicates from one-way permutations
  • Predicting witnesses for NP search problems
  • Amplifying hardness of Boolean functions
  • Average-case hardness of the permanent of random matrices
  • Extractors and pseudorandom generators
  • Efficient traitor tracing

References

  1. ^ Brakensiek, Joshua; Gopi, Sivakanth; Makam, Visu (2023-06-02). "Generic Reed-Solomon Codes Achieve List-Decoding Capacity". Proceedings of the 55th Annual ACM Symposium on Theory of Computing. STOC 2023. New York, NY, USA: Association for Computing Machinery. pp. 1488–1501. arXiv:2206.05256. doi:10.1145/3564246.3585128. ISBN 978-1-4503-9913-5.
  2. ^ Guo, Zeyu; Zhang, Zihan (2023-11-06). "Randomly Punctured Reed-Solomon Codes Achieve the List Decoding Capacity over Polynomial-Size Alphabets". 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS). FOCS 2023, Santa Cruz, CA, USA. IEEE. pp. 164–176. arXiv:2304.01403. doi:10.1109/FOCS57990.2023.00019. ISBN 979-8-3503-1894-4.
  3. ^ Alrabiah, Omar; Guruswami, Venkatesan; Li, Ray (2023). "Randomly punctured Reed–Solomon codes achieve list-decoding capacity over linear-sized fields". arXiv:2304.09445 [cs.IT].

External links

  • A Survey on list decoding by Madhu Sudan
  • Notes from a course taught by Madhu Sudan
  • Notes from a course taught by Luca Trevisan
  • Notes from a course taught by Venkatesan Guruswami
  • Notes from a course taught by Atri Rudra
  • P. Elias, "List decoding for noisy channels," Technical Report 335, Research Laboratory of Electronics, MIT, 1957.
  • P. Elias, "Error-correcting codes for list decoding," IEEE Transactions on Information Theory, vol. 37, pp. 5–12, 1991.
  • J. M. Wozencraft, "List decoding," Quarterly Progress Report, Research Laboratory of Electronics, MIT, vol. 48, pp. 90–95, 1958.
  • Venkatesan Guruswami's PhD thesis
  • Algorithmic Results in List Decoding
  • Folded Reed–Solomon code
