Discrete Universal Denoiser

In information theory and signal processing, the Discrete Universal Denoiser (DUDE) is a denoising scheme for recovering sequences over a finite alphabet that have been corrupted by a discrete memoryless channel. The DUDE was proposed in 2005 by Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdú and Marcelo J. Weinberger.


Overview

The Discrete Universal Denoiser (DUDE) is a denoising scheme that estimates an unknown signal $x^n = (x_1, \ldots, x_n)$ over a finite alphabet from a noisy version $z^n = (z_1, \ldots, z_n)$. While most denoising schemes in the signal processing and statistics literature deal with signals over an infinite alphabet (notably, real-valued signals), the DUDE addresses the finite alphabet case. The noisy version $z^n$ is assumed to be generated by transmitting $x^n$ through a known discrete memoryless channel.

For a fixed context length parameter $k$, the DUDE counts the occurrences of all the strings of length $2k+1$ appearing in $z^n$. The estimated value $\hat{x}_i$ is determined based on the two-sided length-$k$ context $(z_{i-k}, \ldots, z_{i-1}, z_{i+1}, \ldots, z_{i+k})$ of $z_i$, taking into account all the other tokens in $z^n$ with the same context, as well as the known channel matrix and the loss function being used.

The idea underlying the DUDE is best illustrated when $x^n$ is a realization of a random vector $X^n$. If the conditional distribution of $X_i$ given $Z_{i-k}, \ldots, Z_{i-1}, Z_{i+1}, \ldots, Z_{i+k}$, namely the distribution of the noiseless symbol $X_i$ conditional on its noisy context $(Z_{i-k}, \ldots, Z_{i-1}, Z_{i+1}, \ldots, Z_{i+k})$, were available, the optimal estimator $\hat{X}_i$ would be the Bayes response to this conditional distribution. Fortunately, when the channel matrix is known and non-degenerate, this conditional distribution can be expressed in terms of the conditional distribution of $Z_i$ given its noisy context, namely the distribution of the noisy symbol $Z_i$ conditional on the surrounding noisy symbols. This conditional distribution, in turn, can be estimated from an individual observed noisy signal $Z^n$ by virtue of the law of large numbers, provided $n$ is "large enough".

Applying the DUDE scheme with a context length $k$ to a sequence of length $n$ over a finite alphabet $\mathcal{Z}$ requires $O(n)$ operations and space $O\big(\min(n, |\mathcal{Z}|^{2k})\big)$.

Under certain assumptions, the DUDE is a universal scheme in the sense of asymptotically performing as well as an optimal denoiser that has oracle access to the unknown sequence. More specifically, assume that the denoising performance is measured using a given single-symbol fidelity criterion, and consider the regime where the sequence length $n$ tends to infinity and the context length $k = k_n$ tends to infinity "not too fast". In the stochastic setting, where a doubly infinite noiseless sequence $x$ is a realization of a stationary process $\mathbf{X}$, the DUDE asymptotically performs, in expectation, as well as the best denoiser with oracle access to the distribution of $\mathbf{X}$. In the single-sequence, or "semi-stochastic", setting with a fixed doubly infinite sequence $x$, the DUDE asymptotically performs as well as the best "sliding window" denoiser with oracle access to $x$, namely any denoiser that determines $\hat{x}_i$ from the window $(z_{i-k}, \ldots, z_{i+k})$.

The discrete denoising problem

Let $\mathcal{X}$ be the finite alphabet of a fixed but unknown original "noiseless" sequence $x^n = (x_1, \ldots, x_n) \in \mathcal{X}^n$. The sequence is fed into a discrete memoryless channel (DMC). The DMC operates on each symbol $x_i$ independently, producing a corresponding random symbol $Z_i$ in a finite alphabet $\mathcal{Z}$. The DMC is known and given as an $|\mathcal{X}|$-by-$|\mathcal{Z}|$ Markov matrix $\Pi$, whose entries are $\pi(x, z) = \mathbb{P}(Z = z \mid X = x)$. It is convenient to write $\pi_z$ for the $z$-column of $\Pi$. The DMC produces a random noisy sequence $Z^n = (Z_1, \ldots, Z_n) \in \mathcal{Z}^n$. A specific realization of this random vector will be denoted by $z^n$. A denoiser is a function $\hat{X}^n : \mathcal{Z}^n \to \mathcal{X}^n$ that attempts to recover the noiseless sequence $x^n$ from a distorted version $z^n$. A specific denoised sequence is denoted by $\hat{x}^n = \hat{X}^n(z^n) = \big(\hat{X}_1(z^n), \ldots, \hat{X}_n(z^n)\big)$. The problem of choosing the denoiser $\hat{X}^n$ is known as signal estimation, filtering, or smoothing. To compare candidate denoisers, we choose a single-symbol fidelity criterion $\Lambda : \mathcal{X} \times \mathcal{X} \to [0, \infty)$ (for example, the Hamming loss) and define the per-symbol loss of the denoiser $\hat{X}^n$ at $(x^n, z^n)$ by

$$L_{\hat{X}^n}(x^n, z^n) = \frac{1}{n} \sum_{i=1}^{n} \Lambda\big(x_i, \hat{X}_i(z^n)\big).$$

Ordering the elements of the alphabet $\mathcal{X}$ as $\mathcal{X} = (a_1, \ldots, a_{|\mathcal{X}|})$, the fidelity criterion can be given by an $|\mathcal{X}|$-by-$|\mathcal{X}|$ matrix, with columns of the form

$$\lambda_{\hat{x}} = \begin{pmatrix} \Lambda(a_1, \hat{x}) \\ \vdots \\ \Lambda(a_{|\mathcal{X}|}, \hat{x}) \end{pmatrix}.$$
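
To make the setup concrete, here is a minimal sketch in Python that simulates the problem for a binary alphabet, a binary symmetric channel, and the Hamming loss; the crossover probability, the sequence length, and all variable names are illustrative choices rather than part of the formulation above. Later sketches in this article reuse these names.

    import numpy as np

    rng = np.random.default_rng(0)

    delta = 0.1                                  # assumed channel crossover probability
    PI = np.array([[1 - delta, delta],           # channel matrix Pi: rows indexed by x,
                   [delta, 1 - delta]])          # columns indexed by z
    LAMBDA = np.array([[0.0, 1.0],               # Hamming loss matrix Lambda(x, x_hat)
                       [1.0, 0.0]])

    n = 10_000
    x = rng.integers(0, 2, size=n)               # unknown noiseless sequence x^n
    # Feed x^n through the DMC: each Z_i is drawn independently from the row pi(x_i, .)
    z = np.array([rng.choice(2, p=PI[xi]) for xi in x])

    def per_symbol_loss(x, x_hat):
        """Per-symbol loss: the average of Lambda(x_i, x_hat_i) over the sequence."""
        return LAMBDA[x, x_hat].mean()

    # The identity map ("say what you see") is itself a denoiser; its loss is about delta.
    print(per_symbol_loss(x, z))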

Step 1: Calculating the empirical distribution in each context

The DUDE corrects symbols according to their context. The context length $k$ used is a tuning parameter of the scheme. For $k+1 \le i \le n-k$, define the left context of the $i$-th symbol in $z^n$ by $l^k(z^n, i) = (z_{i-k}, \ldots, z_{i-1})$ and the corresponding right context as $r^k(z^n, i) = (z_{i+1}, \ldots, z_{i+k})$. A two-sided context is a combination $(l^k, r^k)$ of a left and a right context.

The first step of the DUDE scheme is to calculate the empirical distribution of symbols in each possible two-sided context along the noisy sequence $z^n$. Formally, a given two-sided context $(l^k, r^k) \in \mathcal{Z}^k \times \mathcal{Z}^k$ that appears once or more along $z^n$ determines an empirical probability distribution over $\mathcal{Z}$, whose value at the symbol $z$ is

$$\mu(z^n, l^k, r^k)[z] = \frac{\big|\{\, k+1 \le i \le n-k \,:\, (z_{i-k}, \ldots, z_{i+k}) = l^k z r^k \,\}\big|}{\big|\{\, k+1 \le i \le n-k \,:\, l^k(z^n, i) = l^k \text{ and } r^k(z^n, i) = r^k \,\}\big|}.$$

Thus, the first step of the DUDE scheme with context length $k$ is to scan the input noisy sequence $z^n$ once, and store the length-$|\mathcal{Z}|$ empirical distribution vector $\mu(z^n, l^k, r^k)$ (or its non-normalized version, the count vector) for each two-sided context found along $z^n$. Since there are at most $N_{n,k} = \min(n, |\mathcal{Z}|^{2k})$ possible two-sided contexts along $z^n$, this step requires $O(n)$ operations and storage $O(N_{n,k})$.
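
The counting pass might be implemented as in the following sketch, continuing the toy Python setup above; the function name and the dictionary-of-count-vectors representation are our own choices.

    from collections import defaultdict
    import numpy as np

    def context_counts(z, k, alphabet_size):
        """Scan z once and return {(left context, right context): count vector over Z}."""
        counts = defaultdict(lambda: np.zeros(alphabet_size))
        for i in range(k, len(z) - k):           # positions with a full two-sided context
            left = tuple(z[i - k:i])             # l^k(z^n, i)
            right = tuple(z[i + 1:i + k + 1])    # r^k(z^n, i)
            counts[(left, right)][z[i]] += 1     # one more occurrence of z_i in this context
        return counts

Normalizing each count vector by its sum yields the empirical distribution $\mu(z^n, l^k, r^k)$; since the Bayes response used in Step 2 is scale invariant, the raw counts can also be used directly.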

Step 2: Calculating the Bayes response to each context

Denote the column of the single-symbol fidelity criterion $\Lambda$ corresponding to the symbol $\hat{x} \in \mathcal{X}$ by $\lambda_{\hat{x}}$. We define the Bayes response to any vector $v$ of length $|\mathcal{X}|$ with non-negative entries as

$$\hat{X}_{\mathrm{Bayes}}(v) = \operatorname*{argmin}_{\hat{x} \in \mathcal{X}} \lambda_{\hat{x}}^\top v.$$

This definition is motivated in the background below.
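
In code, this definition is essentially a one-liner; the following sketch assumes the loss matrix is laid out as above, with rows indexed by the clean symbol $x$ and columns by the estimate $\hat{x}$.

    import numpy as np

    def bayes_response(v, LAMBDA):
        """argmin over x_hat of lambda_{x_hat}^T v, for a non-negative vector v over X."""
        # Column x_hat of LAMBDA is lambda_{x_hat}, so LAMBDA.T @ v lists the expected
        # loss of every candidate estimate; return the index of the smallest one.
        return int(np.argmin(LAMBDA.T @ v))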

The second step of the DUDE scheme is to calculate, for each two-sided context $(l^k, r^k)$ observed in the previous step along $z^n$, and for each symbol $z \in \mathcal{Z}$ observed in each context (namely, any $z$ such that $l^k z r^k$ is a substring of $z^n$), the Bayes response to the vector $\Pi^{-\top} \mu(z^n, l^k, r^k) \odot \pi_z$, namely

$$g(l^k, z, r^k) := \hat{X}_{\mathrm{Bayes}}\big( \Pi^{-\top} \mu(z^n, l^k, r^k) \odot \pi_z \big).$$

Note that the sequence $z^n$ and the context length $k$ are implicit. Here, $\pi_z$ is the $z$-column of $\Pi$ and, for vectors $a$ and $b$, $a \odot b$ denotes their Schur (entrywise) product, defined by $(a \odot b)_i = a_i b_i$. Matrix multiplication is evaluated before the Schur product, so that $\Pi^{-\top} \mu \odot \pi_z$ stands for $(\Pi^{-\top} \mu) \odot \pi_z$.

This formula assumes that the channel matrix $\Pi$ is square ($|\mathcal{X}| = |\mathcal{Z}|$) and invertible. When $|\mathcal{X}| < |\mathcal{Z}|$ and $\Pi$ is not invertible, under the reasonable assumption that it has full row rank, we replace $(\Pi^\top)^{-1}$ above with its Moore–Penrose pseudo-inverse $(\Pi \Pi^\top)^{-1} \Pi$ and calculate instead

$$g(l^k, z, r^k) := \hat{X}_{\mathrm{Bayes}}\big( (\Pi \Pi^\top)^{-1} \Pi \, \mu(z^n, l^k, r^k) \odot \pi_z \big).$$

By caching the inverse or pseudo-inverse of $\Pi^\top$, and the values $\lambda_{\hat{x}} \odot \pi_z$ for the relevant pairs $(\hat{x}, z) \in \mathcal{X} \times \mathcal{Z}$, this step requires $O(N_{n,k})$ operations and $O(N_{n,k})$ storage.
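
A sketch of this step follows, reusing the hypothetical `context_counts` and `bayes_response` helpers from the previous sketches; `numpy.linalg.pinv` applied to $\Pi^\top$ covers both the square invertible case and the full-row-rank case.

    import numpy as np

    def dude_rules(counts, PI, LAMBDA):
        """Tabulate g(l^k, z, r^k) for every observed context and every symbol seen in it."""
        # pinv(PI.T) is the ordinary inverse of PI.T when PI is square and invertible,
        # and the Moore-Penrose pseudo-inverse (PI PI^T)^{-1} PI when PI has full row rank.
        PI_inv_T = np.linalg.pinv(PI.T)
        g = {}
        for (left, right), mu in counts.items():
            premultiplied = PI_inv_T @ mu            # Pi^{-T} mu: an unnormalized vector over X
            for z in np.flatnonzero(mu):             # only symbols that occur in this context
                v = premultiplied * PI[:, z]         # Schur product with the column pi_z
                g[(left, int(z), right)] = bayes_response(v, LAMBDA)
        return g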

Step 3: Estimating each symbol by the Bayes response to its context

The third and final step of the DUDE scheme is to scan $z^n$ again and compute the actual denoised sequence $\hat{X}^n(z^n) = \big(\hat{X}_1(z^n), \ldots, \hat{X}_n(z^n)\big)$. The denoised symbol chosen to replace $z_i$ is the Bayes response to the two-sided context of the symbol, namely

$$\hat{X}_i(z^n) := g\big( l^k(z^n, i), \, z_i, \, r^k(z^n, i) \big).$$

This step requires $O(n)$ operations and uses the data structure constructed in the previous step.
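
A sketch of the final pass, built on the previous sketches; positions closer than $k$ symbols to either end of the sequence have no full two-sided context and are simply copied here, a convention of this sketch that does not affect the asymptotics.

    import numpy as np

    def dude_denoise(z, k, g):
        """Replace each interior z_i by the Bayes response g(l^k, z_i, r^k) to its context."""
        x_hat = np.array(z, copy=True)               # boundary symbols are left unchanged
        for i in range(k, len(z) - k):
            left = tuple(z[i - k:i])
            right = tuple(z[i + 1:i + k + 1])
            x_hat[i] = g[(left, int(z[i]), right)]
        return x_hat

Chaining the three sketches on the simulated binary example above runs the whole scheme end to end, e.g. `dude_denoise(z, 2, dude_rules(context_counts(z, 2, 2), PI, LAMBDA))` for $k = 2$.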

In summary, the entire DUDE requires $O(n)$ operations and $O(N_{n,k})$ storage.

Asymptotic optimality properties

The DUDE is designed to be universally optimal, namely optimal (in some sense, under some assumptions) regardless of the original sequence $x^n$.

Let $\hat{X}^n_{\mathrm{DUDE}} : \mathcal{Z}^n \to \mathcal{X}^n$ denote a sequence of DUDE schemes, as described above, where $\hat{X}^n_{\mathrm{DUDE}}$ uses a context length $k_n$ that is implicit in the notation. We only require that $\lim_{n \to \infty} k_n = \infty$ and that $k_n |\mathcal{Z}|^{2 k_n} = o\big( n / \log n \big)$.

For a stationary source

Denote by $D_n$ the set of all $n$-block denoisers, namely all maps $\hat{X}^n : \mathcal{Z}^n \to \mathcal{X}^n$.

Let $\mathbf{X}$ be an unknown stationary source and $\mathbf{Z}$ the corresponding noisy sequence. Then

$$\lim_{n \to \infty} \mathbb{E}\big[ L_{\hat{X}^n_{\mathrm{DUDE}}}(X^n, Z^n) \big] = \lim_{n \to \infty} \min_{\hat{X}^n \in D_n} \mathbb{E}\big[ L_{\hat{X}^n}(X^n, Z^n) \big],$$

and both limits exist. If, in addition, the source $\mathbf{X}$ is ergodic, then

$$\limsup_{n \to \infty} L_{\hat{X}^n_{\mathrm{DUDE}}}(X^n, Z^n) = \lim_{n \to \infty} \min_{\hat{X}^n \in D_n} \mathbb{E}\big[ L_{\hat{X}^n}(X^n, Z^n) \big], \quad \text{almost surely}.$$

For an individual sequence

Denote by $D_{n,k}$ the set of all $n$-block $k$-th order sliding window denoisers, namely all maps $\hat{X}^n : \mathcal{Z}^n \to \mathcal{X}^n$ of the form $\hat{X}_i(z^n) = f(z_{i-k}, \ldots, z_{i+k})$ with $f : \mathcal{Z}^{2k+1} \to \mathcal{X}$ arbitrary.

Let $x$ be an unknown, fixed, noiseless (doubly infinite) sequence and $\mathbf{Z}$ the corresponding noisy sequence, whose randomness is due to the channel alone. Then

$$\lim_{n \to \infty} \Big[ L_{\hat{X}^n_{\mathrm{DUDE}}}(x^n, Z^n) - \min_{\hat{X}^n \in D_{n, k_n}} L_{\hat{X}^n}(x^n, Z^n) \Big] = 0, \quad \text{almost surely}.$$

Non-asymptotic performance

Let $\hat{X}^n_k$ denote the DUDE with context length $k$ defined on $n$-blocks. Then there exist explicit constants $A, C > 0$ and $B > 1$, depending on $(\Pi, \Lambda)$ alone, such that for any $n$, $k$ and any $x^n \in \mathcal{X}^n$ we have

$$\frac{A}{\sqrt{n}} B^k \;\le\; \mathbb{E}\Big[ L_{\hat{X}^n_k}(x^n, Z^n) - \min_{\hat{X}^n \in D_{n,k}} L_{\hat{X}^n}(x^n, Z^n) \Big] \;\le\; \frac{k C}{\sqrt{n}} |\mathcal{Z}|^k,$$

where $Z^n$ is the noisy sequence corresponding to $x^n$ (whose randomness is due to the channel alone).

In fact, the lower bound holds with the same constants $A$, $B$ as above when the DUDE is replaced by any $n$-block denoiser $\hat{X}^n \in D_n$. The lower bound proof requires that the channel matrix $\Pi$ be square and that the pair $(\Pi, \Lambda)$ satisfy a certain technical condition.

Background

To motivate the particular definition of the DUDE via the Bayes response to a particular vector, we now find the optimal denoiser in the non-universal case, where the unknown sequence $x^n$ is a realization of a random vector $X^n$ whose distribution is known.

Consider first the case $n = 1$. Since the joint distribution of $(X, Z)$ is known, given the observed noisy symbol $z$, the unknown symbol $X \in \mathcal{X}$ is distributed according to the known distribution $\mathbb{P}(X = x \mid Z = z)$. By ordering the elements of $\mathcal{X}$, we can describe this conditional distribution on $\mathcal{X}$ using a probability vector $P_{X|z}$, indexed by $\mathcal{X}$, whose $x$-entry is $\mathbb{P}(X = x \mid Z = z)$. Clearly, the expected loss for the choice of estimated symbol $\hat{x}$ is $\lambda_{\hat{x}}^\top P_{X|z}$.

Define the Bayes envelope of a probability vector $v$, describing a probability distribution on $\mathcal{X}$, as the minimal expected loss $U(v) = \min_{\hat{x} \in \mathcal{X}} v^\top \lambda_{\hat{x}}$, and the Bayes response to $v$ as the prediction that achieves this minimum, $\hat{X}_{\mathrm{Bayes}}(v) = \operatorname*{argmin}_{\hat{x} \in \mathcal{X}} v^\top \lambda_{\hat{x}}$. Observe that the Bayes response is scale invariant, in the sense that $\hat{X}_{\mathrm{Bayes}}(v) = \hat{X}_{\mathrm{Bayes}}(\alpha v)$ for $\alpha > 0$.

For the case $n = 1$, then, the optimal denoiser is $\hat{X}(z) = \hat{X}_{\mathrm{Bayes}}(P_{X|z})$. This optimal denoiser can be expressed using the marginal distribution of $Z$ alone, as follows. When the channel matrix $\Pi$ is invertible, we have $P_{X|z} \propto \Pi^{-\top} P_Z \odot \pi_z$, where $\pi_z$ is the $z$-th column of $\Pi$. By the scale invariance of the Bayes response, this implies that the optimal denoiser is given equivalently by $\hat{X}(z) = \hat{X}_{\mathrm{Bayes}}\big( \Pi^{-\top} P_Z \odot \pi_z \big)$. When $|\mathcal{X}| < |\mathcal{Z}|$ and $\Pi$ is not invertible, under the reasonable assumption that it has full row rank, we can replace $(\Pi^\top)^{-1}$ with its Moore–Penrose pseudo-inverse and obtain

$$\hat{X}(z) = \hat{X}_{\mathrm{Bayes}}\big( (\Pi \Pi^\top)^{-1} \Pi \, P_Z \odot \pi_z \big).$$
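
As a concrete illustration (the channel, the prior, and the numbers are ours, chosen only for the example): for a binary symmetric channel and the Hamming loss, the rule above reduces to the maximum a posteriori choice, which the following short sketch computes explicitly.

    import numpy as np

    delta = 0.1
    PI = np.array([[1 - delta, delta],
                   [delta, 1 - delta]])              # binary symmetric channel
    LAMBDA = np.array([[0.0, 1.0],
                       [1.0, 0.0]])                  # Hamming loss
    P_X = np.array([0.7, 0.3])                       # assumed prior on the clean symbol
    P_Z = PI.T @ P_X                                 # induced marginal of the noisy symbol

    for z in (0, 1):
        v = (np.linalg.inv(PI.T) @ P_Z) * PI[:, z]   # proportional to P_{X|z}
        x_hat = int(np.argmin(LAMBDA.T @ v))         # Bayes response
        print(f"z = {z} -> x_hat = {x_hat}")

Here `np.linalg.inv(PI.T) @ P_Z` recovers the prior $P_X$, so $v \propto P_X \odot \pi_z \propto P_{X|z}$, and with the Hamming loss the Bayes response is simply the most probable clean symbol.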

Turning now to arbitrary $n$, the optimal denoiser $\hat{X}^{\mathrm{opt}}(z^n)$ (with minimal expected loss) is therefore given symbol by symbol by the Bayes response to $P_{X_i \mid z^n}$,

$$\hat{X}^{\mathrm{opt}}_i(z^n) = \hat{X}_{\mathrm{Bayes}}\big( P_{X_i \mid z^n} \big) = \operatorname*{argmin}_{\hat{x} \in \mathcal{X}} \lambda_{\hat{x}}^\top P_{X_i \mid z^n},$$

where $P_{X_i \mid z^n}$ is a vector indexed by $\mathcal{X}$, whose $x$-entry is $\mathbb{P}(X_i = x \mid Z^n = z^n)$. The conditional probability vector $P_{X_i \mid z^n}$ is hard to compute. A derivation analogous to the case $n = 1$ above shows that the optimal denoiser admits an alternative representation, namely $\hat{X}^{\mathrm{opt}}_i(z^n) = \hat{X}_{\mathrm{Bayes}}\big( \Pi^{-\top} P_{Z_i, z^{n \setminus i}} \odot \pi_{z_i} \big)$, where $z^{n \setminus i} = (z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_n) \in \mathcal{Z}^{n-1}$ is a given vector and $P_{Z_i, z^{n \setminus i}}$ is the probability vector indexed by $\mathcal{Z}$ whose $z$-entry is $\mathbb{P}\big( (Z_1, \ldots, Z_n) = (z_1, \ldots, z_{i-1}, z, z_{i+1}, \ldots, z_n) \big)$. Again, $(\Pi^\top)^{-1}$ is replaced by a pseudo-inverse if $\Pi$ is not square or not invertible.

When the distribution of $X^n$ (and therefore of $Z^n$) is not available, the DUDE replaces the unknown vector $P_{Z_i, z^{n \setminus i}}$ with an empirical estimate obtained along the noisy sequence $z^n$ itself, namely with $\mu\big( z^n, l^k(z^n, i), r^k(z^n, i) \big)$. This leads to the above definition of the DUDE.

While the convergence arguments behind the optimality properties above are more subtle, we note that the above, combined with the Birkhoff ergodic theorem, is enough to prove that for a stationary ergodic source, the DUDE with context length $k$ is asymptotically optimal among all $k$-th order sliding window denoisers.

Extensions

The basic DUDE as described here assumes a signal with a one-dimensional index set over a finite alphabet, a known memoryless channel and a context length that is fixed in advance. Relaxations of each of these assumptions have been considered in turn. Specifically:

  • Infinite alphabets
  • Channels with memory
  • Unknown channel matrix
  • Variable context and adaptive choice of context length
  • Two-dimensional signals

Application to image denoising

A DUDE-based framework for grayscale image denoising achieves state-of-the-art denoising for impulse-type noise channels (e.g., "salt and pepper" or "M-ary symmetric" noise), and good performance on the Gaussian channel (comparable to the non-local means image denoising scheme on this channel). A different DUDE variant applicable to grayscale images has also been proposed.

Application to channel decoding of uncompressed sources

The DUDE has led to universal algorithms for channel decoding of uncompressed sources.
