
Compressed sensing in speech signals


Compressed sensing (CS) can be used to reconstruct a sparse vector from a small number of measurements, provided the signal can be represented in a sparse domain. A sparse domain is a domain in which only a few coefficients have non-zero values. If a signal x ∈ ℝ^N can be represented in a domain where only M coefficients out of N (with M ≪ N) are non-zero, the signal is said to be sparse in that domain. The reconstructed sparse vector can then be used to recover the original signal, provided the sparse domain of the signal is known. CS can therefore be applied to a speech signal only if a sparsifying domain for speech is known.

Consider a speech signal x that can be represented in a domain Ψ such that x = Ψα, where the speech signal x ∈ ℝ^N, the dictionary matrix Ψ ∈ ℝ^(N×N) and the sparse coefficient vector α ∈ ℝ^N. The speech signal is said to be sparse in the domain Ψ if the number of significant (non-zero) coefficients in the sparse vector α is K, with K ≪ N.
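A minimal sketch of this representation, assuming a DCT basis as the dictionary Ψ (the article does not fix a particular dictionary; the DCT is a common choice for short speech frames), and a synthetic frame standing in for real speech:

```python
import numpy as np
from scipy.fftpack import idct

N = 256                                   # frame length (illustrative)
# Psi: N x N inverse-DCT matrix, so that x = Psi @ alpha
Psi = idct(np.eye(N), norm='ortho', axis=0)

# Synthetic "speech-like" frame: a few harmonics plus weak noise,
# used here only as a stand-in for a real speech frame.
n = np.arange(N)
x = (np.sin(2 * np.pi * 0.03 * n)
     + 0.5 * np.sin(2 * np.pi * 0.06 * n)
     + 0.01 * np.random.randn(N))

# alpha = Psi^{-1} x; for an orthonormal DCT dictionary, Psi^{-1} = Psi^T
alpha = Psi.T @ x

# Count how many coefficients carry 99% of the energy: a rough sparsity K
energy = np.sort(alpha**2)[::-1]
K = np.searchsorted(np.cumsum(energy), 0.99 * energy.sum()) + 1
print(f"approximately {K} of {N} coefficients carry 99% of the energy")
```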

The observed signal x has dimension N × 1. To reduce the complexity of solving for α, the speech signal is observed through a measurement matrix Φ such that

y = Φx = ΦΨα,      (1)

where y ∈ ℝ^M and the measurement matrix Φ ∈ ℝ^(M×N) with M ≪ N.
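A minimal sketch of the measurement step in eq. 1, assuming a Gaussian random measurement matrix Φ (one common choice); the frame length, number of measurements and test frame are illustrative:

```python
import numpy as np

N, M = 256, 64                    # M << N: keep only 64 measurements
rng = np.random.default_rng(0)

x = rng.standard_normal(N)        # stand-in for a length-N speech frame
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # M x N Gaussian matrix

y = Phi @ x                       # compressed observation, y in R^M
print(y.shape)                    # (64,)
```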

The sparse decomposition problem for eq. 1 can be solved as a standard ℓ1 minimization:

α̂ = argmin ‖α‖₁ subject to y = ΦΨα.

If the measurement matrix Φ satisfies the restricted isometry property (RIP) and is incoherent with the dictionary matrix Ψ, then the reconstructed signal is a close approximation of the original speech signal.
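A sketch of this ℓ1 minimization (basis pursuit), written as a linear program and solved with scipy.optimize.linprog. The dimensions, the identity dictionary and the synthetic K-sparse test vector are assumptions made only for the demonstration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 128, 48, 5

# K-sparse ground-truth coefficient vector alpha
alpha_true = np.zeros(N)
support = rng.choice(N, K, replace=False)
alpha_true[support] = rng.standard_normal(K)

Psi = np.eye(N)                                  # identity dictionary for the demo
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
A = Phi @ Psi
y = A @ alpha_true                               # compressed measurements

# Basis pursuit as an LP: write alpha = u - v with u, v >= 0,
# then minimise sum(u + v) subject to A (u - v) = y
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
alpha_hat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(alpha_hat - alpha_true))
```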

Different types of measurement matrices, such as random matrices, can be used for speech signals. Estimating the sparsity of a speech signal is difficult because speech varies strongly over time, so its sparsity also varies strongly over time. Ideally, the sparsity would be tracked over time without adding much complexity; if that is not feasible, a worst-case sparsity can be assumed for the given speech signal, as sketched below.
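A rough sketch of such a worst-case estimate: frame the speech signal, examine its DCT coefficients frame by frame, and take the largest per-frame sparsity as K. The DCT basis, the 99% energy threshold, the frame length and the synthetic test signal are all assumptions for illustration:

```python
import numpy as np
from scipy.fftpack import dct

def worst_case_sparsity(speech, frame_len=256, energy_frac=0.99):
    """Largest number of DCT coefficients needed per frame to keep
    energy_frac of that frame's energy."""
    worst = 0
    for i in range(len(speech) // frame_len):
        frame = speech[i * frame_len:(i + 1) * frame_len]
        coeffs = dct(frame, norm='ortho')
        energy = np.sort(coeffs**2)[::-1]
        k = np.searchsorted(np.cumsum(energy), energy_frac * energy.sum()) + 1
        worst = max(worst, k)
    return worst

# Example with a synthetic stand-in for a speech signal
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 0.02 * np.arange(4096)) + 0.05 * rng.standard_normal(4096)
print("worst-case K over all frames:", worst_case_sparsity(speech))
```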

The sparse vector α̂ for a given speech signal is reconstructed from a small number of measurements y using ℓ1 minimization. The original speech signal is then reconstructed from the estimated sparse vector α̂ using the fixed dictionary matrix Ψ as x̂ = Ψα̂.

Estimation of both the dictionary matrix and the sparse vector from only random measurements has been carried out iteratively. The speech signal reconstructed from the estimated sparse vector and dictionary matrix is close to the original signal. Further iterative approaches that estimate both the dictionary matrix and the speech signal from only random measurements of the speech signal have also been reported. The application of structured sparsity for joint speech localization-separation in reverberant acoustics has been investigated for multiparty speech recognition. Further applications of the concept of sparsity remain to be studied in the field of speech processing. The broader idea behind CS for speech signals is to develop algorithms and methods that use only the random measurements y for application-oriented processing such as speaker recognition and speech enhancement.
