Two-dimensional singular value decomposition


Two-dimensional singular value decomposition (2DSVD) computes the low-rank approximation of a set of matrices, such as 2D images or weather maps, in a manner almost identical to SVD (singular value decomposition), which computes the low-rank approximation of a single matrix (or a set of 1D vectors).

SVD

Let the matrix X = (x_1, ..., x_n) contain the set of 1D vectors, which have been centered. In PCA/SVD, we construct the covariance matrix F and the Gram matrix G:

F = X X^T,   G = X^T X,

and compute their eigenvectors U = (u_1, ..., u_n) and V = (v_1, ..., v_n). Since V V^T = I and U U^T = I, we have

X = U U^T X V V^T = U (U^T X V) V^T = U Σ V^T.

If we retain only the K principal eigenvectors in U and V, this gives a low-rank approximation of X.
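The construction above can be sketched in NumPy. This is a minimal illustration with made-up data (the variable names and sizes are my own choices, not from the source); it builds U and V from the eigenvectors of F and G, forms the rank-K approximation, and checks it against the truncated SVD returned by `np.linalg.svd`:

```python
import numpy as np

# Toy data: n = 8 centered 1D vectors stored as the columns of a 5 x 8 matrix X.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
X -= X.mean(axis=1, keepdims=True)  # center: the columns now sum to zero

# Covariance matrix F and Gram matrix G.
F = X @ X.T   # 5 x 5
G = X.T @ X   # 8 x 8

# Eigenvectors of F and G, sorted by decreasing eigenvalue.
ef, U = np.linalg.eigh(F)
U = U[:, np.argsort(ef)[::-1]]
eg, V = np.linalg.eigh(G)
V = V[:, np.argsort(eg)[::-1]]

# Retain the K principal eigenvectors: X ~ U_K (U_K^T X V_K) V_K^T.
K = 2
Uk, Vk = U[:, :K], V[:, :K]
X_k = Uk @ (Uk.T @ X @ Vk) @ Vk.T

# The same rank-K approximation via the SVD of X directly.
u, s, vt = np.linalg.svd(X)
X_svd = u[:, :K] @ np.diag(s[:K]) @ vt[:K, :]
print(np.allclose(X_k, X_svd))
```

The two routes agree (up to numerical precision) because projecting X onto its top-K left and right singular subspaces reproduces the truncated SVD, provided the K-th and (K+1)-th singular values are distinct.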

2DSVD

Here we deal with a set of 2D matrices (X_1, ..., X_n). Suppose they are centered, Σ_i X_i = 0. We construct the row–row and column–column covariance matrices

F = Σ_i X_i X_i^T,   G = Σ_i X_i^T X_i

in exactly the same manner as in SVD, and compute their eigenvectors U and V. We approximate each X_i as

X_i ≈ U U^T X_i V V^T = U (U^T X_i V) V^T = U M_i V^T

in identical fashion as in SVD. This gives a near-optimal low-rank approximation of (X_1, ..., X_n) with the objective function

J = Σ_i ‖X_i − U M_i V^T‖²

Error bounds similar to those of the Eckart–Young theorem also exist.

2DSVD is mostly used in image compression and representation.
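The 2DSVD construction can be sketched in NumPy as follows. This is an illustrative example on random data (array names, sizes, and the number of retained factors are assumptions of this sketch): it accumulates F and G over the set, keeps the top eigenvectors, stores each X_i as its small core M_i = U^T X_i V, and evaluates the objective J:

```python
import numpy as np

# A set of n = 4 small 2D "images" X_1..X_4, each 6 x 7,
# centered so that sum_i X_i = 0.
rng = np.random.default_rng(1)
Xs = rng.standard_normal((4, 6, 7))
Xs -= Xs.mean(axis=0)

# Row-row and column-column covariance matrices, summed over the set.
F = sum(Xi @ Xi.T for Xi in Xs)   # 6 x 6
G = sum(Xi.T @ Xi for Xi in Xs)   # 7 x 7

# Keep the top k row factors and r column factors.
k, r = 3, 3
ef, U = np.linalg.eigh(F)
U = U[:, np.argsort(ef)[::-1][:k]]   # 6 x k
eg, V = np.linalg.eigh(G)
V = V[:, np.argsort(eg)[::-1][:r]]   # 7 x r

# Each image is compressed to its k x r core M_i = U^T X_i V.
Ms = np.stack([U.T @ Xi @ V for Xi in Xs])

# Reconstruction and the objective J = sum_i ||X_i - U M_i V^T||^2.
Xs_hat = np.stack([U @ Mi @ V.T for Mi in Ms])
J = sum(np.linalg.norm(Xi - Xh) ** 2 for Xi, Xh in zip(Xs, Xs_hat))
print(Ms.shape, J)
```

For compression, only U, V, and the cores M_i need to be stored: n·k·r + 6k + 7r numbers instead of n·6·7, with J measuring the total squared reconstruction error.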
