Local inverse

The local inverse is a kind of inverse function or matrix inverse used in image and signal processing, as well as other general areas of mathematics.

The concept of the local inverse came from interior reconstruction of CT images. One interior reconstruction method first approximately reconstructs the image outside the ROI (region of interest), then subtracts the re-projection data of this outside image from the original projection data, and finally uses the newly created data to make a new reconstruction. This idea can be generalized to inversion: instead of directly computing an inverse, the unknowns outside the local region are inverted first, the data contributed by these outside unknowns are recalculated and subtracted from the original data, and the inverse for the unknowns inside the local region is then computed from the newly produced data.

This concept is a direct extension of local tomography, the generalized inverse and the iterative refinement method. It is used to solve inverse problems with incomplete input data, similar to local tomography; however, the concept of the local inverse can also be applied to complete input data.

Local inverse for full field of view system or over-determined system

Consider a system of equations written in block form,

$$\begin{bmatrix}f\\g\end{bmatrix}=\begin{bmatrix}A&B\\C&D\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$$

Assume there are matrices $E$, $F$, $G$ and $H$ that satisfy

$$\begin{bmatrix}E&F\\G&H\end{bmatrix}\begin{bmatrix}A&B\\C&D\end{bmatrix}=J$$

Here $J$ is not equal to the identity matrix $I$, but it is close to $I$. Examples of such a matrix $\begin{bmatrix}E&F\\G&H\end{bmatrix}$ are the filtered back-projection method for image reconstruction and the inverse with regularization. In this case an approximate solution can be found as follows,

$$\begin{bmatrix}x_0\\y_0\end{bmatrix}=\begin{bmatrix}E&F\\G&H\end{bmatrix}\begin{bmatrix}f\\g\end{bmatrix}$$

A better solution $x_1$ can be found as follows,

$$\begin{bmatrix}x_1\\y_1\end{bmatrix}=\begin{bmatrix}E&F\\G&H\end{bmatrix}\begin{bmatrix}f-By_0\\g-Dy_0\end{bmatrix}$$

In the above formula $y_1$ is not needed, hence

$$x_1=E(f-By_0)+F(g-Dy_0)$$

In the same way, there is

$$y_1=G(f-Ax_0)+H(g-Cx_0)$$

In the above, the solution is divided into only two parts: $x$ is inside the ROI (region of interest) and $y$ is outside the ROI; $f$ is inside the FOV (field of view) and $g$ is outside the FOV.

The two parts can be extended to many parts; in this case, the extended method is referred to as the sub-region iterative refinement method.
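A minimal numerical sketch of this two-part refinement is given below, assuming all blocks are plain NumPy arrays. The approximate inverse $\begin{bmatrix}E&F\\G&H\end{bmatrix}$ is taken here as a Tikhonov-regularized pseudoinverse (my choice of $\lambda=0.5$), standing in for an approximate reconstructor such as filtered back-projection, so that $J$ is close to but not equal to $I$; the sizes and data are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-FOV system: [f; g] = [[A, B], [C, D]] [x; y]
A, B = rng.normal(size=(6, 3)), rng.normal(size=(6, 2))
C, D = rng.normal(size=(4, 3)), rng.normal(size=(4, 2))
x_true, y_true = rng.normal(size=3), rng.normal(size=2)
f = A @ x_true + B @ y_true
g = C @ x_true + D @ y_true

# Approximate inverse [[E, F], [G, H]]: a Tikhonov-regularized pseudoinverse,
# so that J = [[E, F], [G, H]] M is close to, but not equal to, the identity.
M = np.block([[A, B], [C, D]])
Minv = np.linalg.solve(M.T @ M + 0.5 * np.eye(5), M.T)
E, F = Minv[:3, :6], Minv[:3, 6:]
G, H = Minv[3:, :6], Minv[3:, 6:]

# First pass: [x0; y0] = [[E, F], [G, H]] [f; g]
x0 = E @ f + F @ g
y0 = G @ f + H @ g

# Refinement: subtract the re-projected "other part" before inverting again
x1 = E @ (f - B @ y0) + F @ (g - D @ y0)
y1 = G @ (f - A @ x0) + H @ (g - C @ x0)

print("|x0 - x| =", np.linalg.norm(x0 - x_true))
print("|x1 - x| =", np.linalg.norm(x1 - x_true))   # typically smaller
```

The improvement comes from the fact that the residual term in $x_1$ involves $y-y_0$ rather than $y$; how much is gained in practice depends on how close $J$ is to $I$.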

Local inverse for limited field of view system or under-determined system

$$\begin{bmatrix}f\\g\end{bmatrix}=\begin{bmatrix}A&B\\C&D\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$$

Assume $A$, $B$, $C$ and $D$ are known matrices, $x$ and $y$ are unknown vectors, $f$ is a known vector, and $g$ is an unknown vector. We are interested in the vector $x$. What is the best way to solve for it?

Assume the inverse $\begin{bmatrix}E&F\\G&H\end{bmatrix}$ of the above matrix exists, so that

$$\begin{bmatrix}A&B\\C&D\end{bmatrix}\begin{bmatrix}E&F\\G&H\end{bmatrix}=J$$

Here $J=I$ or $J$ is close to $I$. The local inverse algorithm is as follows:

(1) $g_{ex}$. An extrapolated $g$ function is obtained by

$$g_{ex}\big|_{G}=f\big|_{F}$$

i.e. the measured data $f$ on the FOV $F$ are extrapolated to the region $G$ outside the FOV.

(2) $y_{0}$. An approximate $y$ function is calculated by

$$y_{0}=Hg_{ex}$$

(3) $y$. A correction for $y$ is made by

$$y=y_{0}+y_{co}$$

(4) $f$. A corrected function $f$ is calculated by

$$f=f-By$$

(5) $g_{1ex}$. An extrapolated $g$ function is obtained again by

$$g_{1ex}\big|_{G}=f\big|_{F}$$

(6) $x_{1}$. A local inverse solution is obtained by

$$x_{1}=Ef+Fg_{1ex}$$

In the above algorithm, there are two extrapolations of the $g$ function, which are used to overcome the data truncation problem. There is also a correction for $y$. This correction can be a constant correction, which corrects the DC value of the $y$ function, or a linear correction based on prior knowledge about the $y$ function. This algorithm can be found in the reference.

In the example of the reference, it is found that $y=y_{0}+y_{co}=ky_{0}$, with $k=1.04$; in that example a constant correction is made. A more complicated correction can be made, for example a linear correction, which may achieve better results.
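The six steps translate directly into code. The following is a minimal sketch, assuming the blocks $E$, $F$, $H$, $B$ are NumPy arrays; `extrapolate` is a hypothetical, user-supplied routine standing in for the domain-specific extrapolation of steps (1) and (5), and the constant `k` plays the role of the correction $y=y_{0}+y_{co}=ky_{0}$ (with $k=1.04$ in the cited example).

```python
import numpy as np

def local_inverse_limited_fov(E, F, H, B, f, extrapolate, k=1.04):
    """Sketch of the six-step limited-FOV algorithm described above."""
    g_ex = extrapolate(f)          # (1) extrapolate g from the measured f
    y0 = H @ g_ex                  # (2) approximate outside object y
    y = k * y0                     # (3) constant correction y = y0 + y_co = k*y0
    f_c = f - B @ y                # (4) remove the estimated outside contribution
    g1_ex = extrapolate(f_c)       # (5) extrapolate again from the corrected data
    return E @ f_c + F @ g1_ex     # (6) local inverse solution x1

# Toy usage: random blocks and a trivial "extrapolation" that pads with zeros;
# a real application would use a problem-specific extrapolation here.
rng = np.random.default_rng(5)
E, F = rng.normal(size=(3, 6)), rng.normal(size=(3, 4))
H, B = rng.normal(size=(2, 4)), rng.normal(size=(6, 2))
f = rng.normal(size=6)
x1 = local_inverse_limited_fov(E, F, H, B, f, extrapolate=lambda v: np.zeros(4))
```

In practice the quality of $x_{1}$ depends mainly on the extrapolation routine and on the correction $y_{co}$, as discussed above.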

$A^{+}B$ is close to $0$

Shuang-ren Zhao defined a local inverse to solve the above problem. First consider the simplest solution,

$$f=Ax+By$$

or

$$Ax=f-By=f'$$

Here $f'=f-By$ is the correct data, in which there is no influence of the outside object function. From this data it is easy to get the correct solution,

$$x'=A^{-1}f'$$

Here $x'$ is a correct (or exact) solution for the unknown $x$, meaning $x'=x$. In case $A$ is not a square matrix or has no inverse, the generalized inverse can be applied,

$$x'=A^{+}(f-By)=A^{+}f'$$

Since $y$ is unknown, if it is set to $0$, an approximate solution is obtained,

$$x_0=A^{+}f$$

In the above solution the result $x_0$ is related to the unknown vector $y$. Since $y$ can take any values, the result $x_0$ can have very strong artifacts, namely

$$\mathrm{error}_0=|x_0-x'|=|A^{+}By|$$
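The identity $x_0-x'=A^{+}By$ can be checked directly on a small random example; the following sketch assumes dense random blocks and uses the NumPy pseudoinverse, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated system: only f = A x + B y is measured.
A = rng.normal(size=(8, 4))
B = rng.normal(size=(8, 3))
x, y = rng.normal(size=4), rng.normal(size=3)
f = A @ x + B @ y

A_pinv = np.linalg.pinv(A)
x_prime = A_pinv @ (f - B @ y)      # exact solution, using the (unknown) y
x0 = A_pinv @ f                     # generalized-inverse solution, y ignored

# The truncation artifact is exactly A^+ B y:
print(np.allclose(x0 - x_prime, A_pinv @ B @ y))   # True
print("error_0 =", np.linalg.norm(x0 - x_prime))
```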

This kind of artifact is referred to as truncation artifacts in the field of CT image reconstruction. In order to minimize these artifacts in the solution, a special matrix $Q$ is considered, which satisfies

$$QB=0$$

Hence,

$$QAx=Qf-QBy=Qf$$

Solving the above equation with the generalized inverse gives

$$x_1=[QA]^{+}Qf=[A]^{+}Q^{+}Qf$$

Here $Q^{+}$ is the generalized inverse of the matrix $Q$, and $x_1$ is a solution for $x$. It is easy to find a matrix $Q$ which satisfies $QB=0$; $Q$ can be written as follows:

$$Q=I-BB^{+}$$

This kind of matrix $Q$ is referred to as the transverse projection of the matrix $B$.

Here $B^{+}$ is the generalized inverse of the matrix $B$; $B^{+}$ satisfies

$$BB^{+}B=B$$

It can be proven that

$$QB=[I-BB^{+}]B=B-BB^{+}B=B-B=0$$

It is easy to prove that $QQ=Q$:

$$QQ=[I-BB^{+}][I-BB^{+}]=I-2BB^{+}+BB^{+}BB^{+}=I-2BB^{+}+BB^{+}=I-BB^{+}=Q$$

and hence

$$QQQ=(QQ)Q=QQ=Q$$

Hence $Q$ is also the generalized inverse of $Q$.

That means

$$Q^{+}Q=QQ=Q$$

Hence,

$$x_1=A^{+}[Q]^{+}Qf=A^{+}Qf$$

or

$$x_1=[A]^{+}[I-BB^{+}]f$$
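The properties of $Q$ used in this derivation ($QB=0$, $QQ=Q$, and $Q^{+}=Q$) can be checked numerically; the sketch below assumes a dense random $B$ and the NumPy (Moore–Penrose) pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(8, 3))

B_pinv = np.linalg.pinv(B)          # Moore-Penrose inverse of B
Q = np.eye(8) - B @ B_pinv          # transverse projection of B

print(np.allclose(Q @ B, 0))                  # Q B = 0
print(np.allclose(Q @ Q, Q))                  # Q is idempotent
print(np.allclose(np.linalg.pinv(Q), Q))      # Q is its own generalized inverse
```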

The matrix

$$A_L=[A]^{+}[I-BB^{+}]$$

$A_L$ is referred to as the local inverse of the matrix $\begin{bmatrix}A&B\\C&D\end{bmatrix}$. Using the local inverse instead of the generalized inverse or the inverse avoids artifacts coming from unknown input data. Considering

$$[A]^{+}[I-BB^{+}]f=[A]^{+}[I-BB^{+}](f-By)=[A]^{+}[I-BB^{+}]f'$$

it follows that

$$x_1=[A]^{+}[I-BB^{+}]f'$$

Hence $x_1$ is only related to the correct data $f'$. The corresponding error can be calculated as

$$\mathrm{error}_1=|x_1-x'|=|[A]^{+}BB^{+}f'|$$

This kind of error is called the bowl effect. The bowl effect is not related to the unknown object $y$; it is only related to the correct data $f'$.
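The bowl-effect identity $x_1-x'=-A^{+}BB^{+}f'$, and its independence from the unknown $y$, can likewise be checked on a small random example (same kind of toy setup as the previous sketches):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.normal(size=(8, 4)), rng.normal(size=(8, 3))
x, y = rng.normal(size=4), rng.normal(size=3)
f_prime = A @ x                       # correct data, no outside influence
f = f_prime + B @ y                   # measured (contaminated) data

A_pinv, B_pinv = np.linalg.pinv(A), np.linalg.pinv(B)
x_prime = A_pinv @ f_prime
x1 = A_pinv @ (np.eye(8) - B @ B_pinv) @ f   # local inverse solution

# The bowl effect depends only on f', not on the unknown y:
print(np.allclose(x1 - x_prime, -A_pinv @ B @ B_pinv @ f_prime))   # True
```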

In case the contribution of $[A]^{+}BB^{+}f'$ to $x$ is smaller than that of $[A]^{+}By$, i.e.

$$\mathrm{error}_1<\mathrm{error}_0$$

then the local inverse solution $x_1$ is better than $x_0$ for this kind of inverse problem. Using $x_1$ instead of $x_0$, the truncation artifacts are replaced by the bowl effect. This result is the same as in local tomography; hence the local inverse is a direct extension of the concept of local tomography.

It is well known that the generalized-inverse solution is a minimal $L_2$-norm solution. From the above derivation it is clear that the local-inverse solution is a minimal $L_2$-norm solution subject to the condition that the influence of the unknown object $y$ is $0$. Hence the local inverse is also a direct extension of the concept of the generalized inverse.
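As a closing illustration, the following sketch (again with random dense blocks and NumPy pseudoinverses, purely for demonstration) compares the generalized-inverse solution $x_0$ with the local-inverse solution $x_1$ as the unknown outside object $y$ is scaled up: the truncation artifact $\mathrm{error}_0$ grows with $y$, while the bowl effect $\mathrm{error}_1$ stays fixed.

```python
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.normal(size=(10, 4)), rng.normal(size=(10, 3))
x, y = rng.normal(size=4), rng.normal(size=3)

A_pinv = np.linalg.pinv(A)
A_local = A_pinv @ (np.eye(10) - B @ np.linalg.pinv(B))   # local inverse A_L

for scale in (1.0, 10.0, 100.0):
    f = A @ x + B @ (scale * y)       # measured data, outside object scaled up
    x_prime = A_pinv @ (A @ x)        # exact solution (A has full column rank)
    x0 = A_pinv @ f                   # generalized inverse: truncation artifact
    x1 = A_local @ f                  # local inverse: bowl effect only
    print(scale,
          np.linalg.norm(x0 - x_prime),   # error_0 grows with the outside object
          np.linalg.norm(x1 - x_prime))   # error_1 stays the same
```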
