Date: Fri, 21 June 2024 04:36:06 +00:00
Mime-Version: 1.0 (Produced by Tiki)
Content-Type: application/x-tikiwiki;
pagename=Non-statistical%20iterative%20reconstructions;
flags="";
author=david.legrady;
version=4;
lastmodified=1329572480;
author_id=188.142.216.4;
summary="";
hits=4839;
description="";
charset=utf-8
Content-Transfer-Encoding: binary
With the ((A képrekonstrukció diszkrét bázisa|#A képrekonstrukció diszkrét bázisa|previously introduced notations)), the fundamental equation of image reconstruction reads:
{EQUATION(size="70")}$\mathbf{y}=\mathbf{A}\mathbf{x} {EQUATION}
Do not forget that the __A__ matrix is a sparse matrix, not square, and the number of its elements may be around 10{SUP()}12{SUP}.
!The Moore-Penrose generalized inverse (pseudoinverse)
As the matrix __A__ is not square, most of the common methods for solving the system (e.g. the Gauss-Seidel iteration) cannot be applied directly. Whether our system is overdetermined or underdetermined, the criterion for the solution can be stated as:
{EQUATION(size="70")}$ \underset{\mathbf{x}}{min}\left \| \mathbf{y}-\mathbf{A}\boldsymbol{\mathbf{x}} \right \| {EQUATION}
where ~np~||.||~/np~ is a norm of our choice. Let us assume that {EQUATION(size="70")}$ \widetilde{\mathbf{x}} {EQUATION} minimizes the above expression, and let us choose the norm induced by the scalar product. Then for an arbitrary vector
{EQUATION(size="70")}$ \mathbf{v} {EQUATION} and any real number ''t'' it must hold that
{EQUATION(size="70")}$ \partial _{t}\left \| \mathbf{y}-\mathbf{A}\left (\widetilde{\mathbf{x}} -t\mathbf{v} \right ) \right \|^{2}\mid _{t=0} =0 {EQUATION}
where we used that minimizing the norm is equivalent to minimizing its square. Calculating the derivative:
{EQUATION(size="70")}$ 2\left (\mathbf{A}\mathbf{v} \right )^{T}\left (\mathbf{y}-\mathbf{A}\left (\widetilde{\mathbf{x}} -t\mathbf{v} \right ) \right )\mid _{t=0} =0{EQUATION}
thus
{EQUATION(size="70")}$ \mathbf{v}^{T} \mathbf{A}^{T}\left (\mathbf{y}-\mathbf{A}\widetilde{\mathbf{x}} \right ) =0 {EQUATION}
This must hold for every __v__, so
{EQUATION(size="70")}$ \mathbf{A}^{T}\mathbf{y}= \mathbf{A}^{T}\mathbf{A}\widetilde{\mathbf{x}}{EQUATION}
The matrix __A__{SUP()}T{SUP}__A__ is now a square matrix, and if it is invertible:
{EQUATION(size="70")}$ \widetilde{\mathbf{x}}= \left (\mathbf{A}^{T}\mathbf{A} \right )^{-1}\mathbf{A}^{T}\mathbf{y} {EQUATION}
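The normal-equation solution above can be sketched directly in NumPy (a small hypothetical system for illustration; a real system matrix __A__ is far too large for dense algebra):

```python
import numpy as np

# Hypothetical small overdetermined system: 4 equations, 3 unknowns.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true                       # consistent measurement vector

# Normal-equation solution: x~ = (A^T A)^{-1} A^T y
x_tilde = np.linalg.solve(A.T @ A, A.T @ y)

print(np.allclose(x_tilde, x_true))  # consistent system: x is recovered
```

In practice one would call `np.linalg.lstsq(A, y)` rather than form __A__{SUP()}T{SUP}__A__ explicitly, which squares the condition number.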
This defines the Moore-Penrose pseudoinverse __A__{SUP()}+{SUP}:
{EQUATION(size="70")}$ \mathbf{A}^{+}= \left (\mathbf{A}^{T}\mathbf{A} \right )^{-1}\mathbf{A}^{T} {EQUATION}
For the calculation of __A__{SUP()}+{SUP} we can use a singular value decomposition (SVD) or any other standard method, or it can be calculated by a simple, though not very efficient, iteration (the Ben-Israel–Cohen iteration):
{EQUATION(size="70")}$\mathbf{A}_{(k)}^{+}=2\mathbf{A}_{(k-1)}^{+} -\mathbf{A}_{(k-1)}^{+}\mathbf{A}\mathbf{A}_{(k-1)}^{+} {EQUATION}
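A minimal NumPy sketch of the Ben-Israel–Cohen iteration. The starting guess __A__{SUP()}+{SUP} = __A__{SUP()}T{SUP}/||__A__||{SUB()}2{SUB}{SUP()}2{SUP} is an assumption not fixed by the text above; it is a standard choice that guarantees convergence:

```python
import numpy as np

def pinv_bic(A, iters=40):
    """Ben-Israel-Cohen iteration: X_k = 2 X_{k-1} - X_{k-1} A X_{k-1}."""
    # Starting guess X_0 = A^T / sigma_max^2 ensures convergence.
    X = A.T / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        X = 2 * X - X @ A @ X
    return X

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                          # 3x2, full column rank
print(np.allclose(pinv_bic(A), np.linalg.pinv(A)))  # agrees with NumPy's pinv
```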
!The Kaczmarz iteration
The system response matrix of currently available PET, SPECT and CT scanners is large compared to easily accessible computer hardware, and computations on sparse matrices are not very efficient. A simple, stable and fast algorithm is the Kaczmarz iteration. The underlying idea is that an ''n''x''m'' matrix __A__ actually encodes ''n'' equations of hyperplanes in an ''m''-dimensional space, and their intersection point is the solution __x__. If the system is overdetermined and contaminated by noise, the intersection may not be a single point; if it is underdetermined, the solution can perhaps only be narrowed down to a certain domain.
The essence of the Kaczmarz iteration is that we move towards the domain of the solution by projecting our previous guess __x__ perpendicularly onto the hyperplane given by a single row of the matrix.
For example, let us take a 2x2 matrix __A__ with elements a{SUB()}ij{SUB}.
Now the following system of two equations is given:
{EQUATION(size="70")}$ \left.\begin{matrix}
y_{1}=a_{11}x_{1}+a_{12}x_{2}
\\
y_{2}=a_{21}x_{1}+a_{22}x_{2}
\end{matrix}\right\} {EQUATION}
where the crossing point of the two lines is our solution. The first 4 points of the iteration are sketched on the graph.
{img fileId="450" thumb="mousesticky" height="200px" width="200px" desc="The Kaczmarz iteration in two dimensions for 4 steps"}
In this 2D example it can be readily seen that
* if the lines were perpendicular to each other, i.e. the rows of the matrix were orthogonal, we would get the exact solution in two steps
* if the lines were parallel, i.e. the set of equations were contradictory, the iteration would oscillate between the two lines
* if the two lines were inclined at only a very small angle, i.e. the rows were strongly correlated with each other, the iteration would converge very slowly; the more independent the rows are, the faster the method converges
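These cases can be checked numerically using the perpendicular-projection step derived at the end of this section, on a small hypothetical 2x2 example:

```python
import numpy as np

def kaczmarz_2d(A, y, x0, steps):
    """Alternate perpendicular projections onto the lines A[i] . x = y[i]."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        a = A[k % 2]
        x -= (a @ x - y[k % 2]) / (a @ a) * a
    return x

x_true = np.array([1.0, 0.0])

# Orthogonal rows: the exact solution is reached after two projections.
A_perp = np.array([[1.0, 0.0], [0.0, 1.0]])
x1 = kaczmarz_2d(A_perp, A_perp @ x_true, [5.0, 5.0], 2)

# Strongly correlated (nearly parallel) rows: after 20 steps we are still far away.
A_corr = np.array([[1.0, 0.0], [1.0, 0.1]])
x2 = kaczmarz_2d(A_corr, A_corr @ x_true, [5.0, 5.0], 20)

print(np.linalg.norm(x1 - x_true), np.linalg.norm(x2 - x_true))
```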
A serious advantage of the Kaczmarz iteration is that each iteration step uses only a single row of the matrix; we never need to handle the whole matrix at once.
Now let us look at the steps of the projection in the general case. The index ''i'' denotes which row of the matrix is being used, i.e. which row vector comes into play.
Let __x__{SUP()}k{SUP} be the solution obtained in the ''k''-th step. In the (''k''+1)-th
iteration step we project this point onto the hyperplane determined by equation ''i''. The equation of this hyperplane reads:
{EQUATION(size="70")}$ \mathbf{A}^{i}\mathbf{x}=y_{i} {EQUATION}
((Egyenes és egyéb lineáris geometria elemek reprezentációja|#Egyenes és egyéb lineáris geometria elemek reprezentációja|Earlier we have seen)) that such an equation describes a hyperplane perpendicular to the direction vector {EQUATION(size="70")}$ \mathbf{A}^{i}/\sqrt{\mathbf{A}^{i}\mathbf{A}^{i}}{EQUATION}, at a distance {EQUATION(size="70")}$ y_{i}/\sqrt{\mathbf{A}^{i}\mathbf{A}^{i}}{EQUATION} from the origin. So, when in the ''k''-th step
we want to project our solution onto the ''i''-th hyperplane, we have to move in the direction of {EQUATION(size="70")}$ \mathbf{A}^{i}/\sqrt{\mathbf{A}^{i}\mathbf{A}^{i}}{EQUATION} by a signed distance {EQUATION(size="70")}$ -\alpha{EQUATION}. For simpler formulas, let us absorb the normalisation of the unit vector into this expression, so the next point can be written as: {EQUATION(size="70")}$ \mathbf{x}^{k+1}=\mathbf{x}^{k}-\alpha \mathbf{A}^{i} {EQUATION}
The next point must lie on the ''i''-th hyperplane, therefore:
{EQUATION(size="70")}$ \mathbf{A}^{i}\left (\mathbf{x}^{k}-\alpha \mathbf{A}^{i} \right )=y_{i}{EQUATION}
From this, {EQUATION(size="70")}$ \alpha{EQUATION} can be expressed:
{EQUATION(size="70")}$ \alpha=\frac{\mathbf{A}^{i}\mathbf{x}^{k}-y_{i}}{\mathbf{A}^{i}\mathbf{A}^{i}} {EQUATION}
The solution in the (''k''+1)-th iteration step therefore reads:
{EQUATION(size="70")}$ \mathbf{x}^{k+1}=\mathbf{x}^{k}-\frac{\mathbf{A}^{i}\mathbf{x}^{k}-y_{i}}{\mathbf{A}^{i}\mathbf{A}^{i}} \mathbf{A}^{i}{EQUATION}
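The update formula translates directly into code. A minimal sketch with cyclic row sweeps over a small hypothetical dense matrix (a real system matrix would be accessed row by row in sparse storage):

```python
import numpy as np

def kaczmarz(A, y, sweeps=100):
    """Kaczmarz / ART: project the estimate onto each row's hyperplane in turn."""
    n, m = A.shape
    x = np.zeros(m)
    for _ in range(sweeps):
        for i in range(n):
            a = A[i]                            # only one row is needed per step
            x -= (a @ x - y[i]) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true                                  # consistent, overdetermined system
print(np.allclose(kaczmarz(A, y), x_true))      # converges to the true solution
```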
In the medical diagnostic literature the Kaczmarz iteration is also known as ART (Algebraic Reconstruction Technique). The method itself is simple, but in practice only its more advanced variants are used. Interestingly, the Kaczmarz iteration can be seen as an extreme, single-row-subset version of the OSEM iteration scheme.
((Statisztikai alapú képrekonstrukciós stratégiák|#Statisztikai alapú képrekonstrukciós stratégiák|Our next chapter)) extends the matrix equation to a probabilistic interpretation of its elements and of the distance measure.