Date: Fri, 1 Dec. 2023 22:26:42 +00:00
Mime-Version: 1.0 (Produced by Tiki)
Content-Type: application/x-tikiwiki;
pagename=The%20ML-EM%20algorithm%20for%20emission%20tomography;
flags="";
author=david.legrady;
version=3;
lastmodified=1329561231;
author_id=188.142.216.4;
summary="";
hits=4235;
description="";
charset=utf-8
Content-Transfer-Encoding: binary
!The ML-EM algorithm for emission tomography
In emission tomography the y{SUB()}i{SUB} number of detected photons in the i-th LoR follows a Poisson distribution:
{EQUATION(size="70")}$ \wp \left ( y_{i} \right )=\frac{\left [Ey_{i} \right ] ^{ y_{i}}e^{-Ey_{i}}}{y_{i}!} {EQUATION}
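As a quick numerical sketch of this model (poisson_pmf and the chosen expected count are illustrative, not part of the text), the pmf can be evaluated directly:

```python
import math

def poisson_pmf(y, lam):
    # wp(y) = lam^y * exp(-lam) / y!, with lam playing the role of E[y]
    return lam ** y * math.exp(-lam) / math.factorial(y)

lam = 4.2  # assumed expected count in one LoR (hypothetical value)
probs = [poisson_pmf(y, lam) for y in range(60)]
mean = sum(y * p for y, p in enumerate(probs))
# The pmf sums to 1 and its mean recovers the expectation E[y] = lam.
```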
The estimation would be simpler, indeed trivial with an ML estimate, if we knew which voxel each count in a LoR came from. Let us therefore complete the statistical model with an __s__ data structure, in which the measured data in the i-th LoR are sorted according to the voxel they originated from:
{EQUATION(size="70")}$ y_{i}=\sum_{m=1}^{M}s_{im} {EQUATION}
The expectation of the completed data in terms of the system matrix:
{EQUATION(size="70")}$ E\left [s_{im} \right ]=A_{im}x_{m} {EQUATION}
Let us construct the pdf for the completed dataset:
{EQUATION(size="70")}$ \ln \wp\left ( \mathbf{s};\mathbf{x} \right )= \ln \prod_{i=1}^{N}\prod_{m=1}^{M}\frac{\left [Es_{im} \right ] ^{ s_{im}}e^{-Es_{im}}}{s_{im}!} =\sum_{i}^{N}\sum_{m}^{M}s_{im}\ln \left ( A_{im}x_{m} \right )-A_{im}x_{m}-\ln \left ( s_{im}! \right ) {EQUATION}(1)
!!The E step:
The E step, as we have seen in ((Az ML-EM algoritmus|the previous section)), is:
{EQUATION(size="70")}$ Q\left ( \mathbf{x},\mathbf{x}^{n} \right )=E\left [\ln \wp \left ( \mathbf{s}; \mathbf{x} \right )\mid\mathbf{y} ;\mathbf{x}^{n} \right ]{EQUATION}
For this expectation we need the following conditional pdf:
{EQUATION(size="70")}$ \wp \left ( \mathbf{s} \mid\mathbf{y} ;\mathbf{x}^{n} \right ) {EQUATION}
The y{SUB()}i{SUB} counts belong to voxel m with probability p{SUB()}im{SUB}. That means a binomial distribution with y{SUB()}i{SUB} trials (the joint distribution is actually multinomial, which has the same marginal expectations as the binomial):
{EQUATION(size="70")}$ p(s_{im}\mid y_{i};\mathbf{x}^{n})=\binom{y_{i}}{s_{im}}p_{im}^{s_{im}}\left ( 1-p_{im} \right )^{y_{i}-s_{im}} {EQUATION}
The p{SUB()}im{SUB} probabilities are modeled using the parameter estimates of the n-th iteration:
{EQUATION(size="70")}$ p_{im}=\frac{A_{im}x_{m}^{n}}{\sum_{m}^{M}A_{im}x_{m}^{n}} {EQUATION}
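For a single LoR, p{SUB()}im{SUB} is just a normalisation of the weights A{SUB()}im{SUB}x{SUB()}m{SUB}{SUP()}n{SUP}; a minimal sketch with illustrative names and values:

```python
def lor_voxel_probs(A_row, x):
    # p_im = A_im * x_m^n / sum_m A_im * x_m^n for one fixed LoR i
    weights = [a_im * x_m for a_im, x_m in zip(A_row, x)]
    total = sum(weights)
    return [w / total for w in weights]

# Example: one LoR crossing three voxels, with the current estimate x^n.
p = lor_voxel_probs([0.5, 0.3, 0.0], [2.0, 4.0, 1.0])
```

A voxel the LoR does not cross (A{SUB()}im{SUB} = 0) correctly receives probability zero, and each row of p sums to one.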
Let us calculate the expectation of Eq. (1)! Only one term depends on s{SUB()}im{SUB}, and the dependence is simply linear, so we only have to evaluate the following expectation:
{EQUATION(size="70")}$ Es_{im}=y_{i}p_{im} {EQUATION}
which is a basic property of the binomial distribution.
!! The M step:
After substitution, Eq. (1) takes the following form to be maximized:
{EQUATION(size="70")}$ \sum_{i}^{N}\sum_{m}^{M}y_{i}p_{im}\ln \left ( A_{im}x_{m} \right )-A_{im}x_{m}-E\ln \left ( s_{im}! \right ) {EQUATION} (2)
Calculating the last expectation on the RHS is not necessary: it contains only quantities of the n-th iteration, so it is constant when looking for the maximum. Taking the derivative of Eq. (2) with respect to the x{SUB()}j{SUB} components of the parameters:
{EQUATION(size="70")}$ \partial _{x_{j}}\sum_{i}^{N}\sum_{m}^{M}y_{i}p_{im}\ln \left ( A_{im}x_{m} \right )-A_{im}x_{m}-E\ln \left ( s_{im}! \right )=0 {EQUATION}
thus:
{EQUATION(size="70")}$ \sum_{i}^{N}\sum_{m}^{M}y_{i}p_{im}\frac{1}{A_{im}x_{m}}A_{im}\delta _{jm}-A_{im}\delta _{jm}=0{EQUATION}
Carrying out the sum over m (the Kronecker delta keeps only the m = j term):
{EQUATION(size="70")}$ \sum_{i}^{N}y_{i}p_{ij}\frac{1}{x_{j}}-A_{ij}=0 {EQUATION}
Rearranging and substituting p{SUB()}ij{SUB}:
{EQUATION(size="70")}$ x_{j}^{n+1}=\frac{\sum_{i}^{N}y_{i}p_{ij}}{\sum_{i}^{N}A_{ij}}=x_{j}^{n}\frac{1}{\sum_{i}^{N}A_{ij}}\frac{\sum_{i}^{N}y_{i}A_{ij}}{\sum_{m}^{M}A_{im}x_{m}^{n}} {EQUATION}
The iteration scheme "corrects" the previous concentration estimate with a correction factor in each step. The correction factor is given by the measured data divided by the counts predicted from the previous estimate, and this ratio is then distributed ("backprojected") into the voxels according to the system response matrix.
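The final update formula can be sketched in plain Python (a toy implementation under the model above; A, x and y are small lists here, and the function name is illustrative, whereas a real reconstruction would use optimized projectors):

```python
def mlem_update(A, x, y):
    """One ML-EM iteration: x_j <- x_j / (sum_i A_ij) * sum_i y_i A_ij / (A x)_i.

    A: N x M system matrix (list of rows), x: current estimate (length M),
    y: measured counts per LoR (length N).
    """
    N, M = len(A), len(x)
    # Forward projection: expected counts in each LoR under the current estimate.
    fp = [sum(A[i][m] * x[m] for m in range(M)) for i in range(N)]
    # Sensitivity (normalisation) per voxel: sum of the system matrix over LoRs.
    sens = [sum(A[i][j] for i in range(N)) for j in range(M)]
    # Backproject the measured/expected ratio and correct the estimate.
    return [x[j] / sens[j] * sum(y[i] * A[i][j] / fp[i] for i in range(N))
            for j in range(M)]
```

One such update is applied per iteration. A useful sanity check is that every update preserves the total counts: summing x{SUB()}j{SUB}{SUP()}n+1{SUP} weighted by the voxel sensitivities reproduces the total measured counts.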