
The ML-EM algorithm for emission tomography

In emission tomography the number of photons $y_j$ detected in the $j$th line of response (LoR) follows a Poisson distribution:
$ \wp \left ( y_{j} \right )=\frac{\left ( E\left [ y_{j} \right ] \right ) ^{ y_{j}}e^{-E\left [ y_{j} \right ]}}{y_{j}!}
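As an illustration, the Poisson measurement model can be simulated numerically. The system matrix `A`, the voxel activities `x`, and their sizes below are toy values assumed for the example, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: A is an (N LoRs x M voxels) system matrix and x the
# voxel activity vector (all numbers here are illustrative assumptions).
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])
x = np.array([10.0, 4.0])

Ey = A @ x           # expected counts E[y_j] in each LoR
y = rng.poisson(Ey)  # one noisy realization of the measurement
```

Each measured count `y[j]` is an integer Poisson sample whose mean is the forward-projected activity `Ey[j]`.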

Maximum-likelihood estimation would be trivial if we knew from which voxel each count in a LoR originated. Let us therefore complete the statistical model with a data structure $s$, in which the measured counts in the $i$th LoR are sorted according to the voxel they came from:

$ y_{i}=\sum_{m=1}^{M}s_{im}
The expectation of each such contribution is determined by the system matrix element $A_{im}$ and the activity $x_m$ of the $m$th voxel:
$ E\left [s_{im}  \right ]=A_{im}x_{m}

Let us construct the log-likelihood of the completed dataset:
$ \ln \wp\left ( \mathbf{s};\mathbf{x} \right )= \ln \prod_{i=1}^{N}\prod_{m=1}^{M}\frac{\left ( E\left [ s_{im} \right ] \right ) ^{ s_{im}}e^{-E\left [ s_{im} \right ]}}{s_{im}!} =\sum_{i}^{N}\sum_{m}^{M}\left [ s_{im}\ln \left ( A_{im}x_{m} \right )-A_{im}x_{m}-\ln \left ( s_{im}! \right ) \right ] \quad (1)

The E step:

The E step, as we have seen in the previous section, computes:
$ Q\left ( \mathbf{x},\mathbf{x}^{n} \right )=E\left [\ln \wp \left ( \mathbf{s}; \mathbf{x} \right )\mid\mathbf{y} ;\mathbf{x}^{n} \right ]
For this expectation we need to construct the conditional pdf:
$ \wp \left ( \mathbf{s} \mid\mathbf{y} ;\mathbf{x}^{n} \right )
Each of the $y_i$ counts belongs to voxel $m$ with probability $p_{im}$. That means a binomial distribution with $y_i$ trials (strictly speaking, the joint distribution over the voxels is multinomial, but it has the same marginal expectation as the binomial):
$ p\left (s_{im}\mid y_{i};\mathbf{x}^{n}\right )=\binom{y_{i}}{s_{im}}p_{im}^{s_{im}}\left ( 1-p_{im} \right )^{y_{i}-s_{im}}

The probabilities $p_{im}$ are modeled using the parameter estimates of the $n$th iteration:
$ p_{im}=\frac{A_{im}x_{m}^{n}}{\sum_{m'}^{M}A_{im'}x_{m'}^{n}}
Let us calculate the expectation of Eq. (1). Only one term depends on $s_{im}$, and the dependence is linear, so we only have to evaluate the expectation
$ E\left [ s_{im} \right ]=y_{i}p_{im}
which is a basic property of the binomial distribution.
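This expectation can be checked empirically by sampling. The count $y_i$ and probability $p_{im}$ below are toy values assumed for the check:

```python
import numpy as np

# Empirical check of E[s_im] = y_i * p_im for a single LoR and voxel
# (y_i = 50 and p_im = 0.3 are illustrative assumptions).
rng = np.random.default_rng(1)
y_i, p_im = 50, 0.3
draws = rng.binomial(y_i, p_im, size=200_000)
# The sample mean of the draws should be close to y_i * p_im = 15.
```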

The M step:

Substituting, Eq. (1) takes the following form, ready for maximization:
$ \sum_{i}^{N}\sum_{m}^{M}\left [ y_{i}p_{im}\ln \left ( A_{im}x_{m} \right )-A_{im}x_{m}-E\left [ \ln \left ( s_{im}! \right ) \right ] \right ] \quad (2)

Calculating the last expectation is not necessary: it involves only quantities of the $n$th iteration, so it is a constant when looking for the maximum. Taking the derivative of Eq. (2) with respect to the component $x_j$ of the parameter vector:
$ \partial _{x_{j}}\sum_{i}^{N}\sum_{m}^{M}\left [ y_{i}p_{im}\ln \left ( A_{im}x_{m} \right )-A_{im}x_{m}-E\left [ \ln \left ( s_{im}! \right ) \right ] \right ]=0
thus:
$  \sum_{i}^{N}\sum_{m}^{M}y_{i}p_{im}\frac{1}{A_{im}x_{m}}A_{im}\delta _{jm}-A_{im}\delta _{jm}=0
Carrying out the sum over $m$ (the Kronecker delta selects $m=j$):
$ \sum_{i}^{N}y_{i}p_{ij}\frac{1}{x_{j}}-A_{ij}=0
After rearranging and substituting $p_{ij}$:
$ x_{j}^{n+1}=\frac{\sum_{i}^{N}y_{i}p_{ij}}{\sum_{i}^{N}A_{ij}}=x_{j}^{n}\frac{1}{\sum_{i}^{N}A_{ij}}\sum_{i}^{N}\frac{y_{i}A_{ij}}{\sum_{m}^{M}A_{im}x_{m}^{n}}
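The update formula above translates directly into code. The following is a minimal sketch of the ML-EM iteration; the function name, the toy system matrix, and the iteration count are assumptions for illustration:

```python
import numpy as np

def mlem(A, y, n_iter=500, x0=None, eps=1e-12):
    """ML-EM iteration: x_j <- x_j / sum_i A_ij * sum_i A_ij * y_i / (A x)_i.

    A: (N, M) system matrix; y: (N,) measured counts.
    eps guards against division by zero.
    """
    N, M = A.shape
    x = np.ones(M) if x0 is None else x0.astype(float).copy()
    sens = A.sum(axis=0)                 # sensitivity: sum_i A_ij per voxel
    for _ in range(n_iter):
        fwd = A @ x                      # forward projection (A x^n)_i
        ratio = y / np.maximum(fwd, eps) # measured / predicted counts
        x = x / np.maximum(sens, eps) * (A.T @ ratio)  # backproject, correct
    return x

# Noise-free sanity check: with y = A x_true the iteration should approach
# x_true (toy numbers assumed).
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])
x_true = np.array([10.0, 4.0])
y = A @ x_true
x_hat = mlem(A, y)
```

Note that the update is multiplicative, so a nonnegative starting vector stays nonnegative throughout the iterations, which matches the physical constraint on activities.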

The iteration scheme "corrects" the previous activity estimate with a multiplicative factor at each step. The correction factor is obtained by dividing the measured counts by the counts predicted from the previous estimate, and this ratio is then distributed ("backprojected") into the voxels according to the system response matrix.

 

