Date: Thu, 18 July 2024 21:34:30 +00:00
Mime-Version: 1.0 (Produced by Tiki)
Content-Type: application/x-tikiwiki;
pagename=Statistical%20image%20reconstruction%20strategies;
flags="";
author=david.legrady;
version=6;
lastmodified=1329560901;
author_id=188.142.216.4;
summary="";
hits=3513;
description="";
charset=utf-8
Content-Transfer-Encoding: binary
Let us add to our ((Nem statisztikai iteratív megoldások|earlier)) equation the assumption that the linear system of equations is only valid as an expected {EQUATION(size="70")}$ E {EQUATION} value of some statistical quantities:
{EQUATION(size="70")}$E\left ( \mathbf{y} \right )=\mathbf{A}\mathbf{x} {EQUATION}
thus the incoming information, the measured data, is not "carved in stone" as exact values: even when no noise is present, it may "legitimately" take multiple values while the unknowns __x__ remain unchanged. The linearity of the operator projecting the unknowns to the space of the measured data can also be interpreted as statistical independence: e.g. in emission tomography the radioactive decays happening in one voxel are independent of the decays happening in the adjacent voxels, while for transmission tomography linearity only holds if we disregard spectral changes.
Let us generalize our model so that the acquired data and the unknowns are connected by a probability density function (pdf) describing the probability of obtaining the data __y__ given that the unknowns take the value __x__; let us denote this pdf by {EQUATION(size="70")}$\wp \left ( \mathbf{y};\mathbf{x} \right ) {EQUATION}. When __x__ is also regarded as a random variable, we should reflect that in the notation: {EQUATION(size="70")}$\wp \left ( \mathbf{y}\mid\mathbf{x} \right ) {EQUATION}
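As a toy numerical sketch of this statistical model (the 2×2 system matrix and the parameter values below are hypothetical illustrations, not a real scanner geometry; Poisson counting statistics are assumed, as is common in emission tomography):

```python
import numpy as np

# Toy sketch of the statistical model E(y) = A x.
rng = np.random.default_rng(0)

A = np.array([[0.8, 0.2],
              [0.3, 0.7]])        # hypothetical system matrix
x = np.array([100.0, 50.0])       # the unknown parameters

y_mean = A @ x                    # E(y) = A x -- fixed by x
y = rng.poisson(y_mean)           # one "legal" noisy realization

# Repeating the measurement yields a different y for the same x;
# only the expectation is carved in stone.
print(y_mean, y)
```

Running the draw repeatedly illustrates the point of the paragraph above: the expectation `y_mean` is determined by __x__, while the realized counts fluctuate around it.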
!! Maximum-Likelihood estimation
A basic tool of statistical estimation theory is Maximum-Likelihood estimation, in which we look for the parameters __x__ under which the given measured data would occur with the maximum likelihood, i.e. our estimate can be formalized as
{EQUATION(size="70")}$ \mathbf{\widehat{x}}=\arg \underset{\mathbf{x}}{\max} \wp \left ( \mathbf{y}\mid \mathbf{x} \right ) {EQUATION}
The majority of measurable physical quantities can be modeled by distributions from the exponential family; therefore the maximization criterion is set on the logarithm of the likelihood function {EQUATION(size="70")}$\wp \left ( \mathbf{y}\mid\mathbf{x} \right ) {EQUATION}.
Maximum-Likelihood estimation is such a basic mathematical tool, both in theory and in practice, that we will not dwell on it here; we only show an example for those wishing to refresh their memory.
An example of the Maximum-Likelihood estimator can be found ((Példa Maximum-Likelihood becslésre|on a separate page)).
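As a quick refresher alongside the linked example, a minimal numerical sketch (toy data of our own, assuming i.i.d. Poisson counts):

```python
import numpy as np

# For i.i.d. Poisson counts y_i the log-likelihood is
#   ln p(y | lam) = sum_i (y_i ln lam - lam - ln y_i!),
# and its maximizer is the sample mean; we recover this numerically.
y = np.array([4, 6, 5, 7, 3])

lam = np.linspace(0.5, 12.0, 2301)                       # candidate parameters
loglik = np.sum(y[:, None] * np.log(lam) - lam, axis=0)  # ln y_i! dropped (constant in lam)

lam_ml = lam[np.argmax(loglik)]
print(lam_ml)       # equals the sample mean, 5.0
```

Working with the logarithm, as the text notes, turns the product of exponential-family terms into a sum without moving the maximizer.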
!! Bayes estimation and the Maximum A Posteriori criterion
Now let us regard the model parameters __x__ as random variables and ask: given that we acquired the measured data __y__, what is the pdf of the parameters?
{EQUATION(size="70")}$\wp \left ( \mathbf{x}\mid\mathbf{y} \right ) {EQUATION}
The Bayes estimation maximizes this probability; that is, it answers the question ''what is the most probable parameter set given the measured data?'' This probability can be considered a posteriori, i.e. a post-measurement pdf, hence the name Maximum A Posteriori (MAP) technique.
In mathematical symbols using the Bayes-theorem:
{EQUATION(size="70")}$ \mathbf{\widehat{x}}=\arg \underset{\mathbf{x}}{\max} \wp \left ( \mathbf{x}\mid \mathbf{y} \right )=\arg \underset{\mathbf{x}}{\max} \frac{\wp \left ( \mathbf{y}\mid \mathbf{x} \right )}{\wp \left ( \mathbf{y} \right )}\wp \left ( \mathbf{x} \right )=\arg \underset{\mathbf{x}}{\max}\left [ \ln \wp \left ( \mathbf{y}\mid \mathbf{x} \right )+\ln \wp \left ( \mathbf{x} \right ) \right ]{EQUATION}
The first maximization term is already known to us from the Maximum-Likelihood estimation; the second depends on the pdf of the parameters, contains our a priori knowledge, and is called the prior. Its interpretation is that the parameters cannot take arbitrary values: the activity concentration, for example, can never exceed the activity concentration of the administered radiopharmaceutical.
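To make the role of the prior concrete, a toy sketch (our own hypothetical numbers: i.i.d. Poisson counts with a Gaussian prior on the rate):

```python
import numpy as np

# MAP estimation maximizes ln p(y|x) + ln p(x).  Toy case: i.i.d. Poisson
# counts with rate lam and a Gaussian prior N(mu, sigma^2) on lam.
y = np.array([4, 6, 5, 7, 3])
mu, sigma = 4.0, 1.0                     # hypothetical a priori knowledge

lam = np.linspace(0.5, 12.0, 2301)
loglik = np.sum(y[:, None] * np.log(lam) - lam, axis=0)
logprior = -0.5 * ((lam - mu) / sigma) ** 2

lam_ml = lam[np.argmax(loglik)]          # 5.0, the sample mean
lam_map = lam[np.argmax(loglik + logprior)]
print(lam_ml, lam_map)                   # the prior pulls the estimate toward mu
```

The MAP estimate lands between the ML estimate and the prior mean, showing how the second term of the maximization tempers the data-driven first term.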
Our pre-examination knowledge may be very hard to formulate mathematically. In practice the priors are very simple conditions, such as the continuity or smoothness of the spatial distribution of __x__. As the ''ln'' function is often applied, the priors are also often given in an exponential form, called Gibbs priors:
{EQUATION(size="70")}$ \wp \left ( \mathbf{x} \right )=e^{-\gamma U\left ( \mathbf{x} \right )} {EQUATION}
For example, if our a priori knowledge is that when a voxel has a certain reconstructed value its neighbors should not differ much from it, the potential ''U(__x__)'' can be chosen proportional to the difference between each neighboring voxel's value and that of the voxel in question.
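A minimal sketch of such a smoothness potential (our own quadratic choice on a 1-D voxel row; many other potentials are used in practice):

```python
import numpy as np

# Quadratic smoothness potential: U(x) sums the squared differences
# between neighboring voxels, so p(x) = exp(-gamma U(x)) assigns
# higher prior probability to smoother distributions.
def gibbs_potential(x):
    return np.sum(np.diff(x) ** 2)       # neighbor differences, squared

smooth = np.array([1.0, 1.1, 1.2, 1.3])
rough = np.array([1.0, 2.0, 0.5, 3.0])

gamma = 0.5
print(np.exp(-gamma * gibbs_potential(smooth)))   # close to 1
print(np.exp(-gamma * gibbs_potential(rough)))    # much smaller
```

The hyperparameter ''gamma'' controls how strongly the prior penalizes rough solutions relative to the likelihood term.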
!! Maximum-Likelihood Expectation Maximisation (ML-EM) algorithm
An extension of the Maximum-Likelihood method is the iterative Maximum-Likelihood Expectation Maximisation method, which introduces virtual (latent) random variables into the modelling; this increases modeling accuracy while simplifying the maximisation procedure. The ML-EM method can also involve priors, in which case it is called the MAP-EM method.
As the ML-EM method is a fundamental tool in medical imaging, we will go into detail in the ((Az ML-EM algoritmus|next section)).
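For orientation before that section, the standard Poisson ML-EM multiplicative update can be sketched as follows (toy 2×2 system matrix and noiseless data of our own; the derivation itself is deferred to the linked page):

```python
import numpy as np

# Standard Poisson ML-EM update:  x <- x / (A^T 1) * A^T ( y / (A x) ).
# Toy setup: hypothetical 2x2 system matrix, noiseless data A @ [100, 50].
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
y = np.array([90.0, 65.0])          # = A @ [100, 50]

x = np.ones(2)                      # any positive initial guess works
sens = A.T @ np.ones(len(y))        # sensitivity image A^T 1
for _ in range(200):
    x = x / sens * (A.T @ (y / (A @ x)))
print(x)                            # converges toward [100, 50]
```

Note the multiplicative form: iterates stay nonnegative automatically, one of the practical attractions of ML-EM in emission tomography.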