Date: Thu, 18 July 2024 21:13:45 +00:00
Mime-Version: 1.0 (Produced by Tiki)
Content-Type: application/x-tikiwiki;
pagename=Example%20to%20a%20Maximum%20Likelihood%20estimation;
flags="";
author=david.legrady;
version=2;
lastmodified=1329572515;
author_id=188.142.216.4;
summary="";
hits=3504;
description="";
charset=utf-8
Content-Transfer-Encoding: binary
!Example of a Maximum Likelihood estimation
Let us take ''N'' measured datapoints y{SUB()}i{SUB} that are independent but were produced by the same process, which can be described by a normal distribution {EQUATION(size="70")}$ \wp \left ( y_{i}\mid\mu ,\sigma \right) =\frac{1}{\sqrt{2\pi \sigma ^{2}}} e^{-\frac{\left ( y_{i}-\mu \right )^{2}}{2\sigma^{2}}} {EQUATION} with expected value {EQUATION(size="70")}$ \mu {EQUATION} and standard deviation {EQUATION(size="70")}$ \sigma {EQUATION}.
The joint probability density function of the independent y{SUB()}i{SUB} events is the product of the individual pdfs:
{EQUATION(size="70")}$ \wp \left ( \mathbf{y}\mid\mu ,\sigma \right) =\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi \sigma ^{2}}} e^{-\frac{\left ( y_{i}-\mu \right )^{2}}{2\sigma^{2}}}=\left [\frac{1}{\sqrt{2\pi \sigma ^{2}}} \right ]^{N} e^{-\sum_{i=1}^{N}\frac{\left ( y_{i}-\mu \right )^{2}}{2\sigma^{2}}} {EQUATION}
The log-likelihood function reads:
{EQUATION(size="70")}$ \ln \wp \left ( \mathbf{y}\mid\mu ,\sigma \right) =\frac{N}{2}\ln \left ( \frac{1}{2\pi \sigma ^{2}} \right )-\sum_{i=1}^{N}\frac{\left ( y_{i}-\mu \right )^{2}}{2\sigma^{2}} {EQUATION}
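The equivalence of the summed per-point log-densities and the closed-form expression above can be checked numerically. The following sketch uses only the standard library; the sample values and the chosen evaluation point are illustrative, not taken from the text.

```python
import math
import random

# Illustrative data: N independent draws from a normal distribution
random.seed(0)
mu_true, sigma_true = 2.0, 0.5
y = [random.gauss(mu_true, sigma_true) for _ in range(100)]
N = len(y)

# Evaluate the log-likelihood at some trial parameters (mu, sigma)
mu, sigma = 2.0, 0.5

# Closed-form expression: (N/2) ln(1/(2 pi sigma^2)) - sum (y_i - mu)^2 / (2 sigma^2)
loglik_closed = (N / 2) * math.log(1 / (2 * math.pi * sigma**2)) \
    - sum((yi - mu)**2 for yi in y) / (2 * sigma**2)

# Direct sum of the individual log-densities ln p(y_i | mu, sigma)
loglik_direct = sum(
    math.log(1 / math.sqrt(2 * math.pi * sigma**2))
    - (yi - mu)**2 / (2 * sigma**2)
    for yi in y
)

print(abs(loglik_closed - loglik_direct))  # should be numerically negligible
```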
We now look for the maximum by differentiating and setting the result equal to zero:
{EQUATION(size="70")}$ \partial _{\mu } \ln \wp \left ( \mathbf{y}\mid\mu ,\sigma \right) =\sum_{i=1}^{N}\frac{\left ( y_{i}-\mu \right )}{\sigma^{2}}=0 {EQUATION}
from which our estimate for {EQUATION(size="70")}$ \mu {EQUATION} is:
{EQUATION(size="70")}$ \widehat{\mu}=\sum_{i=1}^{N}\frac{ y_{i} }{N} {EQUATION}
the average of the measured data. The partial derivative with respect to {EQUATION(size="70")}$ \sigma {EQUATION}:
{EQUATION(size="70")}$ \partial _{\sigma }\ln \wp \left ( \mathbf{y}\mid\mu ,\sigma \right) =-\frac{N}{\sigma}+\sum_{i=1}^{N}\frac{\left ( y_{i}-\mu \right )^{2}}{\sigma^{3}}=0{EQUATION}
from that:
{EQUATION(size="70")}$ \widehat{\sigma}^{2}=\sum_{i=1}^{N}\frac{\left ( y_{i}-\mu \right )^{2}}{N} {EQUATION}
For completeness we mention that in the formula for the variance we still have an unknown {EQUATION(size="70")}$ \mu {EQUATION}, and we may be tempted to estimate it by {EQUATION(size="70")}$\widehat{\mu} {EQUATION}. As is well known, this would result in a biased estimate, which can be corrected by a factor of ''N/(N-1)''.
Now we proceed with estimation theory and ((Statisztikai alapú képrekonstrukciós stratégiák|#Statisztikai alapú képrekonstrukciós stratégiák|the Bayesian)) estimators.