Maximum Likelihood Estimation
As before, we begin with a sample $X = (X_1, \ldots, X_n)$ of random variables chosen according to one of a family of probabilities $P_\theta$. In addition, $f(x|\theta)$, $x = (x_1, \ldots, x_n)$, will be used to denote the density function for the data when $\theta$ is the true state of nature.
Definition 1. The likelihood function is the density function regarded as a function of $\theta$:
$$L(\theta|x) = f(x|\theta), \quad \theta \in \Theta.$$
The maximum likelihood estimator (MLE) is
$$\hat\theta(x) = \arg\max_{\theta} L(\theta|x).$$
Note that if $\hat\theta(x)$ is a maximum likelihood estimator for $\theta$, then $g(\hat\theta(x))$ is a maximum likelihood estimator for $g(\theta)$. For example, if $\theta$ is a parameter for the variance and $\hat\theta$ is the maximum likelihood estimator, then $\sqrt{\hat\theta}$ is the maximum likelihood estimator for the standard deviation. This flexibility in the estimation criterion is not available in the case of unbiased estimators. Typically, maximizing the score function $\ln L(\theta|x)$, the logarithm of the likelihood, will be easier.
Bernoulli Trials. If the experiment consists of $n$ Bernoulli trials with success probability $p$, then
$$L(p|x) = p^{x_1}(1-p)^{1-x_1} \cdots p^{x_n}(1-p)^{1-x_n} = p^{\sum_i x_i}(1-p)^{n - \sum_i x_i},$$
$$\ln L(p|x) = \ln p \sum_{i=1}^n x_i + \ln(1-p)\Big(n - \sum_{i=1}^n x_i\Big),$$
$$\frac{\partial}{\partial p}\ln L(p|x) = \frac{1}{p}\sum_{i=1}^n x_i - \frac{1}{1-p}\Big(n - \sum_{i=1}^n x_i\Big).$$
This equals zero when $p = \bar{x}$. Check that this is a maximum. Thus,
$$\hat{p}(x) = \bar{x}.$$
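As a numerical check of this derivation, the following short Python sketch (assuming NumPy and SciPy are available; the setup is illustrative, not from the notes) compares the closed-form MLE $\hat p = \bar x$ with a direct numerical maximization of the log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=200)   # n = 200 Bernoulli(p = 0.3) trials

def neg_log_likelihood(p, x):
    # -ln L(p|x) = -[ (sum x_i) ln p + (n - sum x_i) ln(1 - p) ]
    s, n = x.sum(), len(x)
    return -(s * np.log(p) + (n - s) * np.log(1.0 - p))

# Closed-form MLE: the sample mean.
p_hat_closed = x.mean()

# Numerical MLE: minimize the negative log-likelihood on (0, 1).
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6),
                      args=(x,), method="bounded")
p_hat_numeric = res.x

print(p_hat_closed, p_hat_numeric)   # the two agree up to optimizer tolerance
```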
Normal Data. Maximum likelihood estimation can be applied to a vector-valued parameter. For a simple random sample of $n$ normal random variables,
$$L(\mu, \sigma^2|x) = \Big(\frac{1}{\sqrt{2\pi\sigma^2}}\Big)^n \exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2\Big).$$
$$\frac{\partial}{\partial \mu}\ln L(\mu,\sigma^2|x) = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i - \mu).$$
This equals zero when $\mu = \bar{x}$. Because the second partial derivative with respect to $\mu$ is negative, $\hat\mu(x) = \bar{x}$ is the maximum likelihood estimator.
$$\frac{\partial}{\partial \sigma^2}\ln L(\mu,\sigma^2|x) = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i - \mu)^2.$$
Recalling that $\hat\mu(x) = \bar{x}$, we obtain
$$\hat\sigma^2(x) = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2.$$
Note that the maximum likelihood estimator of the variance is a biased estimator.
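A small sketch in the same spirit (again assuming NumPy; the sample values are illustrative) computes the two maximum likelihood estimates for normal data and contrasts the biased MLE of the variance with the usual unbiased sample variance.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=50)   # n = 50 normal observations

n = len(x)
mu_hat = x.mean()                                       # MLE of the mean
sigma2_mle = ((x - mu_hat) ** 2).sum() / n              # MLE of the variance (divides by n, biased)
sigma2_unbiased = ((x - mu_hat) ** 2).sum() / (n - 1)   # unbiased estimator divides by n - 1

print(mu_hat, sigma2_mle, sigma2_unbiased)
```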
Linear Regression. Our data are $n$ observations $(x_i, y_i)$, $1 \le i \le n$, with one explanatory variable and one response variable. The model is that
$$y_i = \alpha + \beta x_i + \epsilon_i,$$
where the $\epsilon_i$ are independent mean 0 normal random variables. The unknown variance is $\sigma^2$. The likelihood function
$$L(\alpha, \beta, \sigma^2|y, x) = \Big(\frac{1}{\sqrt{2\pi\sigma^2}}\Big)^n \exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^n \big(y_i - (\alpha + \beta x_i)\big)^2\Big).$$
Maximizing over $\alpha$ and $\beta$ gives
$$\hat\beta = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}, \qquad \hat\alpha = \bar{y} - \hat\beta \bar{x}.$$
These are the maximum likelihood estimators and also the least squares estimators. The predicted value for the response variable is
$$\hat{y}_i = \hat\alpha + \hat\beta x_i.$$
The maximum likelihood estimator for the variance is
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (y_i - \hat{y}_i)^2.$$
The unbiased estimator is
$$s^2 = \frac{1}{n-2}\sum_{i=1}^n (y_i - \hat{y}_i)^2.$$
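The formulas above translate directly into code. The following sketch (assuming NumPy; the simulated data and parameter values are ours, not prescribed by the notes) computes $\hat\alpha$, $\hat\beta$, the fitted values, the biased MLE of the variance, and the unbiased estimator that divides by $n-2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(0, 10, size=n)
y = 1.5 + 0.7 * x + rng.normal(scale=1.0, size=n)   # y_i = alpha + beta x_i + eps_i

x_bar, y_bar = x.mean(), y.mean()
beta_hat = ((x - x_bar) * (y - y_bar)).sum() / ((x - x_bar) ** 2).sum()
alpha_hat = y_bar - beta_hat * x_bar

y_fit = alpha_hat + beta_hat * x           # predicted values
resid = y - y_fit
sigma2_mle = (resid ** 2).sum() / n        # MLE of the variance (biased)
s2 = (resid ** 2).sum() / (n - 2)          # unbiased estimator

print(alpha_hat, beta_hat, sigma2_mle, s2)
```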
Asymptotic Properties
Much of the attraction of maximum likelihood estimators is based on their properties for a large sample size.
Consistency. If $\theta_0$ is the true state of nature, then
$$L(\theta_0|X) > L(\theta|X) \quad \text{if and only if} \quad \frac{1}{n}\sum_{i=1}^n \ln\frac{f(X_i|\theta_0)}{f(X_i|\theta)} > 0.$$
By the strong law of large numbers, this sum converges to
$$E_{\theta_0}\Big[\ln\frac{f(X_1|\theta_0)}{f(X_1|\theta)}\Big],$$
which is greater than 0 for $\theta \ne \theta_0$. From this, we obtain
$$\hat\theta_n(X) \to \theta_0 \quad \text{as } n \to \infty.$$
We call this property of the estimator consistency.
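Consistency can be seen in simulation. The sketch below (illustrative only, assuming NumPy) tracks the Bernoulli MLE $\hat p_n = \bar x_n$ as the sample grows and shows it settling near the true value $p_0$.

```python
import numpy as np

rng = np.random.default_rng(3)
p0 = 0.3
x = rng.binomial(1, p0, size=100_000)

# Running MLE p_hat_n = (x_1 + ... + x_n) / n for increasing n.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, x[:n].mean())   # approaches p0 = 0.3 as n grows
```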
Asymptotic Normality and Efficiency. Under some assumptions that are meant to ensure some regularity, a central limit theorem holds. Here we have that
$$\sqrt{n}\,\big(\hat\theta_n(X) - \theta_0\big)$$
converges in distribution as $n \to \infty$ to a normal random variable with mean 0 and variance $1/I(\theta_0)$, the reciprocal of the Fisher information for one observation. Thus,
$$\mathrm{Var}_{\theta_0}\big(\hat\theta_n(X)\big) \approx \frac{1}{n\,I(\theta_0)},$$
the lowest possible variance under the Cramér-Rao lower bound. This property is called asymptotic efficiency.
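To illustrate, for Bernoulli trials the Fisher information for one observation is $I(p) = 1/(p(1-p))$, so the variance of $\hat p_n$ should be roughly $p(1-p)/n$. The sketch below (assuming NumPy; the simulation setup is ours, not from the notes) compares the empirical variance of many simulated MLEs with this Cramér-Rao benchmark.

```python
import numpy as np

rng = np.random.default_rng(4)
p0, n, reps = 0.3, 500, 10_000

# Simulate `reps` independent samples of size n and record the MLE for each.
samples = rng.binomial(1, p0, size=(reps, n))
p_hats = samples.mean(axis=1)

empirical_var = p_hats.var()
cramer_rao = p0 * (1 - p0) / n   # 1 / (n I(p0)) with I(p) = 1 / (p(1 - p))

print(empirical_var, cramer_rao)   # the two are close for large n
```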
Properties of the log-likelihood surface. For large sample sizes, the variance of an MLE of a single unknown parameter is approximately the reciprocal of the Fisher information
$$I(\theta) = -E\Big[\frac{\partial^2}{\partial\theta^2}\ln L(\theta|X)\Big].$$
Thus, the estimate of the variance given data $x$ is
$$\hat\sigma^2 = -\Big(\frac{\partial^2}{\partial\theta^2}\ln L(\hat\theta|x)\Big)^{-1},$$
the negative reciprocal of the second derivative, also known as the curvature, of the log-likelihood function evaluated at the MLE.
If the curvature is small, then the likelihood surface is flat around its maximum value and the variance is large. If the curvature is large and thus the variance is small, the likelihood is strongly curved at the maximum. For a multidimensional parameter space $\theta = (\theta_1, \ldots, \theta_d)$, the Fisher information $I(\theta)$ is a matrix whose $ij$-th entry is
$$I(\theta)_{ij} = E_\theta\Big[\frac{\partial}{\partial\theta_i}\ln f(X|\theta)\,\frac{\partial}{\partial\theta_j}\ln f(X|\theta)\Big] = -E_\theta\Big[\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\ln f(X|\theta)\Big].$$
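As a concrete single-parameter check (assuming NumPy; the finite-difference step is an arbitrary illustrative choice), the sketch below estimates the variance of the Bernoulli MLE by taking the negative reciprocal of a numerical second derivative of the log-likelihood at $\hat p$, and compares it with the analytic value $\hat p(1-\hat p)/n$.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.binomial(1, 0.3, size=1_000)
n, s = len(x), x.sum()

def log_lik(p):
    # ln L(p|x) for Bernoulli data
    return s * np.log(p) + (n - s) * np.log(1.0 - p)

p_hat = x.mean()                    # MLE
h = 1e-5                            # finite-difference step (illustrative)
# Central-difference approximation to the curvature d^2/dp^2 ln L at p_hat.
curvature = (log_lik(p_hat + h) - 2 * log_lik(p_hat) + log_lik(p_hat - h)) / h**2

var_from_curvature = -1.0 / curvature
var_analytic = p_hat * (1 - p_hat) / n   # 1 / (n I(p_hat))

print(var_from_curvature, var_analytic)  # nearly identical
```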