Likelihood
Edward A. Roualdes
Introduction
The likelihood method estimates parameters using random variables from an assumed distribution. To estimate the parameters, standard methods of calculus are employed (Apex Calculus I is a great reference if you need an Open Educational Resource (free) to review derivatives, the calculus behind maximization and minimization), although derivatives are not taken with respect to standard variable names, e.g. $x$. The goal is to find the most likely values of the parameter(s) given a set of random variables. As such, the best guess of the parameter(s) derived from this method is called the maximum likelihood estimate.
The logic underlying the likelihood method goes like this. To estimate an unknown parameter $\theta$ given a set of random variables $X_1, \ldots, X_N$, first set up the likelihood function. Then, by treating the parameter as the argument to the likelihood function, find the value that maximizes the likelihood function. The solution, denoted $\hat{\theta}$, will be a function of the random variables and is called the maximum likelihood estimator. Often, this is written as $\hat{\theta} = \text{argmax}_{\theta} \, L(\theta | X_1, \ldots, X_N)$ to denote that the estimate is the maximal argument to the likelihood function for the random variables $X_1, \ldots, X_N$. (Don't let the new notation stand in your way: $\text{argmax}_{\theta}$ simply picks out the value of $\theta$ at which the likelihood function attains its maximum.) The calculus is then left to the practitioner.
This page aims to provide a short introduction to the intuition behind the likelihood function and to show a common analytical strategy for finding the maximum likelihood estimator. (Numerical methods on a computer are often employed for more complex problems, where analytical solutions are too difficult.)
Intuition
The likelihood function is defined relative to the density function of the distribution that is assumed to have generated the random variables. The likelihood is defined to be the product of the density function evaluated at each random variable. We think of the likelihood function as a function of the parameter(s) of interest, here generalized to $\theta$, given the random variables $X_1, \ldots, X_N$.
The intuition behind the product of the density function comes from (the assumed) independence of the random variables. Imagine you have observations from a fair coin, H, H, T, H. The probability associated with this outcome is $0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 = 0.5^4$.
Now, imagine that you don't know that the coin is fair; instead, the probability of heads is just $p$. (You'd be on the right track if you're imagining that in four flips, three heads and one tail might suggest that $p = 3/4$.) The probability is rewritten as

$$p \cdot p \cdot (1 - p) \cdot p.$$
Next, write this probability using the density function of a Bernoulli distribution, $f(x | p) = p^x (1 - p)^{1 - x}$. (See the notes on the Bernoulli distribution if you need a quick refresher.) Since we map heads to $1$ and tails to $0$, we have

$$f(1 | p) \cdot f(1 | p) \cdot f(0 | p) \cdot f(1 | p) = p^3 (1 - p).$$
The last step in understanding the setup of the likelihood function is to recognize that until we observe, say, H, H, T, H, we might as well treat these observations as random variables, $X_1, X_2, X_3, X_4$. In this case the functional form is

$$f(X_1 | p) \cdot f(X_2 | p) \cdot f(X_3 | p) \cdot f(X_4 | p) = \prod_{n=1}^{4} p^{X_n} (1 - p)^{1 - X_n}.$$
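To make the product concrete, here is a minimal Python sketch that multiplies the Bernoulli density across the sample H, H, T, H for a few candidate values of $p$. The helper names `bernoulli_density` and `likelihood` are our own, not part of any library.

```python
import numpy as np

def bernoulli_density(x, p):
    """Bernoulli density f(x | p) = p**x * (1 - p)**(1 - x) for x in {0, 1}."""
    return p**x * (1 - p)**(1 - x)

def likelihood(p, data):
    """Product of the density evaluated at each observation."""
    return np.prod([bernoulli_density(x, p) for x in data])

data = [1, 1, 0, 1]   # H, H, T, H mapped to 1s and 0s
for p in (0.25, 0.50, 0.75):
    print(f"L({p} | data) = {likelihood(p, data):.4f}")
```

For this particular sample, the printed likelihood is largest at $p = 0.75$, which previews the maximization idea below.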
The discussion above captures the intuition behind the setup of the likelihood function. From here the main differences are notational and a conceptual understanding of how we can treat this product as a function of the unknown parameter $p$.
To get from $\prod_{n=1}^{4} f(X_n | p)$ to the formal definition of the likelihood function, we generalize the unknown parameter to $\theta$, thinking that this method should apply to any distribution's density function, and we use product notation, which is analogous to summation notation, to expand the sample to any number of random variables

$$L(\theta | X_1, \ldots, X_N) = \prod_{n=1}^{N} f(X_n | \theta).$$
Once we observe the outcomes of these random variables, their values are bound to specific numbers. Even so, the parameter $\theta$ is still unknown. The conceptual jump of the likelihood function is to treat the product $\prod_{n=1}^{N} f(X_n | \theta)$ as a function of the unknown parameter $\theta$.
By putting $\theta$ first, the notation $L(\theta | X_1, \ldots, X_N)$ implies that the argument to the likelihood function is the unknown parameter and the random variables are held fixed at whatever values they might eventually be bound to. (If more than one value of $\theta$ maximizes the likelihood function for the same sample, the parameter is called unidentifiable.) The specific value of $\theta$ that maximizes the likelihood function is the best guess of the unknown parameter. The value $\hat{\theta}$ is called the maximum likelihood estimate of $\theta$, where the hat over $\theta$ is used to denote a best guess of the unknown value of $\theta$.
In general, $\hat{\theta}$ will be some function of the random variables. Once the random variables are observed and thus bound to some values, we can plug these data into the function and get out an actual number that represents a best guess of the unknown parameter $\theta$.
To bring the general likelihood function back down to earth, consider the following plot depicting the scenario introduced above: the observations H, H, T, H from a Bernoulli distribution with unknown population parameter $p$. From exactly these four observations, (H, H, T, H) represented as (1, 1, 0, 1), the argument that maximizes the likelihood function for random variables from the Bernoulli distribution is $\hat{p} = 3/4 = 0.75$.
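As a rough numerical check of that claim, the sketch below evaluates the Bernoulli likelihood for the sample (1, 1, 0, 1) over a grid of candidate values of $p$ and reports the grid point with the largest likelihood. The helper name `likelihood` and the grid resolution are our own choices for illustration.

```python
import numpy as np

def likelihood(p, data):
    """Bernoulli likelihood L(p | data) = prod of p**x * (1 - p)**(1 - x)."""
    data = np.asarray(data)
    return np.prod(p**data * (1 - p)**(1 - data))

data = [1, 1, 0, 1]                      # H, H, T, H
grid = np.linspace(0.001, 0.999, 999)    # candidate values of p
values = np.array([likelihood(p, data) for p in grid])

p_hat = grid[values.argmax()]
print(p_hat)   # approximately 0.75
```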
[Plot: the likelihood function $L(p | 1, 1, 0, 1)$ for the Bernoulli sample, maximized at $p = 0.75$.]

Maximization
Suppose you have a sample of $N$ random variables, $X_1, \ldots, X_N$, from the Rayleigh distribution. The Rayleigh distribution depends on the parameter $\sigma$, which can be estimated from some data (observed random variables). The density function of the Rayleigh distribution is

$$f(x | \sigma) = \frac{x}{\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right)$$

for $x \geq 0$ and $\sigma > 0$.
To find the maximum likelihood estimate, start by writing out the likelihood function.

$$L(\sigma | X_1, \ldots, X_N) = \prod_{n=1}^{N} \frac{X_n}{\sigma^2} \exp\left(-\frac{X_n^2}{2\sigma^2}\right)$$
The goal is to find the value $\hat{\sigma}$ that maximizes the likelihood function $L(\sigma | X_1, \ldots, X_N)$. Both humans and computers have difficulty working with products and exponents of functions. Therefore, it is common to take the natural log of the likelihood function. This is so common that the log of the likelihood function is often just referred to as the log-likelihood function. We'll denote this function $\ell(\sigma | X_1, \ldots, X_N) = \log L(\sigma | X_1, \ldots, X_N)$.
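One practical reason computers prefer the log scale is floating point: a product of many densities quickly underflows to zero, while the sum of their logs stays finite. The sketch below illustrates this with a simulated Rayleigh sample; the sample size, seed, and the helper name `density` are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
data = rng.rayleigh(scale=sigma, size=2000)   # simulated Rayleigh sample

def density(x, sigma):
    """Rayleigh density f(x | sigma) = x / sigma^2 * exp(-x^2 / (2 sigma^2))."""
    return x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

# The product of 2000 densities underflows to 0.0 in floating point ...
L = np.prod(density(data, sigma))

# ... while the log-likelihood, a sum of logs, stays finite.
ell = np.sum(np.log(density(data, sigma)))

print(L, ell)
```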
Recall from calculus that we can find local maxima/minima by differentiating a function, setting the derivative equal to zero, and solving for the variable of interest. In this scenario, the variable of interest is the unknown parameter $\sigma$.
It is helpful to simplify the log-likelihood function before differentiating. The log-likelihood simplifies as

$$\ell(\sigma | X_1, \ldots, X_N) = \sum_{n=1}^{N} \left[ \log X_n - 2 \log \sigma - \frac{X_n^2}{2\sigma^2} \right] = \sum_{n=1}^{N} \log X_n - 2N \log \sigma - \frac{1}{2\sigma^2} \sum_{n=1}^{N} X_n^2.$$
Proceed by taking the derivative, with respect to the unknown parameter, of the simplified log-likelihood.

$$\frac{d}{d\sigma} \ell(\sigma | X_1, \ldots, X_N) = -\frac{2N}{\sigma} + \frac{1}{\sigma^3} \sum_{n=1}^{N} X_n^2$$
Set the derivative equal to zero and solve for the unknown parameter.

$$0 = -\frac{2N}{\sigma} + \frac{1}{\sigma^3} \sum_{n=1}^{N} X_n^2$$
Collecting $\sigma$s on the left hand side yields

$$2N\sigma^2 = \sum_{n=1}^{N} X_n^2.$$
Manipulate the expression until you find a solution for the parameter of interest. At this point, we put a hat over the parameter to recognize that it is our best guess of the parameter given the set of random variables $X_1, \ldots, X_N$.
The maximum likelihood estimate

$$\hat{\sigma} = \sqrt{\frac{1}{2N} \sum_{n=1}^{N} X_n^2}$$

is the final solution. With data from a Rayleigh distribution, we could use the function defined by $\hat{\sigma}$ to plug in the observed values for the random variables to find an estimate for the parameter $\sigma$.
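As a sanity check on the closed-form estimator, a short sketch can compute $\hat{\sigma}$ from simulated Rayleigh data and compare it against a brute-force maximization of the log-likelihood over a grid. The sample size, seed, and grid resolution below are arbitrary choices, not part of the method itself.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_true = 2.0
x = rng.rayleigh(scale=sigma_true, size=5000)   # simulated Rayleigh data

# Closed-form maximum likelihood estimate derived above.
sigma_hat = np.sqrt(np.sum(x**2) / (2 * len(x)))

# Cross-check: maximize the log-likelihood over a grid of candidate sigmas.
def log_likelihood(sigma, x):
    return np.sum(np.log(x) - 2 * np.log(sigma) - x**2 / (2 * sigma**2))

grid = np.linspace(0.5, 4.0, 3501)
sigma_grid = grid[np.argmax([log_likelihood(s, x) for s in grid])]

print(sigma_hat, sigma_grid)   # both should be close to sigma_true = 2.0
```

Both answers land near the true value, and they agree with each other, which is exactly what the calculus above promises.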
This work is licensed under the Creative Commons Attribution 4.0 International License.