The probability of a sequence of observations from a probability distribution depends on the parameters of that distribution. For example, given an unbiased coin with equal probability of landing heads and tails, what is the probability of observing the sequence "HHTH"? Since each coin toss is an independent Bernoulli trial, the probability of the observation is 0.5^4 = 0.0625. But what if the coin were biased, with probability 0.7 of landing heads (and 0.3 of landing tails)? The answer would then be 0.7^3 × 0.3 ≈ 0.103.
Thus the probability of the observation is a function of the probability of heads (or tails). Let's denote that probability as P(O; p), where 'p' is the probability of heads:
P(O=HHTH; p=0.5) = 0.0625 and P(O=HHTH; p=0.7) = 0.103.
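The two probabilities above are easy to check with a few lines of Python (a small sketch; the function name `likelihood` is just illustrative):

```python
# Likelihood of observing the sequence HHTH: three heads and one tail,
# so P(O; p) = p * p * (1 - p) * p = p^3 * (1 - p).
def likelihood(p):
    return p**3 * (1 - p)

print(likelihood(0.5))  # 0.0625
print(likelihood(0.7))  # ~0.1029, which rounds to 0.103
```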
In more common scenarios, however, we are given the observation and need to estimate the parameters. The aim of maximum likelihood estimation is to find the parameter value(s) that make the observed data most likely; in other words, given the observation 'O', what value of 'p' maximizes P(O; p)?
Continuing with our coin tossing example, let's suppose we do not know the value of 'p' and have to estimate it given the observation 'HHTH'. Since the probability of heads is 'p', the probability of tails is (1 − p). Because the tosses are independent Bernoulli trials, the likelihood of the observed sequence as a function of 'p' is:

L(p) = P(O=HHTH; p) = p · p · (1 − p) · p = p^3 (1 − p)
Our goal is to maximize this quantity.
There are many possible ways the above function can be maximized. If the function is too complex, we normally use numerical algorithms to estimate the parameter values. But for simple functions like the one above, we can find a solution by setting the first derivative of the likelihood w.r.t. the unknown parameters to zero and solving. Note that the function may have local minima and maxima, in which case the solution obtained may not be the global optimum.
Since we are dealing with probabilities (real numbers between 0 and 1), raising them to exponents and taking products can result in floating-point underflow. We can avoid that by working with the log likelihood instead, i.e.:

log L(p) = 3 log(p) + log(1 − p)
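The numerical motivation is easy to demonstrate: a long product of small probabilities underflows to zero in floating point, while the sum of their logs stays perfectly representable (the 1e-5 values below are arbitrary illustration):

```python
import math

probs = [1e-5] * 100   # 100 small probabilities (arbitrary illustration values)

product = 1.0
for q in probs:
    product *= q       # 1e-500 is below the smallest positive float
print(product)         # 0.0 -- underflow

log_sum = sum(math.log(q) for q in probs)
print(log_sum)         # about -1151.3, no underflow
```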
Taking the derivative of the above quantity w.r.t. 'p' and setting it to zero gives us:

d/dp log L(p) = 3/p − 1/(1 − p) = 0

Solving for p gives p = 3/4 = 0.75. The answer seems pretty intuitive, and one might wonder why we went through the pain of computing derivatives and roots. True. But the point was to illustrate that the approach is generic.
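The closed-form answer can also be cross-checked numerically, for instance with a simple grid search over p (a sketch, not a production optimizer):

```python
import math

def log_likelihood(p):
    # log L(p) = 3*log(p) + log(1 - p) for the sequence HHTH
    return 3 * math.log(p) + math.log(1 - p)

# Evaluate on a fine grid over (0, 1) and take the argmax.
grid = [i / 10000 for i in range(1, 10000)]
best_p = max(grid, key=log_likelihood)
print(best_p)  # 0.75
```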
Let's look at a more complex example with the normal distribution.
Given a sequence of independent and identically distributed observations X1, X2, ..., Xn from a normal distribution with unknown parameters μ and σ, estimate the parameters so as to fit the distribution to the observations.
The Gaussian distribution is defined as:

f(x; μ, σ) = 1/(σ √(2π)) · exp(−(x − μ)^2 / (2σ^2))
Hence the probability of the sequence of observations is the product of the probabilities of the individual observations (since the observations are i.i.d.), thus:

L(μ, σ) = ∏_{i=1}^{n} f(Xi; μ, σ) = (1/(σ √(2π)))^n · exp(−Σ_{i=1}^{n} (Xi − μ)^2 / (2σ^2))
Taking the log likelihood of the above, we get:

log L(μ, σ) = −n log(σ) − (n/2) log(2π) − Σ_{i=1}^{n} (Xi − μ)^2 / (2σ^2)
To maximize the above function, we use the same technique as earlier, i.e. take the derivatives w.r.t. the unknown parameters and find their roots. Taking the derivative w.r.t. μ we get:

∂/∂μ log L(μ, σ) = Σ_{i=1}^{n} (Xi − μ) / σ^2 = 0

Solving for μ, we get:

μ = (1/n) Σ_{i=1}^{n} Xi
i.e. the mean of the observations. Similarly, taking the derivative w.r.t. σ, we get:

∂/∂σ log L(μ, σ) = −n/σ + Σ_{i=1}^{n} (Xi − μ)^2 / σ^3 = 0

Solving for σ, we get:

σ = √((1/n) Σ_{i=1}^{n} (Xi − μ)^2)

which is the standard deviation of the observations. (Note that this maximum likelihood estimator divides by n rather than n − 1, so it is slightly biased for small samples.)
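The two closed-form estimators can be verified on simulated data (a sketch; the true parameter values 5.0 and 2.0 are arbitrary choices for illustration):

```python
import random
import math

random.seed(42)
# Draw i.i.d. samples from N(mu=5, sigma=2); true values are arbitrary.
xs = [random.gauss(5.0, 2.0) for _ in range(100_000)]

n = len(xs)
mu_hat = sum(xs) / n  # MLE of the mean: the sample average
# MLE of sigma: root mean squared deviation (divides by n, not n - 1)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in xs) / n)

print(mu_hat, sigma_hat)  # close to 5.0 and 2.0
```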