In the last post, we saw how to sample random values from a target probability distribution (both discrete and continuous) using techniques like the inverse CDF method, the transformation method and so on. All of the earlier discussed methods fall under the category of Monte Carlo techniques. In this post, we will discuss some of the more advanced Monte Carlo techniques and their importance in the area of Machine Learning. We will cover the following techniques: Rejection Sampling, Importance Sampling and Markov Chain Monte Carlo (the Metropolis-Hastings algorithm and the Gibbs sampling algorithm).

Many machine learning algorithms require computing the expectation of a function over an exponential number of different configurations of the input variables, for example:

E[f(X)] = ∫ f(x)P(x)dx

or, for discrete distributions,

E[f(X)] = Σ_x f(x)P(x)

where P(X) is the probability distribution of the input variable X. The expectation is over the entire domain of X, which can be prohibitively expensive to compute for problems involving input pixels in an image, bags of words in text documents, etc. Can we instead sample N values from the distribution P(X) such that the expectation computed with the sampled values of X closely matches the true expectation?

Suppose we sample x_i ~ P(X) for i = 1 to N; then we can approximate the expectation as follows:

E[f(X)] ≈ (1/N) Σ_i f(x_i)
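As a quick sanity check, here is a minimal R sketch of this idea, with the arbitrary illustrative choices f(x) = x² and P(X) = N(0, 1), for which the true expectation E[X²] = 1:

```r
# Approximate E[f(X)] by averaging f over N draws from P(X).
set.seed(42)
N <- 100000
x <- rnorm(N)            # x_i ~ P(X) = N(0, 1)
estimate <- mean(x^2)    # (1/N) * sum_i f(x_i), with f(x) = x^2
print(estimate)          # close to the true value E[X^2] = 1
```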

Similarly, in Bayesian inference, we may need to compute a posterior probability as:

P(X|D) = P(D|X)P(X) / P(D)

where the evidence is

P(D) = ∫ P(D|X)P(X)dX   (or   Σ_X P(D|X)P(X)   in the discrete case)

which requires computing the quantity P(D). Again we can sample N values of X from the prior, i.e. x_1, x_2, ..., x_N ~ P(X), and use these values to approximate the above quantity as:

P(D) ≈ (1/N) Σ_i P(D|x_i)
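A minimal R sketch of this kind of approximation, under an assumed toy model (prior X ~ N(0, 1) and likelihood P(D|x) = N(d; x, 1) for a single observed datum d = 0.5, all chosen only for illustration); for this model the evidence has the closed form N(d; 0, √2), so we can compare:

```r
# Monte Carlo estimate of the evidence P(D) = integral of P(D|x) P(x) dx,
# obtained by sampling from the prior and averaging the likelihood.
set.seed(0)
d <- 0.5
x <- rnorm(100000)                   # x_i ~ prior N(0, 1)
evidence <- mean(dnorm(d, x, 1))     # (1/N) * sum_i P(D | x_i)
exact <- dnorm(d, 0, sqrt(2))        # closed form for this toy model
print(c(evidence, exact))            # the two values are very close
```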

Similarly, in problems like marginalization, optimization using maximum likelihood estimates, etc., we can use values sampled from the input distribution to approximate quantities that are hard (expensive) to compute exactly.

Note that if we instead sampled naively, e.g. uniformly over the domain using a built-in library, the sampled values would lose the information about the original distribution, and the approximate values of the computed quantities would be way off. With Monte Carlo sampling from the distribution itself, no such sacrifice in the model or in the solution is made.

#### Rejection Sampling

In rejection sampling, we want to sample from a target probability density function p(x), given that we can sample from a proposal density q(x) using some inbuilt library. The target density p(x) can be evaluated pointwise but is hard to sample from directly. The idea is that if M*q(x) forms an envelope over p(x) for some M > 1, i.e.

p(x) ≤ M*q(x) for all x

then if we sample some x_i from q(x) and draw u ~ U(0,1), we accept x_i if u*M*q(x_i) lies below p(x_i), else we reject it, i.e.

if u < p(x_i) / (M*q(x_i)), accept x_i, else reject x_i

Normally, taking q(x) to be a uniform or a Gaussian distribution should work. The probability that a sample is accepted is proportional to 1/M, hence choosing M to be very high will reduce the probability of a sample getting accepted.
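To see the effect of M concretely, here is a small R sketch; it uses a normalized version of the example mixture below (weights 0.5 on each component, an illustrative assumption), so the overall acceptance rate should be ∫p(x)dx / M = 1/M:

```r
# Empirical acceptance rate of rejection sampling is (integral of p) / M.
set.seed(1)
p <- function(x) 0.5 * dnorm(x, 1, 2) + 0.5 * dnorm(x, 10, 3)  # normalized mixture
M <- 5
x <- rnorm(100000, 5, 5)                     # proposals from q(x) = N(5, 5)
u <- runif(100000)
accept.rate <- mean(u < p(x) / (M * dnorm(x, 5, 5)))
print(accept.rate)                           # close to 1/M = 0.2
```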

In the above diagram, the distribution shown in green is our target distribution p(x), from which we need to sample, and the red plot shows the envelope M*q(x) built from the distribution q(x) that is available to sample from.

p(x)=N(x; 1, 2)+N(x; 10, 3) and q(x)=N(x; 5, 5)

In the above diagram, the envelope (shown in red) is 5*q(x), i.e. M=5, in our algorithm.

Below is the R code that does the sampling given the number of samples to generate.

```r
# Target p(x) = N(x; 1, 2) + N(x; 10, 3), as defined above
p <- function(x) dnorm(x, 1, 2) + dnorm(x, 10, 3)

rejection.sample <- function(n) {
  i <- 0
  out <- c()
  while (i < n) {
    xi <- rnorm(1, 5, 5)                        # propose xi ~ q(x) = N(5, 5)
    u <- runif(1)
    if (u < (p(xi) / (5 * dnorm(xi, 5, 5)))) {  # accept if u < p(xi)/(M*q(xi)), M = 5
      out <- c(out, xi)
      i <- i + 1
    }
  }
  out
}
```

We generated 100K samples with the above rejection sampling function and plotted the histogram of the generated samples:

As you can see, the shape of the histogram of the sampled data resembles the probability density p(x) (shown in green above), with peaks at around 1 and 10 in both.

Since it can be difficult to choose the density q(x) and the constant M such that M*q(x) forms an envelope over the target density p(x), while also keeping M small enough that the probability of rejection stays low, there is an adaptive version of rejection sampling.

In adaptive rejection sampling we update the envelope depending on whether we accept or reject in each iteration. We start with some normal distribution q(x) as above and choose an arbitrarily large value of M.

Then we randomly sample a few x_i's from q(x). For each x_i, we check whether it is accepted or rejected as above. If it is rejected, we construct a tangent to log p(x) at x_i. The new envelope function is then formed from the upper hull of all such tangents, with breakpoints at their intersections. From here on, whenever an x_i is rejected, we again update the envelope by recomputing the tangents and their intersections, and so on. As you can see, this method has drawbacks: it works only if log p(x) is a concave function (i.e. p(x) is log-concave), and the envelope function must be updated, by recomputing all the intersections, every time an x_i is rejected.

#### Importance Sampling

Importance sampling is a Monte Carlo technique used primarily for computing the expectation of a function f(x) under a density p(x), i.e.

E_p[f(X)] = ∫ f(x)p(x)dx

Normally, using a sampling technique as earlier, we would sample x_i from the target distribution p(x) and use the samples to compute the approximate expected value as:

E_p[f(X)] ≈ (1/N) Σ_i f(x_i)

But if the density p(x) is close to zero in the region where f(x) is significant (the tail of a distribution, etc.), then we may not obtain any samples at all from the region of interest, and the approximate expectation will be far from the actual expectation. In such scenarios, we sample from some other known density q(x) which is significant compared to p(x) in the region where f(x) is significant. But to account for the fact that we are sampling from a region where our target density p(x) is low, we assign low weights to these sampled values in the expectation calculation. Conversely, in regions where q(x) is lower than p(x), the weights for the samples should be high.

After we sample x_1, x_2, ..., x_N from q(x), we assign the following weights to the samples:

w_i = p(x_i) / q(x_i)

which obey the above requirements. The new expectation is then computed as follows:

E_p[f(X)] ≈ (1/N) Σ_i w_i f(x_i)

The reason this works is that:

E_p[f(X)] = ∫ f(x)p(x)dx = ∫ f(x) (p(x)/q(x)) q(x)dx = E_q[w(x)f(x)]

where

w(x) = p(x)/q(x)
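Here is a minimal R sketch of the tail situation described above (toy choices for illustration): estimating E_p[f(X)] = P(X > 4) for p = N(0, 1), with f(x) = 1(x > 4). Direct sampling from p almost never lands in the region of interest, while sampling from a shifted proposal q = N(4, 1) and weighting recovers it:

```r
# Importance sampling of a tail probability P(X > 4) under p = N(0, 1).
set.seed(5)
N <- 100000
f <- function(x) as.numeric(x > 4)
x <- rnorm(N, 4, 1)                       # x_i ~ q(x) = N(4, 1): half land in the tail
w <- dnorm(x) / dnorm(x, 4, 1)            # w_i = p(x_i) / q(x_i)
estimate <- mean(w * f(x))                # (1/N) * sum_i w_i f(x_i)
exact <- pnorm(4, lower.tail = FALSE)     # true tail probability, about 3.2e-05
print(c(estimate, exact))
```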

Although the method is primarily used for computing the expectation of f(x), we can also use it to obtain samples from p(x). To obtain samples, we use the Sampling Importance Resampling (SIR) algorithm. In SIR, the weights are first normalized:

W_i = w_i / Σ_j w_j

Then the x_i's obtained above are resampled with replacement with probabilities W_i. Below is the R code to do SIR. We use the same function p(x) as in rejection sampling and take q(x) to be N(10, 5).

```r
sir <- function(n) {
  x <- rnorm(n, 10, 5)                                         # x_i ~ q(x) = N(10, 5)
  weights <- sapply(x, function(xi) p(xi) / dnorm(xi, 10, 5))  # w_i = p(x_i)/q(x_i)
  probs <- weights / sum(weights)                              # normalized weights W_i
  sample.custom(n, probs, x)                                   # resample with replacement
}
```

where "sample.custom" is the sampling function for discrete distributions defined in the earlier blog post. Following is the histogram of the obtained samples:

which looks similar to the original distribution p(x), as expected. It is important to choose the known density q(x) carefully. If instead of choosing q(x) to be N(10, 5) we choose it to be N(-5, 5), then we obtain the histogram below.
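For readers who do not have the earlier post's sample.custom at hand, base R's sample() can play the same role; a self-contained sketch of SIR with the same p(x) and q(x) = N(10, 5):

```r
# Self-contained SIR: weight draws from q, then resample with replacement
# using the normalized weights. sample() replaces the custom discrete sampler.
set.seed(3)
p <- function(x) dnorm(x, 1, 2) + dnorm(x, 10, 3)   # target (as above)
n <- 100000
x <- rnorm(n, 10, 5)                                # x_i ~ q(x) = N(10, 5)
w <- p(x) / dnorm(x, 10, 5)                         # importance weights
samples <- sample(x, n, replace = TRUE, prob = w / sum(w))
```

The histogram of `samples` again shows the two modes near 1 and 10.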

#### Markov Chain Monte Carlo (MCMC)

Importance sampling as seen above might work well in lower dimensions for computing the expectation of f(x), but in higher dimensions the regions of interest for f(x) may have even smaller densities, and finding a proposal density q(x) without knowing the values of p(x) in the regions of interest is very difficult. This is where MCMC techniques come into play. In standard sampling techniques the samples are independent of each other, whereas MCMC generates dependent samples.

Due to the Markov property, each sample x_i depends only on the previous sample x_{i-1}. Thus, in order to sample from p(x), we sample from the conditional proposal distribution q:

x_i ~ q(x_i | x_{i-1})

The idea here is to construct a Markov chain of samples as shown above. The assumption is that after "enough" samples have been generated, the Markov chain reaches a stationary state, and all samples generated beyond this point reflect the target distribution p(x). The time taken to reach this state is called the "burn-in" period.

You can think of the Markov chain as a directed graph. In a Markov process, a random variable X can take s different states x^1, x^2, ..., x^s. For example, if X is the variable describing the financial market condition, then the possible states could be "Bull", "Bear", "Stagnant", etc. At any particular instant of time, the chain can either remain in the same state or transition to a different state. The state of the variable X at time t is denoted X_t.

The probability of transitioning to state x^j at time t depends only on the state at time (t-1), i.e.:

P(X_t = x^j | X_{t-1} = x^i, ..., X_1 = x^k) = P(X_t = x^j | X_{t-1} = x^i)

The probabilities P(X_t = x^j | X_{t-1} = x^i) are known as the transition probabilities. The chain is homogeneous if the transition probabilities are independent of time, i.e.

K_ij = P(X_t = x^j | X_{t-1} = x^i) for all t

The probabilities can then be represented as a transition matrix K, with entries satisfying

K_ij ≥ 0, and Σ_j K_ij = 1 for each state i

Let π_0 denote the starting probability distribution over the states (a row vector of probabilities). Then at time t, the probability distribution over the states is given by:

π_t = π_{t-1} K = π_0 K^t

i.e. the matrix multiplication of the vector π_{t-1} and the matrix K.

If the graph of the Markov chain is connected (irreducible) and the chain is not trapped in cycles (aperiodic), so that it can reach any state from any other state in a finite number of steps (ergodic), then the state probability distributions converge to an equilibrium, i.e. at equilibrium π_t = π_{t-1} for all subsequent t.

For example, with a 3-state Markov process, let π_0 be the starting probability distribution of being in each state and let K hold the transition probabilities. Then the successive distributions π_1 = π_0 K, π_2 = π_1 K, ... can be computed by repeated multiplication.

At equilibrium, the probabilities converge to p(x) = (0.22, 0.41, 0.37), which is the stationary distribution we are interested in. A sufficient, but not necessary, condition to ensure that a particular p(x) is the desired invariant distribution is the following reversibility (detailed balance) condition:

p(x^i) K_ij = p(x^j) K_ji for all pairs of states (i, j)

Taking the sum over all possible states x^i at time (t-1), we get:

Σ_i p(x^i) K_ij = p(x^j) Σ_i K_ji = p(x^j)

i.e. p(x) is left invariant by the transition matrix K.
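The convergence of π_t can also be checked numerically. A minimal R sketch with a hypothetical 3-state transition matrix (an arbitrary illustration, not the matrix from the example above):

```r
# Iterating pi_t = pi_{t-1} %*% K converges to the stationary distribution.
K <- matrix(c(0.5, 0.25, 0.25,
              0.2, 0.70, 0.10,
              0.3, 0.30, 0.40),
            nrow = 3, byrow = TRUE)   # each row sums to 1
pi.t <- c(1, 0, 0)                    # start entirely in state 1
for (t in 1:100) pi.t <- pi.t %*% K   # pi_t = pi_0 %*% K^t
print(pi.t)                           # invariant: pi.t %*% K equals pi.t
```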

We need to design our Markov chain such that it satisfies the above balance condition and also converges to the stationary distribution p(x) quickly. We will look into two MCMC sampling algorithms: the Metropolis-Hastings algorithm and the Gibbs sampler.

Metropolis-Hastings Algorithm :

- Initialize x_0 randomly
- For i from 1 to N, repeat steps 3 to 6
- Sample u ~ U(0, 1)
- Sample x* ~ q(x* | x_{i-1})
- Compute the acceptance probability A = min(1, [p(x*) q(x_{i-1} | x*)] / [p(x_{i-1}) q(x* | x_{i-1})])
- If u < A, then x_i = x*, else x_i = x_{i-1}

In our previous example with the target distribution p(x) = N(x; 1, 2) + N(x; 10, 3), we initialize x_0 = 0 and consider the following proposal distribution:

q(x* | x_{i-1}) = N(x*; x_{i-1}, 5)

i.e. a normal distribution with mean at the previously sampled value x_{i-1} and standard deviation 5.

Below is the R code for sampling from the above p(x) and q(x).

```r
metropolis.hastings <- function(n, p) {
  old.xi <- 0                          # initialize x_0 = 0
  out <- c()
  i <- 1
  while (i <= n) {
    u <- runif(1)
    new.xi <- rnorm(1, old.xi, 5)      # propose x* ~ q(x* | x_{i-1}) = N(x_{i-1}, 5)
    accept <- min(c(1, (p(new.xi) * dnorm(old.xi, new.xi, 5)) /
                       (p(old.xi) * dnorm(new.xi, old.xi, 5))))
    if (u < accept) {
      out <- c(out, new.xi)
      old.xi <- new.xi
    } else {
      out <- c(out, old.xi)            # on rejection, repeat the previous sample
    }
    i <- i + 1
  }
  out
}
```

On sampling using the M-H algorithm, we obtained the following histogram of the samples :

with peaks at 1 and 10 as expected.

In the above example we used a normal distribution as the proposal distribution q(x). The normal proposal is symmetric, i.e. q(x* | x_{i-1}) = q(x_{i-1} | x*), so the acceptance probability reduces to:

A = min(1, p(x*) / p(x_{i-1}))

This special case is known as the Metropolis algorithm.
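A sketch of this Metropolis special case in R, sampling from the same two-mode p(x) with the symmetric proposal N(x_{i-1}, 5), so that only the ratio p(x*)/p(x_{i-1}) is needed:

```r
# Metropolis algorithm: with a symmetric proposal the q terms cancel.
set.seed(7)
p <- function(x) dnorm(x, 1, 2) + dnorm(x, 10, 3)  # target (normalization not needed)
n <- 50000
x <- numeric(n)                                    # x[1] = 0 initializes the chain
for (i in 2:n) {
  proposal <- rnorm(1, x[i - 1], 5)                # symmetric q: N(x[i-1], 5)
  if (runif(1) < min(1, p(proposal) / p(x[i - 1]))) {
    x[i] <- proposal                               # accept
  } else {
    x[i] <- x[i - 1]                               # reject: repeat previous sample
  }
}
```

Discarding a burn-in prefix and plotting a histogram of the rest again shows the two modes near 1 and 10.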

Choosing a proper proposal distribution is critical. If the standard deviation of the normal distribution chosen for q(x) is too low, the chain might model only one of the modes of p(x) (a sum of two normal distributions) well; if the deviation is too high, there can be a high number of rejections, i.e. A will be very low for most proposals.

Above is the histogram of the sampled values but with a proposal density of much smaller standard deviation. As you can see, only one mode (N(x; 1, 2)) is modeled properly.

Gibbs Sampler :

Gibbs sampling is a special case of the Metropolis-Hastings algorithm. It is specifically designed for sampling from probability distributions in very high dimensions, e.g. image pixels, bags of words in text, etc. In our earlier post on feature reduction using Restricted Boltzmann Machines, the values at the visible and hidden nodes in the contrastive divergence step are sampled using the Gibbs sampling technique. Below is the algorithm for Gibbs sampling:

- Sample x^(0) = (x_1^(0), x_2^(0), ..., x_s^(0))
- For i from 1 to N, repeat the below updates until convergence
- Sample x_1^(i) ~ p(x_1 | x_2^(i-1), x_3^(i-1), ..., x_s^(i-1))
- Sample x_2^(i) ~ p(x_2 | x_1^(i), x_3^(i-1), ..., x_s^(i-1))
- ...
- Sample x_s^(i) ~ p(x_s | x_1^(i), x_2^(i), ..., x_{s-1}^(i))

where x = (x_1, x_2, ..., x_s) is an s-dimensional random variable. Note that inside each iteration, the update for x_2 uses the latest update of x_1 as a conditional, the update for x_3 uses the latest updates of both x_1 and x_2 as conditionals, and so on.
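A minimal Gibbs sampler sketch in R for a toy 2-dimensional case (a standard bivariate normal with correlation ρ = 0.8, chosen only for illustration), where the full conditionals are themselves normal:

```r
# Gibbs sampling from a bivariate normal with correlation rho:
# x1 | x2 ~ N(rho * x2, variance 1 - rho^2), and symmetrically for x2 | x1.
set.seed(11)
rho <- 0.8
n <- 50000
x1 <- numeric(n)
x2 <- numeric(n)
for (i in 2:n) {
  x1[i] <- rnorm(1, rho * x2[i - 1], sqrt(1 - rho^2))  # conditions on latest x2
  x2[i] <- rnorm(1, rho * x1[i],     sqrt(1 - rho^2))  # conditions on the fresh x1
}
```

After discarding a burn-in prefix, the pairs (x1, x2) have sample correlation close to ρ.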

Categories: MACHINE LEARNING, PROBLEM SOLVING

Tags: Gibbs Sampling, MCMC, Metropolis Hastings, Monte Carlo Sampling, Rejection Sampling