Stokastik

Machine Learning, AI and Programming

Category: PROBLEM SOLVING

Convex Optimization in Machine Learning

In an earlier post, we introduced one of the most widely used optimization techniques, gradient descent, and its scalable variant, Stochastic Gradient Descent (SGD). Although SGD is an efficient and scalable technique for optimizing a function, the drawback with both gradient descent and SGD is that they are susceptible to getting stuck in a local optimum. The gradient descent technique is not suited to finding a local or global optimum with […]
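As a rough illustration of the update rule the post builds on (not the post's own code), here is a minimal SGD sketch for a toy least-squares problem; the function and parameter names are made up for this example.

```python
import numpy as np

def sgd(X, y, lr=0.01, epochs=100):
    """Minimize squared error for linear regression with SGD.
    Illustrative sketch only; names and defaults are arbitrary."""
    w = np.zeros(X.shape[1])
    n = X.shape[0]
    for _ in range(epochs):
        for i in np.random.permutation(n):
            # gradient of (x_i . w - y_i)^2 w.r.t. w, using a single sample
            grad = 2.0 * (X[i].dot(w) - y[i]) * X[i]
            w -= lr * grad
    return w

# toy usage: recover a known weight vector from noisy observations
X = np.random.randn(200, 3)
true_w = np.array([1.0, -2.0, 0.5])
y = X.dot(true_w) + 0.01 * np.random.randn(200)
print(sgd(X, y))
```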

Continue Reading →

Selecting the optimum number of clusters

Clustering algorithms come with lots of challenges. For centroid-based clustering algorithms like K-Means, the primary challenges are: initialising the cluster centroids, choosing the optimum number of clusters, evaluating clustering quality in the absence of labels, and reducing the dimensionality of the data. In this post we will focus on different ways of choosing the optimum number of clusters. The basic idea is to minimize the sum of the within cluster sum […]
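A minimal sketch of the within-cluster-sum-of-squares idea the excerpt refers to, assuming scikit-learn's KMeans (whose inertia_ attribute is exactly that sum); the "elbow" in the resulting curve is a common heuristic for picking the number of clusters, not necessarily the method the full post settles on.

```python
import numpy as np
from sklearn.cluster import KMeans

def wcss_curve(X, max_k=10):
    """Within-cluster sum of squares (inertia) for k = 1..max_k.
    A sharp bend ("elbow") in this curve suggests a good cluster count."""
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, max_k + 1)]

# toy usage: three well-separated Gaussian blobs, so the elbow sits near k = 3
X = np.vstack([np.random.randn(100, 2) + c for c in ([0, 0], [5, 5], [0, 8])])
print(wcss_curve(X, max_k=6))
```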

Continue Reading →

Text Classification with Adaboost

Boosting is a general technique by which multiple "weak" classifiers are combined to produce a single strong classifier. The idea behind boosting is very simple: the final classifier is built incrementally from an ensemble of classifiers, in such a way that each new classifier is chosen to perform better on the training instances that the current ensemble misclassifies.
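To make the re-weighting idea concrete, here is a minimal AdaBoost sketch using decision stumps on numeric features (the full post applies it to text features); the helper names are made up for this example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, rounds=10):
    """Minimal AdaBoost sketch for binary labels in {-1, +1}.
    Each round fits a decision stump on re-weighted data, then increases
    the weights of the examples the stump got wrong, so the next stump
    concentrates on them."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # example weights
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this classifier
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified examples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    # weighted vote of all stumps
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)
```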

Continue Reading →