Stokastik

Machine Learning, AI and Programming

Category: MACHINE LEARNING

Using KD-Tree For Nearest Neighbor Search

This post branches off from my earlier posts on designing a question-question similarity system. In the first of those posts, I discussed the importance of the speed of retrieval of the most similar questions from the training data, given a question asked by a user in an online system. We designed a few strategies, such as the HashMap-based retrieval mechanism. The HashMap-based retrieval assumes that at least one word between the most […]
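The HashMap-based retrieval the teaser refers to can be sketched as an inverted index from words to question ids; candidates are then any stored questions sharing at least one word with the query. The toy question store below is hypothetical, purely for illustration:

```python
from collections import defaultdict

# Toy question store; real training data is assumed, not taken from the post.
questions = {
    0: "how do i reset my password",
    1: "how to change my email address",
    2: "what is the refund policy",
}

# Inverted index: word -> set of question ids containing that word.
index = defaultdict(set)
for qid, text in questions.items():
    for word in text.split():
        index[word].add(qid)

def candidates(query):
    """Ids of stored questions sharing at least one word with the query."""
    ids = set()
    for word in query.split():
        ids |= index.get(word, set())
    return ids

print(candidates("reset password"))  # -> {0}
```

Only the candidate set then needs to be scored with a full similarity measure, which is what makes the lookup fast.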

Continue Reading →

Designing an Automated Question-Answering System - Part II

In this post we will look at the offline implementation architecture. Assume that there are currently about 100 manual agents, each serving somewhere around 60-80 customers (non-unique) a day, i.e. a total of about 8K customer queries each day across our agents. Each customer session has an average of 5 question-answer rounds, including statements, greetings, and contextual and personal questions. Thus on average we generate 40K client-agent response pairs […]
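The back-of-the-envelope arithmetic above can be reproduced directly; the per-agent figures are the post's own stated assumptions (taking the upper end of the 60-80 range):

```python
agents = 100
customers_per_agent = 80      # upper end of the quoted 60-80 range
rounds_per_session = 5        # average question-answer rounds per session

daily_queries = agents * customers_per_agent      # ~8K customer queries a day
daily_pairs = daily_queries * rounds_per_session  # ~40K client-agent response pairs
print(daily_queries, daily_pairs)  # 8000 40000
```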

Continue Reading →

Designing an Automated Question-Answering System - Part I

Natural Language Question Answering systems, such as chatbots and AI conversational agents, are required to answer customer queries in an intelligent fashion. Many companies employ manual resources to answer customer queries and complaints. Apart from the high cost of employing people, many of the customer queries are repetitive in nature, and much of the time the same intents are asked in different tones.

Continue Reading →

Dynamic Programming in NLP - Longest Common Subsequence

In this second part of my series of posts on Dynamic Programming in NLP, I will show how to solve the Longest Common Subsequence problem using DP and then use modified versions of the algorithm to find the similarity between two strings. LCS is a common programming question asked in many technical interviews. Given two strings (sequences of words, characters, etc.) S1 and S2, return the number […]
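Since the excerpt introduces LCS as a DP problem, here is a minimal bottom-up sketch of the standard algorithm (not necessarily the exact formulation the full post develops):

```python
def lcs_length(s1, s2):
    """Length of the longest common subsequence of two sequences."""
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of the prefixes s1[:i] and s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCBA")
```

The same table, normalized by the string lengths, is one common way to turn LCS into a string-similarity score.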

Continue Reading →

Dynamic Programming in NLP - Skip Grams

In this series of posts, I will explore and show how the dynamic programming technique is used in machine learning and natural language processing. Dynamic programming is very popular in programming interviews. The DP technique is used mainly in problems that have optimal substructure and can be defined recursively, which means that a problem of size N can be solved by solving smaller sub-problems of size m < N. Such problems […]
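As a concrete reference for the skip-grams this post covers, here is one common way to enumerate k-skip-bigrams by brute force (a simple combinatorial sketch, not the DP formulation the post develops):

```python
def skip_bigrams(words, k):
    """All ordered word pairs at most k words apart (k-skip-bigrams)."""
    pairs = []
    for i in range(len(words)):
        # j ranges over positions within a window of k skipped words.
        for j in range(i + 1, min(i + 2 + k, len(words))):
            pairs.append((words[i], words[j]))
    return pairs

print(skip_bigrams(["the", "cat", "sat"], 1))
# -> [('the', 'cat'), ('the', 'sat'), ('cat', 'sat')]
```

With k = 0 this reduces to ordinary adjacent bigrams.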

Continue Reading →

Neural Networks as a Function Approximator

For the past few days, I have been reading quite a lot of research papers, articles and blogs related to artificial neural networks and their transition towards deep learning. With so many different methods of selecting the best neural network architecture for a problem, the optimal hyper-parameters, the best optimization algorithm and so on, it becomes a little overwhelming to connect all the dots together when we ourselves start to […]

Continue Reading →

Building a Neural Network from scratch in Python

In this post I am going to build an artificial neural network from scratch. Although there exist many advanced neural network libraries written in a variety of programming languages, the idea is not to re-invent the wheel but to understand what components are required to make a workable neural network. A full-fledged industrial-scale neural network might require a lot of research and experimentation with the dataset. Building a simple […]
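As a taste of the components involved, here is a minimal two-layer network trained on XOR with plain NumPy — a sketch with arbitrary hyper-parameters (4 hidden units, learning rate 1.0), not the full implementation the post builds:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data: 4 examples, 2 inputs, 1 target.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

# One hidden layer of 4 units with small random weights.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
h, out = forward(X, W1, b1, W2, b2)
initial_loss = np.mean((out - y) ** 2)

for _ in range(5000):
    h, out = forward(X, W1, b1, W2, b2)
    d_out = (out - y) * out * (1 - out)   # MSE gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # back-propagated to the hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = np.mean((out - y) ** 2)
print(initial_loss, final_loss)  # the loss should drop substantially
```

The three recurring components — a forward pass, a backward pass of gradients, and a parameter update — are exactly what any larger implementation has to get right.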

Continue Reading →

Building an Incremental Named Entity Recognizer System

In the last post, we saw how to train a system to identify Part-Of-Speech tags for words in sentences. In essence, we found that discriminative models such as Neural Networks and Conditional Random Fields outperform other methods by 5-6% in prediction accuracy. In this post, we will look at another common problem in Natural Language Processing, known as Named Entity Recognition (NER for short). The problem […]

Continue Reading →

Building a POS Tagger with Python NLTK and Scikit-Learn

In this post we are going to learn about Part-Of-Speech taggers for the English language and look at multiple methods of building a POS tagger with the help of the Python NLTK and scikit-learn libraries. The available methods range from simple regular-expression-based taggers to classifier-based ones (Naive Bayes, Neural Networks and Decision Trees) and then sequence-model-based ones (Hidden Markov Model, Maximum Entropy Markov Model and Conditional Random […]
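The simplest family mentioned above, a regular-expression-based tagger, can be sketched with just the standard library; the patterns here are illustrative suffix rules, not NLTK's defaults:

```python
import re

# Ordered (pattern, tag) rules; the first matching pattern wins.
RULES = [
    (r".*ing$", "VBG"),  # gerunds: running, tagging
    (r".*ed$", "VBD"),   # simple past: walked
    (r".*ly$", "RB"),    # adverbs: quickly
    (r"^\d+$", "CD"),    # cardinal numbers
    (r".*s$", "NNS"),    # plural nouns (very rough)
    (r".*", "NN"),       # default: singular noun
]

def regex_tag(words):
    """Tag each word with the first rule whose pattern matches it."""
    tagged = []
    for word in words:
        for pattern, tag in RULES:
            if re.match(pattern, word):
                tagged.append((word, tag))
                break
    return tagged

print(regex_tag(["running", "quickly", "dogs", "3"]))
# -> [('running', 'VBG'), ('quickly', 'RB'), ('dogs', 'NNS'), ('3', 'CD')]
```

Classifier- and sequence-model-based taggers improve on this by using context, not just the word's own shape.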

Continue Reading →

Understanding Conditional Random Fields

Given a sequence of observations, many machine learning tasks require us to label each observation in the sequence with a corresponding class (or named entity) such that the overall likelihood of the labelling is maximized. For example, given an English sentence, i.e. a sequence of words, label each word with a Part-Of-Speech tag such that the combined POS tagging of the sentence is optimal. "Machine Learning is a field of […]
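The "maximize the overall likelihood of the labelling" objective is typically solved with Viterbi decoding. Below is a minimal sketch over a hypothetical toy model with HMM-style start/transition/emission probabilities (the numbers are invented for illustration; the full post derives the CRF formulation instead):

```python
def viterbi(obs, states, start, trans, emit):
    """Most likely state sequence for obs under HMM-style probabilities."""
    # V[t][s] = best probability of any path ending in state s at time t.
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] * trans[p][s])
            V[t][s] = V[t - 1][prev] * trans[prev][s] * emit[s][obs[t]]
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Hypothetical toy model: tag 'D' (determiner) vs 'N' (noun).
states = ["D", "N"]
start = {"D": 0.6, "N": 0.4}
trans = {"D": {"D": 0.1, "N": 0.9}, "N": {"D": 0.4, "N": 0.6}}
emit = {"D": {"the": 0.8, "dog": 0.0}, "N": {"the": 0.1, "dog": 0.9}}

print(viterbi(["the", "dog"], states, start, trans, emit))  # -> ['D', 'N']
```

Dynamic programming makes this search linear in sentence length instead of exponential in the number of possible taggings.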

Continue Reading →