Stokastik

Machine Learning, AI and Programming

Category: AI

Understanding Variational AutoEncoders

This post is motivated by the search for better unsupervised vector representations for the questions that customers ask our agents. Earlier, in a series of posts, we saw how to design and implement a clustering framework for customer questions, so that we can efficiently find the most appropriate answer and, at the same time, surface the most similar questions to recommend to the customer.

Continue Reading →

Designing a Question-Question Similarity Framework - Part III

In continuation of my earlier posts on designing a question-question similarity framework, in part three of the series we look into how to incorporate a limited amount of supervised feedback into our system. Note that since labelling data is an expensive operation in terms of our company's resources, the amount of supervised feedback from human agents is very low (~2-3% of the total number of questions). So obviously […]

Continue Reading →

Designing a Question-Question Similarity Framework - Part II

In this post we will look at the offline implementation. In our company, there are currently about 100 human agents, each serving somewhere around 60-80 customers (non-unique) a day, i.e. a total of about 8K customer queries each day for our agents. And each customer session has an average of 3 question-answer rounds, including statements, greetings, and contextual and personal questions. Thus on average we generate 24K QA pairs each […]
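As a quick sanity check on those figures, here is the back-of-the-envelope arithmetic (the numbers are the post's rough approximations, not measurements):

```python
# Back-of-the-envelope estimate of daily QA-pair volume,
# using the approximate figures quoted in the excerpt above.
agents = 100
customers_per_agent = (60, 80)    # non-unique customers served per agent per day
qa_rounds_per_session = 3         # avg question-answer rounds per customer session

low, high = (agents * c for c in customers_per_agent)
print(f"customer sessions/day: {low}-{high}")            # 6000-8000, i.e. ~8K
print(f"QA pairs/day: ~{high * qa_rounds_per_session}")  # ~24000, i.e. ~24K
```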

Continue Reading →

Designing a Question-Question Similarity Framework - Part I

Natural language question answering systems, such as chatbots and AI conversational agents, are required to answer customer queries in an intelligent fashion. Many companies employ manual resources to answer customer queries and complaints. Apart from the high cost of employing people, many customer queries are repetitive in nature, and much of the time the same intents are expressed in different ways.

Continue Reading →

Dynamic Programming in NLP - Longest Common Subsequence

In this second part of my series of posts on Dynamic Programming in NLP, I will show how to solve the Longest Common Subsequence problem using DP, and then use modified versions of the algorithm to measure the similarity between two strings. LCS is a common programming question asked in many technical interviews. Given two strings (sequences of words, characters, etc.) S1 and S2, return the number […]
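For readers unfamiliar with the problem, here is a minimal sketch of the standard LCS dynamic program (the modified, similarity-oriented versions discussed in the post are not reproduced here):

```python
def lcs_length(s1, s2):
    # dp[i][j] = length of the LCS of s1[:i] and s2[:j]
    n, m = len(s1), len(s2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

# Works on strings or on lists of words alike:
print(lcs_length("abcde", "ace"))                                # 3
print(lcs_length("the cat sat".split(), "the dog sat".split()))  # 2 ("the", "sat")
```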

Continue Reading →

Dynamic Programming in NLP - Skip Grams

In this series of posts, I will explore and show how the dynamic programming technique is used in machine learning and natural language processing. Dynamic programming is very popular in programming interviews. The DP technique is used mainly in problems that have optimal substructure and can be defined recursively, which means that a problem of size N can be solved by solving smaller sub-problems of size m < N. Such problems […]
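As a concrete illustration of the skip-gram idea itself, here is a plain enumeration of k-skip-n-grams (a simple sketch, not the DP formulation developed in the post):

```python
from itertools import combinations

def skip_grams(tokens, n=2, k=2):
    # k-skip-n-grams: n tokens taken in order, skipping at most k tokens in total
    grams = []
    for idx in combinations(range(len(tokens)), n):
        if idx[-1] - idx[0] - (n - 1) <= k:   # total number of skipped tokens
            grams.append(tuple(tokens[i] for i in idx))
    return grams

print(skip_grams("the quick brown fox".split(), n=2, k=1))
# [('the', 'quick'), ('the', 'brown'), ('quick', 'brown'),
#  ('quick', 'fox'), ('brown', 'fox')]
```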

Continue Reading →

Neural Networks as a Function Approximator

For the past few days, I have been reading quite a lot of research papers, articles and blogs related to artificial neural networks and their transition towards deep learning. With so many different methods for selecting the best neural network architecture for a problem, the optimal hyper-parameters, the best optimization algorithm and so on, it becomes a little overwhelming to connect all the dots when we ourselves start to […]

Continue Reading →

Building a Neural Network from scratch in Python

In this post I am going to build an artificial neural network from scratch. Although there exist many advanced neural network libraries written in a variety of programming languages, the idea is not to re-invent the wheel but to understand what components are required to make a workable neural network. A full-fledged, industrial-scale neural network might require a lot of research and experimentation with the dataset. Building a simple […]
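To give a taste of those components, here is a minimal one-hidden-layer network in numpy (the architecture, toy data and learning rate are illustrative assumptions, not the post's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))            # 4 toy examples, 3 features
y = np.array([[0.], [1.], [1.], [0.]])     # toy targets

W1, b1 = 0.1 * rng.standard_normal((3, 5)), np.zeros(5)
W2, b2 = 0.1 * rng.standard_normal((5, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(1000):
    h = sigmoid(X @ W1 + b1)               # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)             # forward pass: output layer
    d_out = (out - y) * out * (1 - out)    # backprop of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)     # chain rule through the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
```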

Continue Reading →

Using Word Vectors in Multi-Class Text Classification

Earlier we saw how, instead of representing the words in a text document as isolated features (or as N-grams), we can encode them as multidimensional vectors where each dimension represents some kind of semantic or relational similarity with other words in the corpus. Machine learning problems such as classification or clustering require documents to be represented as a document-feature matrix (with TF or TF-IDF weighting), thus we need some […]
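One common way to bridge that gap, sketched below, is a TF-IDF weighted average of the word vectors in each document (a standard approach assumed here, since the excerpt is cut off, and not necessarily the exact scheme the post develops):

```python
import numpy as np

def doc_vector(tokens, word_vecs, idf, dim=100):
    # word_vecs: dict mapping token -> np.ndarray of length dim
    # idf: dict mapping token -> inverse document frequency weight
    vec, total = np.zeros(dim), 0.0
    for tok in tokens:
        if tok in word_vecs:          # out-of-vocabulary tokens are skipped
            w = idf.get(tok, 1.0)     # TF is implicit: repeated tokens add again
            vec += w * word_vecs[tok]
            total += w
    return vec / total if total > 0 else vec
```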

Continue Reading →

Understanding Word Vectors and Word2Vec

Quite recently I have been exploring the Word2Vec tool for representing words in text documents as vectors. I got my initial ideas about the word2vec utility from Google's code archive webpage. The idea behind this kind of utility caught my interest, and I later went on to read the following papers by Mikolov et al. to better understand the algorithm and its implementation. Efficient Estimation of Word Representations […]

Continue Reading →