Stokastik

Machine Learning, AI and Programming

Category: MACHINE LEARNING

Designing an Automated Question-Answering System - Part IV

In the second post of this series we listed the different vectorization algorithms used in our experiments for representing questions. Representations form the core of our intent clusters, because the assumption is that if a representation algorithm captures the syntactic as well as the semantic meaning of questions well, then two questions that actually express the same intent will have representations that are very close to each […]

Continue Reading →

Designing a Social Network Site like Twitter

In this post we will look at designing a social networking site similar to Twitter. Quite obviously we will not design every feature on the site, only the important ones. The most important feature on Twitter is the Feed (home timeline and profile timeline). The feed on Twitter drives user engagement, and thus it needs to be designed in a scalable way such that it can […]

Continue Reading →

Understanding Variational AutoEncoders

This post is motivated by trying to find better unsupervised vector representations for questions arising from customer queries to our agents. Earlier, in a series of posts, we saw how to design and implement a clustering framework for customer questions, so that we can efficiently find the most appropriate answer and at the same time find the most similar questions to recommend to the customer.

Continue Reading →

Designing an Automated Question-Answering System - Part III

In continuation of my earlier posts on designing an automated question-answering system, in part three of the series we look into how to incorporate feedback into our system. Note that since getting labelled data is an expensive operation from the perspective of our company's resources, the amount of feedback from human agents is very low (~2-3% of the total number of questions). So obviously, with so little labelled data, […]

Continue Reading →

Using KD-Tree For Nearest Neighbor Search

This post branches off from my earlier posts on designing a question-question similarity system. In the first of those posts, I discussed the importance of the speed of retrieval of the most similar questions from the training data, given a question asked by a user in an online system. We designed a few strategies, such as the HashMap based retrieval mechanism. The HashMap based retrieval assumes that at least one word between the most […]
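To make the retrieval structure in the title concrete, here is a minimal textbook KD-tree nearest-neighbour sketch in Python. This is a toy illustration under my own assumptions, not the implementation from the post:

```python
import math

def dist(a, b):
    # Euclidean distance between two points.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_kdtree(points, depth=0):
    # Recursively split on alternating axes, storing the median point at each node.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    # Depth-first descent, pruning subtrees that cannot beat the current best.
    if node is None:
        return best
    if best is None or dist(node["point"], target) < dist(best, target):
        best = node["point"]
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if abs(diff) < dist(best, target):  # the far side may still hide a closer point
        best = nearest(far, target, best)
    return best

points = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(points)
print(nearest(tree, (9, 2)))  # -> (8, 1)
```

The pruning test (comparing the distance to the splitting plane against the current best) is what gives KD-trees their speed advantage over a linear scan in low dimensions.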

Continue Reading →

Designing an Automated Question-Answering System - Part II

In this post we will look at the offline implementation architecture. Assume that there are currently about 100 manual agents, each serving somewhere around 60-80 (non-unique) customers a day, i.e. a total of about 8K customer queries each day for our agents. Each customer session has an average of 5 question-answer rounds, including statements, greetings, and contextual and personal questions. Thus on average we generate 40K client-agent response pairs […]
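As a back-of-envelope check of the figures quoted above (agents, customers per day, rounds per session are the post's numbers, not measurements of my own):

```python
agents = 100
customers_per_agent = (60, 80)   # non-unique customers served per agent per day
rounds_per_session = 5           # avg question-answer rounds per customer session

daily_queries = tuple(agents * c for c in customers_per_agent)
daily_pairs = tuple(q * rounds_per_session for q in daily_queries)

print(daily_queries)  # -> (6000, 8000): roughly the ~8K queries/day quoted
print(daily_pairs)    # -> (30000, 40000): roughly the 40K response pairs/day quoted
```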

Continue Reading →

Designing an Automated Question-Answering System - Part I

Natural language question-answering systems, such as chatbots and AI conversational agents, are required to answer customer queries in an intelligent fashion. Many companies employ manual resources to answer customer queries and complaints. Apart from the high cost of employing people, many customer queries are repetitive in nature, and most of the time the same intents are asked in different tones.

Continue Reading →

Dynamic Programming in NLP - Longest Common Subsequence

In this second part of my series of posts on Dynamic Programming in NLP, I will show how to solve the Longest Common Subsequence problem using DP, and then use modified versions of the algorithm to find the similarity between two strings. LCS is a common programming question asked in many technical interviews. Given two strings (sequences of words, characters etc.) S1 and S2, return the number […]
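The standard bottom-up DP solution to LCS can be sketched in a few lines of Python (a generic textbook version, not necessarily the code from the post; it works equally on character strings or word lists, which is what makes it useful for question similarity):

```python
def lcs_length(s1, s2):
    # dp[i][j] = length of the longest common subsequence of s1[:i] and s2[:j]
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1              # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])   # drop one element
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4
# On word sequences, for question similarity:
print(lcs_length("the cat sat here".split(), "the cat ran here".split()))  # -> 3
```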

Continue Reading →

Dynamic Programming in NLP - Skip Grams

In this series of posts, I will explore and show how the dynamic programming technique is used in machine learning and natural language processing. Dynamic programming is very popular in programming interviews. The DP technique is used mainly in problems which have optimal substructure and can be defined recursively, meaning that a problem of size N can be solved by solving smaller sub-problems of size m < N. Such problems […]
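To make "optimal substructure plus recursion" concrete, here is the classic memoized-recursion illustration (a generic example, not taken from the post): a problem of size n is answered from the sub-problems of sizes n-1 and n-2, and memoization ensures each sub-problem is solved only once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # fib(n) depends only on the smaller sub-problems fib(n-1) and fib(n-2);
    # lru_cache turns the exponential recursion into a linear-time DP.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # -> 832040
```

The skip-gram problems in the post follow the same pattern: define the answer for a sequence recursively in terms of shorter sequences, then cache the sub-results.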

Continue Reading →