The vast number of reviews that customers write every day is mostly ignored by brands. This is unfortunate, since brands invest in hearing from their customers and want to get closer to them.
In the absence of proper tools, however, this wealth of valuable information remains unprocessed.
For example, given 100,000 reviews of product X: what are the most common things customers are saying? Which features are most popular? Which are hated the most? How does the product compare with the competition?
A range of AI and NLP algorithms that go well beyond traditional text analysis can be adapted to read customer reviews. Here’s a short overview.
1. Topic Modeling
Topic Modeling extracts the key topics expressed by a user in a review. Algorithms like LDA (Latent Dirichlet Allocation), which help with topic extraction, have been around for a while.
Newer advances like MG-LDA (Multi-grain LDA) go deeper and also extract the relationship hierarchy among the topics. For instance, you can learn that ‘megapixels’ is a sub-topic of ‘camera’.
Beyond LDA, NTM (Neural Topic Modeling) uses neural networks to discover topics. Unsupervised aspect extraction using a neural attention model helps distinguish aspects that are expressed close together in a sentence but need to be treated separately. For example, “Battery and camera are both good” yields the distinct topics “battery” and “camera”.
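To make the mechanics concrete, here is a toy collapsed Gibbs sampler for LDA in plain Python. The corpus, hyperparameters, and number of sweeps are illustrative assumptions, not a production setup; real pipelines would use a library implementation.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy "review" corpus with two obvious themes (illustrative only).
docs = [
    "battery battery life charge battery".split(),
    "camera megapixels photo camera lens".split(),
    "battery charge life battery power".split(),
    "camera photo megapixels lens camera".split(),
]
K, alpha, beta = 2, 0.1, 0.01          # topics, Dirichlet priors
V = len({w for d in docs for w in d})  # vocabulary size

z = []                                  # topic assignment per token
ndk = [[0] * K for _ in docs]           # doc-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                            # tokens per topic

# Random initialization of topic assignments.
for d, doc in enumerate(docs):
    zs = []
    for w in doc:
        t = random.randrange(K)
        zs.append(t)
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    z.append(zs)

# Collapsed Gibbs sweeps: resample each token's topic given all others.
for _ in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                       for k in range(K)]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Top words per discovered topic.
for k in range(K):
    top = sorted(nkw[k], key=nkw[k].get, reverse=True)[:3]
    print(f"topic {k}: {top}")
```

On this tiny corpus the sampler typically separates the battery-related and camera-related vocabulary into the two topics.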
2. Topic Clustering
Once the topics from a set of reviews have been extracted, the next step is to cluster them together, so that repetition is avoided and aggregations make sense.
For instance, “money” and “price” are topics that generally mean the same thing, so they need to be clustered together. To do this, the topics can first be converted to word vectors using an algorithm like word2vec, and the vector representations can then be grouped using various clustering algorithms.
Note that clustering is an NP-hard problem, so we can only get a reasonable approximation in finite time. k-means is one such algorithm: it iteratively refines centroid positions in the vector space using Euclidean distance.
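A minimal sketch of this step, with hand-made 2-D “word vectors” standing in for real word2vec embeddings (the vectors and vocabulary are illustrative assumptions):

```python
import math
import random

random.seed(42)

# Toy 2-D embeddings; a real pipeline would use word2vec vectors
# with hundreds of dimensions.
vectors = {
    "money":  [0.9, 0.1],
    "price":  [0.85, 0.15],
    "cost":   [0.8, 0.2],
    "camera": [0.1, 0.9],
    "photo":  [0.15, 0.85],
}

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        # Update step: move each centroid to its cluster's mean.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = [sum(dim) / len(c) for dim in zip(*c)]
    return centroids, clusters

centroids, clusters = kmeans(list(vectors.values()), k=2)
```

On this data, “money”, “price” and “cost” end up in one cluster and “camera” and “photo” in the other, which is exactly the aggregation we want.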
Newer approaches like neural clustering use neural methods to accomplish this. A challenge here is ascertaining which features can be used to augment the vectors that get clustered, and this opens up many exciting possibilities: lda2vec, for example, derives vectors not just from words, but from sentences, paragraphs and, indeed, whole documents.
PoS (part-of-speech) data is another key augmentation for the vectors, and a more recent advancement. Since clustering deals primarily with vectors, GPUs and libraries like TensorFlow help tremendously in accelerating these algorithms.
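One simple way to picture the PoS augmentation: append a one-hot part-of-speech feature to each embedding before clustering. The tag set, embedding, and tagger are illustrative assumptions; a real pipeline would get both from trained models.

```python
# Illustrative tag set; real taggers use larger inventories.
POS_TAGS = ["NOUN", "VERB", "ADJ"]

def pos_one_hot(tag):
    """One-hot encode a part-of-speech tag."""
    return [1.0 if tag == t else 0.0 for t in POS_TAGS]

def augment(embedding, tag):
    """Concatenate the word embedding with its PoS feature."""
    return embedding + pos_one_hot(tag)

vec = augment([0.9, 0.1], "NOUN")
print(vec)  # [0.9, 0.1, 1.0, 0.0, 0.0]
```

The clustering algorithm then operates on these longer vectors, so words with the same spelling but different grammatical roles can land in different clusters.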
3. Sentiment Scoring
The third step is to score these aggregated topics using sentiment analysis, adding parameters like temporal decay and signals like verified purchases, expert opinions, helpfulness votes and so forth.
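A sketch of how such a score might combine per-review sentiment with temporal decay and reviewer signals. The half-life, signal weights, and scoring formula are assumptions for illustration, not an actual production configuration.

```python
import math

HALF_LIFE_DAYS = 180.0  # assumed decay half-life for review relevance

def review_weight(age_days, verified, helpful_votes):
    """Weight a review by recency and trust signals (illustrative weights)."""
    decay = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    signal = 1.0 + (0.5 if verified else 0.0) + 0.1 * helpful_votes
    return decay * signal

def topic_score(reviews):
    """reviews: list of (sentiment in [-1, 1], age_days, verified, helpful_votes).

    Returns a weighted-average sentiment for the topic."""
    total = sum(review_weight(a, v, h) for _, a, v, h in reviews)
    return sum(s * review_weight(a, v, h) for s, a, v, h in reviews) / total

score = topic_score([
    (0.8, 10, True, 3),     # recent, verified, helpful: counts a lot
    (-0.6, 400, False, 0),  # old, unverified: heavily discounted
])
print(round(score, 2))
```

Because the recent verified review dominates, the topic scores positive despite the old negative review.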
While sentiment analysis has been around for a long time, topic-based sentiment analysis remains a challenge, and that is where the major focus lies today. ABSA (Aspect-Based Sentiment Analysis) is an area of active research, and here, too, neural methods aim to change the game.
For sentiment analysis in general, newer algorithms like ULMFiT (Universal Language Model Fine-tuning) help achieve higher accuracy with smaller data sets and faster training times. This opens up other exciting classification possibilities as well, allowing in-depth analysis, since models can be built with significantly reduced effort (both human and machine).
4. Out-of-vocabulary Language Learning
Another upshot of these approaches is that it is now possible to build a dictionary specific to how people write reviews.
This gives our machines new ‘knowledge’, for example that ‘paisa’ (Hinglish) and ‘money’ mean the same thing, or that 💰 and ‘price’ are actually the same thing. This is incredibly valuable, as it allows us to understand reviews in depth.
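In its simplest form, such a learned dictionary is a mapping from out-of-vocabulary tokens onto canonical topic terms. The mappings below are the examples from the text; a real system would learn them automatically rather than hard-code them.

```python
# Review-specific normalization dictionary (illustrative entries).
CANONICAL = {
    "paisa": "money",   # Hinglish
    "💰": "price",       # emoji
    "money": "money",
    "price": "price",
}

def normalize(tokens):
    """Map each token to its canonical form; unknown tokens pass through."""
    return [CANONICAL.get(t, t) for t in tokens]

print(normalize(["💰", "paisa", "camera"]))
# ['price', 'money', 'camera']
```

After normalization, downstream topic clustering and sentiment scoring see one consistent vocabulary.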
Note that this is not the same as Machine Translation. However, even Neural Machine Translation (NMT) approaches are being tested to understand complex out-of-vocabulary signals like emojis.
As the number of people writing reviews grows, the vocabulary expands in unpredictable ways, and it is crucial for machines to ‘keep up’ with changing writing styles.
5. Continuous Learning
The holy grail of all these techniques is to continuously learn from all user input and automatically evaluate and improve the algorithms, so that ever more analysis can be done with fewer customizations.
This implies a bias towards unsupervised methods and the use of vertical-agnostic data, allowing approaches to scale. Projects like NELL (Never-Ending Language Learning) keep expanding their knowledge automatically, based on what the machines ‘read’, with minimal or no human input.
6. Beyond the 5-star Review Box
Traditionally, customers are shown a 5-star rating scale and a text box to collect reviews. However, having learnt what other customers are saying about a product, it is now possible to ask subsequent reviewers better questions.
This increases engagement and response rates, and ensures that the knowledge content of the review data grows over time instead of stagnating.
Companies like Uber, Grubhub, Foursquare and many others, have started using this approach to collect significantly better data from their users.
Reading 100,000 reviews manually would take nearly two months, and even after all that effort the inferences can be questionable. Bewgle applies all of the above approaches to give brands a markedly better experience for consuming, analyzing and making sense of reviews at scale.
Structuring the unstructured voices of consumers is a hard problem, but one that excites and inspires us. Our team comprises ex-Googlers, Cornell graduates and IISc machine-learning experts, and is funded by the Techstars SAP.io program.