
How does Natural Language Understanding (NLU) work?

Last updated 20 October 2023
Technology

In order to help someone, you first have to understand what they need help with. Machine learning can be useful in gaining a basic grasp of underlying customer intent, but it alone isn’t sufficient to gain a full understanding of what a user is requesting.

Machine learning is often used as an umbrella term for the ‘AI’ in conversational AI, but this is inaccurate. At boost.ai, for instance, we use a combination of machine learning and deep learning, with more than six deep learning models running in the backend.

A typical machine learning model for text classification, by contrast, uses only term frequency (i.e. the number of times a particular term appears in a data corpus) to determine the intent of a query. Often, these amount to little more than simple, ineffective keyword-based algorithms.
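To make the contrast concrete, here is a minimal sketch of a term-frequency classifier of the kind described above, using scikit-learn. The training queries and intent labels are invented for illustration; note that the vectorizer reduces each query to raw term counts and discards word order entirely.

```python
# A minimal term-frequency intent classifier (illustrative only; the
# training examples and intent labels here are invented for the sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

queries = [
    "I want my ticket refunded",
    "please refund my canceled trip",
    "how do I book a new trip",
    "I would like to book a ticket",
]
intents = ["refund", "refund", "booking", "booking"]

# CountVectorizer turns each query into raw term frequencies --
# word order and word-to-word relationships are thrown away.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(queries, intents)

print(model.predict(["can I get a refund for my ticket?"]))  # ['refund']
```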

Deep learning and automatic semantic understanding

At boost.ai, we don’t use term frequencies. Instead, we use a mixture of LSTMs (Long Short-Term Memory networks), GRUs (Gated Recurrent Units) and CNNs (Convolutional Neural Networks). The advantage of using this combination of models - instead of traditional machine learning approaches - is that we can identify how words are being used and how they are connected to each other in a given sentence. In simpler terms: a deep learning model is able to perceive and understand the nuances of human language.
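As a rough illustration of how such a combination can work, the sketch below feeds the same embedded sentence through an LSTM branch, a GRU branch and a CNN branch, then concatenates the results for intent classification. The layer sizes, vocabulary size and merge strategy are assumptions made for the example, not boost.ai’s actual architecture.

```python
# A toy sketch of combining recurrent and convolutional encoders for
# intent classification. All hyperparameters here are invented for
# illustration -- this is not boost.ai's production model.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_INTENTS = 10_000, 32, 20

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
embedded = layers.Embedding(VOCAB_SIZE, 128)(tokens)

# Each branch sees the same embedded sentence but models it differently:
lstm = layers.LSTM(64)(embedded)            # long-range word order
gru = layers.GRU(64)(embedded)              # gated recurrence, fewer params
cnn = layers.GlobalMaxPooling1D()(
    layers.Conv1D(64, kernel_size=3, activation="relu")(embedded)
)                                           # local n-gram patterns

merged = layers.Concatenate()([lstm, gru, cnn])
intent = layers.Dense(NUM_INTENTS, activation="softmax")(merged)

model = tf.keras.Model(tokens, intent)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```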

It doesn’t stop there, however. On top of these deep learning models, we have developed a proprietary algorithm called ASU (Automatic Semantic Understanding). ASU works alongside the deep learning models and tries to find even more complicated connections between the sentences in a virtual agent’s interactions with customers.

A rock-solid natural language foundation

In addition to machine learning, deep learning and ASU, we have made our NLP (Natural Language Processing) layer as robust as possible. It consists of several advanced components, such as language detection, spelling correction, entity extraction and stemming - to name a few. This foundation of rock-solid NLP ensures that our conversational AI platform is able to correctly process any question, no matter how poorly it is composed.
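To show how these stages fit together, here is a toy preprocessing pipeline with the components named above. Every stage is a deliberately simple stand-in: the function names and the tiny lookup tables are invented for the sketch, and a production system would use trained models for each step.

```python
# A toy NLP preprocessing pipeline: language detection, spelling
# correction, entity extraction and stemming. All tables and helpers
# below are invented stand-ins for real trained components.
import re

SPELLING_FIXES = {"cancelled": "canceled", "refnd": "refund"}  # toy table
KNOWN_ENTITIES = {"oslo": "CITY", "boost.ai": "COMPANY"}       # toy table

def detect_language(text: str) -> str:
    # Stand-in: a real system uses a trained language-ID model.
    return "en"

def correct_spelling(tokens: list[str]) -> list[str]:
    return [SPELLING_FIXES.get(t, t) for t in tokens]

def extract_entities(tokens: list[str]) -> dict[str, str]:
    return {t: KNOWN_ENTITIES[t] for t in tokens if t in KNOWN_ENTITIES}

def stem(token: str) -> str:
    # Crude suffix stripping; real pipelines use a proper stemmer.
    return re.sub(r"(ing|ed|s)$", "", token)

def preprocess(text: str) -> dict:
    tokens = correct_spelling(re.findall(r"[\w.]+", text.lower()))
    return {
        "language": detect_language(text),
        "entities": extract_entities(tokens),
        "stems": [stem(t) for t in tokens],
    }

print(preprocess("I cancelled my trip to Oslo, I assume I get a refnd?"))
```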

An example of why this distinction matters:

- I canceled my trip and I want the ticket refunded

- You have canceled my trip, I assume the ticket gets refunded?

In the first sentence ‘I’, ‘canceled’, ‘trip’, ‘ticket’ and ‘refunded’ will be marked as important. In the second sentence ‘you’, ‘canceled’, ‘trip’, ‘ticket’ and ‘refunded’ will be marked as important.

Both ‘you’ and ‘I’ in the above sentences are known as stopwords and will be ignored by traditional algorithms. Stripping them renders the two sentences essentially identical, and therefore unhelpful. Deep learning models (which don’t remove stopwords) understand how these words are connected to each other and can, therefore, infer that the sentences are different.
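A quick sketch makes the point. Taking the important words identified above for each sentence, removing the pronouns as stopwords collapses the two sentences into the same set of words; the stopword list below is a small illustrative subset.

```python
# Demonstrating the stopword problem: once 'I' and 'you' are stripped,
# a bag-of-words view of the two sentences becomes identical.
STOPWORDS = {"i", "you", "my", "the", "a", "and"}  # illustrative subset

sentence1 = {"i", "canceled", "trip", "ticket", "refunded"}
sentence2 = {"you", "canceled", "trip", "ticket", "refunded"}

print(sentence1 - STOPWORDS)  # {'canceled', 'trip', 'ticket', 'refunded'}
print(sentence2 - STOPWORDS)  # the exact same set
print(sentence1 - STOPWORDS == sentence2 - STOPWORDS)  # True -- indistinguishable
```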

The difference may be minimal for a machine, but the difference in outcome for a human is glaring. In the examples above, where the remaining words are the same for the two sentences, a simple machine learning model won’t be able to distinguish between them. In terms of business value, automating this process without sufficient natural language understanding (NLU) could be disastrous.

Learning to speak ‘human’

By understanding which words are important in a given context, ASU is able to spot potential mistakes made by the deep learning models and correct them (provided the training data is of sufficient quality). It’s an extra layer of understanding that reduces false positives to a minimum.

In short: our combination of machine learning, deep learning, ASU and NLP forms the most robust NLU imaginable and can easily decode and handle the complex nuances of human language, as illustrated in the examples above. That said, there are still NLU and NLP challenges that we are actively facing and solving!