Natural language processing (NLP) has made great strides in recent years. A notable advance is BERT, a pre-trained language model released by Google in 2018. With BERT, text data can be analyzed to state-of-the-art standards with relative ease. But how well does BERT understand fedspeak, the notoriously vague language used by central bankers?
In this article, we apply BERT to fedspeak by analyzing a Federal…
We are surrounded by large volumes of text — emails, messages, documents, reports — and it’s a challenge for individuals and businesses alike to monitor, collate, interpret and otherwise make sense of it all. Over recent years, an area of natural language processing called topic modeling has made great strides in meeting this challenge. This article introduces topic modeling — how it works and what it’s used for — through an intuitive explanation of a popular topic modeling approach called Latent Dirichlet Allocation.
The volume of text that surrounds us is vast. And it’s growing.
Emails, web pages, tweets, books…
Soft skills are in demand, and improving your soft skills can boost your career prospects and personal wellbeing — best of all, you can learn soft skills online for free through flexible, engaging and self-paced courses.
Note that this article contains affiliate links. If you buy through these links, I may receive a small commission at no extra cost to you. I only suggest products or services that I have personally used or have otherwise vetted.
Soft skills are becoming increasingly important in today’s workplace as employers recognize the value that they provide.
They’re at the heart of what…
Counterfactual analysis (or counterfactual thinking) explores outcomes that did not actually occur, but which could have occurred under different conditions. It’s a kind of what if? analysis and a useful way to test cause-and-effect relationships.
Consider deciding which road to take on the drive home. You take Right Ave and encounter heavy traffic. But you could have taken Left Ave and had less traffic.
The outcome (less traffic) did not actually occur, but could have occurred if you had taken a different road.
This is an example of a counterfactual, and in this case helps to test the causal relationship between the…
Topic modeling is a form of unsupervised learning.
It’s a branch of natural language processing that’s used for exploring unstructured data, typically text.
Topic modeling can be applied directly to the data being analyzed. It does not require labeled data or pre-training for its learning algorithm. This is why it is a form of unsupervised learning.
Being unsupervised, topic modeling is useful when annotated (labeled) data isn’t available. This is a major advantage of topic modeling, as most of the data that we encounter isn’t labeled, and labeling is time-consuming and expensive to do.
Unsupervised learning refers to learning directly…
LDA topic modeling is topic modeling that uses a Latent Dirichlet Allocation (LDA) approach.
Topic modeling is a form of unsupervised learning. It can be used for exploring unstructured text data by inferring the relationships that exist between the words in a set of documents.
LDA is a popular topic modeling algorithm, developed in 2003 by researchers David Blei, Andrew Ng and Michael Jordan. It has grown in popularity thanks to its effectiveness and ease of use, and it can readily be deployed in languages such as Java and Python.
LDA discovers hidden, or latent, topics in a set of…
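To give a rough sense of how LDA uncovers latent topics from word co-occurrence alone, here is a bare-bones collapsed Gibbs sampler in pure Python. The toy corpus, function name and hyperparameter values are all invented for this sketch; in practice you would reach for an established library such as gensim or scikit-learn.

```python
import random
from collections import defaultdict

# Toy corpus with two rough themes (pets vs. finance); each document is a word list.
docs = [
    ["cat", "dog", "pet", "cat"],
    ["dog", "pet", "cat", "dog"],
    ["bank", "loan", "money", "bank"],
    ["money", "loan", "bank", "money"],
]

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Bare-bones collapsed Gibbs sampling for LDA (illustrative only)."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    W = len(vocab)
    n_tw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    n_dt = [[0] * n_topics for _ in docs]               # document-topic counts
    n_t = [0] * n_topics                                # topic totals
    # Assign every word occurrence a random initial topic.
    z = []
    for d, doc in enumerate(docs):
        z_d = []
        for w in doc:
            t = rng.randrange(n_topics)
            z_d.append(t)
            n_tw[t][w] += 1
            n_dt[d][t] += 1
            n_t[t] += 1
        z.append(z_d)
    # Gibbs sweeps: resample each word's topic given all the other assignments.
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                n_tw[t][w] -= 1; n_dt[d][t] -= 1; n_t[t] -= 1
                weights = [
                    (n_dt[d][k] + alpha) * (n_tw[k][w] + beta) / (n_t[k] + W * beta)
                    for k in range(n_topics)
                ]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                n_tw[t][w] += 1; n_dt[d][t] += 1; n_t[t] += 1
    # Return each topic's words ranked by count.
    return [sorted(n_tw[k], key=n_tw[k].get, reverse=True) for k in range(n_topics)]

topics = lda_gibbs(docs)
for k, words in enumerate(topics):
    print(f"topic {k}: {words[:3]}")
```

The sampler never sees labels; it infers topics purely from which words tend to appear together in the same documents, which is exactly what makes LDA a form of unsupervised learning.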
A parole incorrectly denied. A weather report omitting harmful wildfire smoke. A saliency map confusing huskies with wolves. What’s going on? Black-box AI models are changing our world. But they’re also testing our tolerance for errors when outcomes matter. For high-stakes decisions — in legal, financial or health contexts, for instance — when things go wrong, we need to understand why. Explainable AI is leading the charge in revealing how AI systems work, making them more transparent, trustworthy and reliable as a result. …
Topic models are widely used for analyzing unstructured text data, but they provide no guidance on the quality of topics produced. Evaluation is the key to understanding topic models. In this article, we’ll look at what topic model evaluation is, why it’s important and how to do it.
Topic modeling is a branch of natural language processing that’s used for exploring…
Bayes’ theorem is a widely used statistical technique. Its applications range from clinical trials to email classification, and its concepts underpin a range of broader use cases. But is it possible to understand Bayes’ theorem without getting bogged down in detailed math or probability theory? Yes, it is — this article shows you how.
Bayes’ theorem is named after the English statistician and Presbyterian minister Thomas Bayes, who formulated the theorem in the mid-1700s. Unfortunately, Bayes never lived to see his theorem gain prominence, as it was published after his death.
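To see Bayes’ theorem in action without the heavy math, here is a minimal worked example in Python. The numbers (a condition affecting 1% of people, a test with 99% sensitivity and a 5% false positive rate) are assumed for illustration and do not come from the article.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of a positive result, across both hypotheses.
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Assumed illustrative numbers: 1% prevalence, 99% sensitivity, 5% false positives.
p = posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(f"P(condition | positive test) = {p:.1%}")  # about 16.7%
```

Despite the test being 99% accurate on true cases, a positive result only implies roughly a one-in-six chance of actually having the condition, because false positives from the much larger healthy population dominate. This counterintuitive result is the kind of insight Bayes’ theorem makes precise.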
Artificial Intelligence (AI) is a term you’ve probably heard before — it’s having a huge impact on society and is widely used across a range of industries and applications. So, what exactly is AI and what can it do? Here’s a straightforward introduction.
Artificial Intelligence (AI) is a part of our daily lives — from language translation to medical diagnostics and driverless cars to facial recognition — it’s making more of an impact on industry and society every day.
But what exactly is AI?
Simply put, AI is a technology that replicates human intelligence through computers, systems or machines.
Analyst | Machine Learning Enthusiast | Blogger