Using NLP and BERT to analyze FOMC transcripts

Photo by Alex Knight on Unsplash

Natural language processing (NLP) has made great strides in recent years. A notable advance in this area is BERT, a pre-trained language model released by Google in 2018. With BERT, text data can be analyzed to state-of-the-art standards with relative ease. But how well does BERT understand fedspeak, the notoriously vague language used by central bankers? In this article, we explore BERT and fedspeak.


In this article, we apply BERT to fedspeak by analyzing a Federal…


A practical introduction to analyzing unstructured text data using Latent Dirichlet Allocation

Photo by Delaney Van on Unsplash

We are surrounded by large volumes of text — emails, messages, documents, reports — and it’s a challenge for individuals and businesses alike to monitor, collate, interpret and otherwise make sense of it all. Over recent years, an area of natural language processing called topic modeling has made great strides in meeting this challenge. This article introduces topic modeling — how it works and what it’s used for — through an intuitive explanation of a popular topic modeling approach called Latent Dirichlet Allocation.

The volume of text that surrounds us is vast. And it’s growing.

Emails, web pages, tweets, books…


Improve your personal and professional potential with soft skills—for free

Photo by zhang kaiyv on Unsplash

Soft skills are in demand, and improving your soft skills can boost your career prospects and personal wellbeing — best of all, you can learn soft skills online for free through flexible, engaging and self-paced courses.

Note that this article contains affiliate links. If you buy through these links, I may receive a small commission at no extra cost to you. I only suggest products or services that I have personally used or have otherwise vetted.

Soft skills are becoming increasingly important in today’s workplace as employers recognize the value that they provide.

They’re at the heart of what…


Exploring causal relationships by asking ‘what if?’

Image by Author.

Counterfactual analysis (or counterfactual thinking) explores outcomes that did not actually occur, but which could have occurred under different conditions. It’s a kind of what if? analysis and a useful way to test cause-and-effect relationships.

Consider deciding which road to take driving home. You take Right Ave and encounter lots of traffic. But you could have taken Left Ave and had less traffic.

The outcome (less traffic) did not actually occur, but could have occurred if you had taken a different road.

This is an example of a counterfactual, and in this case helps to test the causal relationship between the…


A clear explanation of whether topic modeling is a form of supervised or unsupervised learning

Image by Author

Topic modeling is a form of unsupervised learning.

It’s a branch of natural language processing that’s used for exploring unstructured data, typically text.

Topic modeling can be applied directly to the data being analyzed. It does not require labeled data or pre-training for its learning algorithm. This is why it is a form of unsupervised learning.

Being unsupervised, topic modeling is useful when annotated (labeled) data isn’t available. This is a major advantage of topic modeling, as most of the data that we encounter isn’t labeled, and labeling is time-consuming and expensive to do.

What is unsupervised learning?

Unsupervised learning refers to learning directly…


A concise explanation of what LDA topic modeling is and how it works

LDA topic modeling explained step by step

LDA topic modeling is topic modeling that uses a Latent Dirichlet Allocation (LDA) approach.

Topic modeling is a form of unsupervised learning. It can be used for exploring unstructured text data by inferring the relationships that exist between the words in a set of documents.

LDA is a popular topic modeling algorithm. It was developed in 2003 by researchers David Blei, Andrew Ng and Michael Jordan. It has grown in popularity due to its effectiveness and ease of use, and can easily be deployed in programming languages such as Java and Python.

How LDA topic modeling works

LDA discovers hidden, or latent, topics in a set of…

Model Interpretability, Data Science Perspective

For high-stakes decisions, it’s imperative

Photo by Beth Macdonald on Unsplash

A parole incorrectly denied. A weather report omitting harmful wildfire smoke. A saliency map confusing huskies with flutes. What’s going on? Black-box AI models are changing our world. But they’re also testing our tolerance for errors when outcomes matter. For high-stakes decisions — in legal, financial or health contexts, for instance — when things go wrong, we need to understand why. Explainable AI is leading the charge in explaining the way AI systems work, and making them more transparent, trustworthy and reliable as a result. …


Here’s what you need to know about evaluating topic models

A set of old text books on a table, tattered and worn
Photo by Clarissa Watson on Unsplash

Topic models are widely used for analyzing unstructured text data, but they provide no guidance on the quality of topics produced. Evaluation is the key to understanding topic models. In this article, we’ll look at what topic model evaluation is, why it’s important and how to do it.


Topic modeling is a branch of natural language processing that’s used for exploring…


A simple example to help explain Bayes’ theorem

Bayes’ theorem is as easy as checking the weather
Photo by NOAA on Unsplash

Bayes’ theorem is a widely used statistical technique. Its applications range from clinical trials to email classification and its concepts underpin a range of broader use-cases. But is it possible to understand Bayes’ theorem without getting bogged down in detailed math or probability theory? Yes, it is — this article shows you how.
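To make the weather analogy concrete, here is a short worked example in Python. All the probabilities are made-up numbers chosen for illustration, not figures from the article:

```python
# Bayes' theorem with hypothetical weather numbers:
# P(rain | forecast says rain)
#   = P(forecast | rain) * P(rain) / P(forecast says rain)
p_rain = 0.10                 # prior: it rains on 10% of days (assumed)
p_forecast_given_rain = 0.80  # forecast says rain on 80% of rainy days
p_forecast_given_dry = 0.15   # false alarms on 15% of dry days

# Law of total probability: how often the forecast says rain overall.
p_forecast = (p_forecast_given_rain * p_rain
              + p_forecast_given_dry * (1 - p_rain))

# Posterior via Bayes' theorem.
p_rain_given_forecast = p_forecast_given_rain * p_rain / p_forecast
print(round(p_rain_given_forecast, 3))  # prints 0.372
```

Even with a reliable forecast, the low prior keeps the posterior well under 50% — the kind of intuition Bayes’ theorem makes precise.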

Bayes’ theorem is named after the English statistician and Presbyterian minister, Thomas Bayes, who formulated the theorem in the mid-1700s. Unfortunately, Bayes never lived to see his theorem gain prominence, as it was published after his death.

Bayes’ theorem has since grown to become a widely…


A straightforward introduction

A straightforward introduction to what AI is and how it’s used
Photo by Jelleke Vanooteghem on Unsplash

Artificial Intelligence (AI) is a term you’ve probably heard before — it’s having a huge impact on society and is widely used across a range of industries and applications. So, what exactly is AI and what can it do? Here’s a straightforward introduction.

Artificial Intelligence (AI) is a part of our daily lives — from language translation and medical diagnostics to driverless cars and facial recognition — and it’s making more of an impact on industry and society every day.

But what exactly is AI?

Simply put, AI is a technology that replicates human intelligence through computers, systems or machines.


Giri Rabindranath

Analyst | Machine Learning Enthusiast | Blogger
