This notebook goes through a necessary step of any data science project - data cleaning. Data cleaning is a time-consuming and unenjoyable task, yet it's a very important one. Keep in mind, "garbage in, garbage out": feeding dirty data into a model will give us meaningless results.
Specifically, we'll be walking through:
1. **Getting the data** - in this case, we'll be scraping data from a website
2. **Cleaning the data** - we will walk through popular text pre-processing techniques
3. **Organizing the data** - we will organize the cleaned data in a way that is easy to input into other algorithms
The output of this notebook will be clean, organized data in two standard text formats:
1. **Corpus** - a collection of text
2. **Document-Term Matrix** - word counts in matrix format
%% Cell type:markdown id: tags:
## Problem Statement
%% Cell type:markdown id: tags:
As a reminder, our goal is to look at transcripts of various comedians and note their similarities and differences. Specifically, I'd like to know if Ali Wong's comedy style is different from that of other comedians, since she's the comedian who got me interested in stand-up comedy.
%% Cell type:markdown id: tags:
## Getting The Data
%% Cell type:markdown id: tags:
Luckily, there are wonderful people online that keep track of stand up routine transcripts. [Scraps From The Loft](http://scrapsfromtheloft.com) makes them available for non-profit and educational purposes.
To decide which comedians to look into, I went on IMDB and looked specifically at comedy specials that were released in the past 5 years. To narrow it down further, I looked only at those with greater than a 7.5/10 rating and more than 2000 votes. If a comedian had multiple specials that fit those requirements, I would pick the most highly rated one. I ended up with a dozen comedy specials.
%% Cell type:code id: tags:
``` python
# Web scraping, pickle imports
import requests
from bs4 import BeautifulSoup
import pickle

# Scrapes transcript data from scrapsfromtheloft.com
def url_to_transcript(url):
    '''Returns transcript data specifically from scrapsfromtheloft.com.'''
    page = requests.get(url).text
    soup = BeautifulSoup(page, "lxml")
    # Assumption: each transcript's text sits in <p> tags inside the page's "post-content" div
    text = [p.text for p in soup.find(class_="post-content").find_all('p')]
    return text

# # Actually request transcripts (takes a few minutes to run)
# # (urls is the list of transcript links from Scraps From The Loft)
# transcripts = [url_to_transcript(u) for u in urls]
```
%% Cell type:code id: tags:
``` python
# # Pickle files for later use
# # Make a new directory to hold the text files
# !mkdir transcripts
# for i, c in enumerate(comedians):
#     with open("transcripts/" + c + ".txt", "wb") as file:
#         pickle.dump(transcripts[i], file)
```
%% Cell type:code id: tags:
``` python
# Load pickled files
# comedians is the list of short comedian names used to name the pickled files
data = {}
for i, c in enumerate(comedians):
    with open("transcripts/" + c + ".txt", "rb") as file:
        data[c] = pickle.load(file)
```
%% Cell type:code id: tags:
``` python
# Double check to make sure data has been loaded properly
data.keys()
```
%% Cell type:code id: tags:
``` python
# More checks
data['louis'][:2]
```
%% Cell type:markdown id: tags:
## Cleaning The Data
%% Cell type:markdown id: tags:
When dealing with numerical data, data cleaning often involves removing null values and duplicate data, dealing with outliers, etc. With text data, there are some common data cleaning techniques, which are also known as text pre-processing techniques.
With text data, this cleaning process can go on forever; there's always an exception to every cleaning step. So, we're going to follow the MVP (minimum viable product) approach - start simple and iterate. Here are a bunch of things you can do to clean your data. We're going to execute just the common cleaning steps here, and the rest can be done at a later point to improve our results. (A rough code sketch of the common steps follows the lists below.)
**Common data cleaning steps on all text:**
* Make text all lower case
* Remove punctuation
* Remove numerical values
* Remove common non-sensical text (\n)
* Tokenize text
* Remove stop words
**More data cleaning steps after tokenization:**
* Stemming / lemmatization
* Parts of speech tagging
* Create bi-grams or tri-grams
* Deal with typos
* And more...
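%% Cell type:markdown id: tags:
As a concrete reference, below is a minimal sketch of the common cleaning steps using Python's `re` and `string` modules. The function name and the exact regex patterns are illustrative choices, not necessarily the ones used later in this notebook; tokenization and stop word removal are handled separately by CountVectorizer in the organizing step.
%% Cell type:code id: tags:
``` python
import re
import string

def clean_text_round1(text):
    '''Illustrative first-pass cleaner: lower case, strip punctuation, numbers and newlines.'''
    text = text.lower()                                               # make text all lower case
    text = re.sub('[%s]' % re.escape(string.punctuation), '', text)   # remove punctuation
    text = re.sub(r'\w*\d\w*', '', text)                              # remove words containing numbers
    text = re.sub(r'\n', ' ', text)                                   # replace newlines with spaces
    return text

clean_text_round1("It's 2 a.m.\nWHY am I still up?")   # -> 'its  am why am i still up'
```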
%% Cell type:code id: tags:
``` python
# Let's take a look at our data again
next(iter(data.keys()))
```
%% Cell type:code id: tags:
``` python
# Notice that our dictionary is currently in key: comedian, value: list of text format
next(iter(data.values()))
```
%% Cell type:code id: tags:
``` python
# We are going to change this to key: comedian, value: string format
def combine_text(list_of_text):
    '''Takes a list of text and combines them into one large chunk of text.'''
    # Join the pieces of each transcript with spaces
    return ' '.join(list_of_text)
```
%% Cell type:markdown id: tags:
**NOTE:** This data cleaning (a.k.a. text pre-processing) step could go on for a while, but we are going to stop for now. After going through some analysis techniques, if you see that the results don't make sense or could be improved, you can come back and make more edits, such as:
* Mark 'cheering' and 'cheer' as the same word (stemming / lemmatization)
* Combine 'thank you' into one term (bi-grams)
* And a lot more...
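%% Cell type:markdown id: tags:
To make those future edits more concrete, here is a hedged sketch of what they could look like, using NLTK for lemmatization and scikit-learn's `ngram_range` option for bi-grams. This is not part of this notebook's pipeline, and it assumes `nltk` is installed with its WordNet data downloaded.
%% Cell type:code id: tags:
``` python
# Sketch only - assumes nltk and scikit-learn are installed,
# and that nltk.download('wordnet') has been run once.
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer

# Stemming / lemmatization: map 'cheering' and 'cheer' to the same base form
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('cheering', pos='v'))   # 'cheer'

# Bi-grams: keep two-word phrases such as 'thank you' as their own terms
cv_bigrams = CountVectorizer(ngram_range=(1, 2))
cv_bigrams.fit(['thank you so much everybody'])
print('thank you' in list(cv_bigrams.get_feature_names_out()))   # True (use get_feature_names() on older scikit-learn)
```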
%% Cell type:markdown id: tags:
## Organizing The Data
%% Cell type:markdown id: tags:
I mentioned earlier that the output of this notebook will be clean, organized data in two standard text formats:
1. **Corpus** - a collection of text
2. **Document-Term Matrix** - word counts in matrix format
%% Cell type:markdown id: tags:
### Corpus
%% Cell type:markdown id: tags:
We already created a corpus in an earlier step. The definition of a corpus is a collection of texts, and they are all put together neatly in a pandas dataframe here.
For many of the techniques we'll be using in future notebooks, the text must be tokenized, meaning broken down into smaller pieces. The most common tokenization technique is to break down text into words. We can do this using scikit-learn's CountVectorizer, where every row will represent a different document and every column will represent a different word.
In addition, with CountVectorizer, we can remove stop words. Stop words are common words that add no additional meaning to the text, such as 'a', 'the', etc.
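%% Cell type:markdown id: tags:
Before the notebook's own CountVectorizer cell below, here is a small self-contained sketch of both formats using two made-up one-line "transcripts" (not the real data), just to show what the corpus and the document-term matrix each look like.
%% Cell type:code id: tags:
``` python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus: a pandas dataframe with one row per "comedian" (made-up text)
toy_corpus = pd.DataFrame({'transcript': ['thank you thank you everybody',
                                          'so i was at the airport the other day']},
                          index=['comedian_a', 'comedian_b'])

# Document-term matrix: one row per document, one column per word,
# with common English stop words ('a', 'the', 'so', ...) removed
cv = CountVectorizer(stop_words='english')
counts = cv.fit_transform(toy_corpus.transcript)
toy_dtm = pd.DataFrame(counts.toarray(),
                       index=toy_corpus.index,
                       columns=cv.get_feature_names_out())   # get_feature_names() on older scikit-learn
print(toy_dtm)
```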
%% Cell type:code id: tags:
``` python
# We are going to create a document-term matrix using CountVectorizer, and exclude common English stop words
```
%% Cell type:markdown id: tags:
After the data cleaning step, where we put our data into a few standard formats, the next step is to take a look at the data and see whether what we're looking at makes sense. Before applying any fancy algorithms, it's always important to explore the data first.
When working with numerical data, some of the exploratory data analysis (EDA) techniques we can use include finding the average of the data set, the distribution of the data, the most common values, etc. The idea is the same when working with text data. We are going to find some of the more obvious patterns with EDA before identifying the hidden patterns with machine learning (ML) techniques. We are going to look at the following for each comedian:
1. **Most common words** - find these and create word clouds
2. **Size of vocabulary** - look at the number of unique words and also how quickly someone speaks
**NOTE:** At this point, we could go on and create word clouds. However, by looking at these top words, you can see that some of them have very little meaning and could be added to a stop words list, so let's do just that.
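%% Cell type:markdown id: tags:
Here is a hedged sketch of how those top words (and vocabulary sizes) could be pulled out of a document-term matrix. It assumes a pandas dataframe shaped like the toy_dtm example above, with one row per comedian and one column per word; the real matrix and variable names may differ.
%% Cell type:code id: tags:
``` python
def top_words(dtm, n=15):
    '''Return the n most frequent words for each row of a document-term matrix.'''
    return {comedian: row.sort_values(ascending=False).head(n)
            for comedian, row in dtm.iterrows()}

def vocab_size(dtm):
    '''Number of distinct words each comedian uses (non-zero columns per row).'''
    return (dtm > 0).sum(axis=1)
```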
%% Cell type:code id: tags:
``` python
# Look at the most common top words --> add them to the stop word list
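# A hedged sketch of this step (the variable names are assumptions, not the
# notebook's own): count how often each word shows up across the per-comedian
# top-word lists, treat words shared by more than half of the comedians as
# adding little meaning, and fold them into the stop word list.
#
# from collections import Counter
# from sklearn.feature_extraction import text
#
# top_dict = top_words(dtm)                                   # per-comedian top words, as sketched above
# words = [w for comedian in top_dict for w in top_dict[comedian].index]
# add_stop_words = [w for w, count in Counter(words).most_common() if count > 6]   # > 6 of the ~12 comedians
# stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)  # extended stop word list
# cv = CountVectorizer(stop_words=list(stop_words))           # rebuild the document-term matrix with it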