[Python] Principal Component Analysis and K-means clustering with IMDB movie datasets

Hello, today’s post is the first one where I present results in Python! Although I love R and I’m loyal to it, Python is widely loved by many data scientists. Python is quite easy to learn and it has a lot of great libraries.

In this post, I implemented two unsupervised learning methods: 1. Principal Component Analysis and 2. K-means Clustering. A reader with no background in machine learning might think, “What the hell is unsupervised learning?” I will try my best to explain the concept.

Unsupervised Learning

Ok, let’s imagine you are going backpacking in a new country. Isn’t it exciting? But you don’t know much about the country – its food, culture, language, etc. Still, from day one you start making sense of it all: learning what to eat (and what not to eat) among the new cuisines, finding your way to that beach.

In this example, you have lots of information but you don’t initially know what to do with it. There is no clear guidance and you have to find your way by yourself. Like this traveling example, unsupervised learning is the method of training a machine learning model with only a set of inputs, without labeled outputs. Principal Component Analysis and K-means clustering are the most famous examples of unsupervised learning. I will explain them a little bit later.

Data

Before I begin talking about how I analyzed the data, let’s talk about the data itself. There are 5,043 movies in total, each with 28 attributes. The attributes range from the director’s name to the number of Facebook likes.

[Figure: a preview of the first few rows of the movie dataset]
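For reference, loading the data with pandas looks roughly like this. I’m assuming the file name from the Kaggle “IMDB 5000 Movie Dataset” (movie_metadata.csv); adjust the path to your own copy:

```python
import pandas as pd

# Load the movie data (file name assumed from the Kaggle
# "IMDB 5000 Movie Dataset"; adjust the path to your copy).
movie = pd.read_csv('movie_metadata.csv')

print(movie.shape)  # (5043, 28): 5,043 movies, 28 attributes
movie.head()        # preview the first few rows (in a notebook)
```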

1. Data Cleaning

In statistics class, we often get clean data: no missing values, no NAs. But in reality, clean data is just a dream. There is always some messy part of the data, and it’s our job to make the data usable before running the analysis.

Here are some libraries you need for this post.

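Roughly, the imports used throughout this post are:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
```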

First, let’s do some filtering to extract only the numeric columns, not the ones containing words. So I created a Python list, num_list, containing the numeric column names.

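A sketch of that list, using the column names from the Kaggle version of the dataset (adjust if your copy’s column names differ):

```python
# Numeric columns in the dataset (names taken from the Kaggle
# "IMDB 5000 Movie Dataset"; adjust if your copy differs).
num_list = ['num_critic_for_reviews', 'duration', 'director_facebook_likes',
            'actor_3_facebook_likes', 'actor_1_facebook_likes', 'gross',
            'num_voted_users', 'cast_total_facebook_likes',
            'facenumber_in_poster', 'num_user_for_reviews', 'budget',
            'title_year', 'actor_2_facebook_likes', 'imdb_score',
            'aspect_ratio', 'movie_facebook_likes']
```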

By the way, when it comes to using Python, the pandas library is a must-have. Using pandas, we can create a new dataframe (movie_num) containing just the numeric columns.

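Something like this, continuing from the movie dataframe loaded earlier:

```python
# Keep only the numeric columns listed in num_list.
movie_num = movie[num_list]
```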

By using the “fillna” (fill NA) function, we can easily handle NaN values, replacing them instead of discarding whole rows.

Since the variables are on very different scales (and some distributions are skewed), we can standardize the data before the analysis.

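A sketch of both steps: filling NaNs (here with each column’s median, one common choice) and then standardizing every feature to zero mean and unit variance:

```python
# Fill NaN values with each column's median (one common choice).
movie_num = movie_num.fillna(movie_num.median())

# Standardize every feature to zero mean and unit variance.
X = movie_num.values
X_std = StandardScaler().fit_transform(X)
```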

2. Correlation Analysis

Hexbin Plot

Let’s look at some hexbin visualisations first to get a feel for how the different features relate to one another. In these hexbin plots, the lighter the hexagonal bins, the more data points fall in that region, so the plots show where pairs of features concentrate jointly.

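The two plots below can be reproduced with something like seaborn’s jointplot in hexbin mode (a sketch, assuming the movie_num dataframe from the cleaning step):

```python
# Hexbin plot of IMDB score against gross revenue.
sns.jointplot(x='imdb_score', y='gross', data=movie_num, kind='hex')
plt.show()

# Hexbin plot of IMDB score against duration.
sns.jointplot(x='imdb_score', y='duration', data=movie_num, kind='hex')
plt.show()
```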

[Figure: hexbin plot of IMDB score vs. gross revenue]

This is a hexbin plot of IMDB score against gross revenue. We can see it’s lighter around scores between 6 and 7, where most movies concentrate.

 

[Figure: hexbin plot of IMDB score vs. duration]

This is a hexbin plot of IMDB score against duration (in minutes). Again, the region around scores between 6 and 7 is lighter.

We can examine the correlations further using a Pearson correlation heatmap.

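A sketch of the heatmap, using pandas’ corr (Pearson by default) and seaborn:

```python
# Pearson correlation matrix of the numeric features.
corr = movie_num.corr()

# Draw the correlation heatmap.
f, ax = plt.subplots(figsize=(12, 10))
sns.heatmap(corr, cmap='coolwarm', square=True, ax=ax)
ax.set_title('Pearson correlation of movie features')
plt.show()
```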

[Figure: Pearson correlation heatmap of the numeric movie features]

As we can see from the heatmap, there are regions (features) with quite strong positive linear correlations with each other, given the darker shade of the colours – the top left-hand corner and the bottom right quarter. This is a good sign: it means we may be able to find linearly correlated features on which to perform PCA projections.

3. Explained Variance Measure & Principal Component Analysis

Now you know what unsupervised learning is (I hope so). Then let me explain principal component analysis. The explanation may not be as entertaining as the unsupervised learning one, but I’ll try my best!

Principal component analysis (PCA) is a technique used to emphasize variation and bring out strong patterns in a dataset. It’s often used to make data easy to explore and visualize.  Principal components are dimensions along which your data points are most spread out:

[Figure: data points and the directions along which they are most spread out]

<From: https://www.quora.com/What-is-an-intuitive-explanation-for-PCA>

Let me give you an example. Imagine that you are a nutritionist trying to explore the nutritional content of food. What is the best way to differentiate food items? By vitamin content? Protein levels? Or perhaps a combination of both?

Knowing the variables that best differentiate your items has several uses:

1. Visualization. Using the right variables to plot items will give more insights.

2. Uncovering Clusters. With good visualizations, hidden categories or clusters could be identified. Among food items for instance, we may identify broad categories like meat and vegetables, as well as sub-categories such as types of vegetables.

The question is, how do we derive the variables that best differentiate items?

So, the first step to answer this question is Principal Component Analysis.

A principal component can be expressed by one or more existing variables. For example, we may use a single variable – vitamin C – to differentiate food items. Because vitamin C is present in vegetables but absent in meat, the resulting plot will differentiate vegetables from meat, but meat items will be clumped together.

To spread the meat items out, we can use fat content in addition to vitamin C levels, since fat is present in meat but absent in vegetables. However, fat and vitamin C are measured in different units, so to combine the two variables we first have to normalize them, meaning shift them onto a uniform standard scale. This lets us calculate a new variable, vitamin C − fat. Combining the two variables helps to spread out both vegetable and meat items.

The spread can be further improved by adding fiber, of which vegetable items have varying levels. This new variable, (vitamin C + fiber) − fat, achieves the best data spread yet. A tiny numeric sketch of this idea follows below.
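Here is that sketch; the food items and nutrient numbers are entirely made up for illustration:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical nutrient data (made-up numbers, different units).
food = pd.DataFrame(
    {'vitamin_c': [60.0, 80.0, 2.0, 1.0],   # mg
     'fat':       [0.3,  0.5, 17.0, 25.0],  # g
     'fiber':     [2.4,  2.6,  0.0,  0.0]}, # g
    index=['apple', 'broccoli', 'salmon', 'beef'])

# Normalize so the variables share a common scale...
scaled = pd.DataFrame(StandardScaler().fit_transform(food),
                      columns=food.columns, index=food.index)

# ...then combine them: (vitamin C + fiber) - fat.
spread = scaled['vitamin_c'] + scaled['fiber'] - scaled['fat']
print(spread.sort_values())  # meats and vegetables land far apart
```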

 

So, that’s my intuitive explanation of principal component analysis (with a preview of clustering at the same time). Let me now apply principal component analysis to this dataset and show how it works.

Explained Variance Measure

I will be using a particular measure called explained variance, which will help us determine the number of PCA projection components we should be looking at.

Before calculating explained variance, we need to get the eigenvectors and eigenvalues. The eigenvectors and eigenvalues of a covariance (or correlation) matrix represent the “core” of a PCA: the eigenvectors (principal components) determine the directions of the new feature space, and the eigenvalues determine their magnitude. In other words, the eigenvalues explain the variance of the data along the new feature axes.

 

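A sketch of that computation on the standardized data (X_std from the cleaning step):

```python
# Covariance matrix of the standardized features.
cov_mat = np.cov(X_std.T)

# Eigendecomposition of the covariance matrix.
eig_vals, eig_vecs = np.linalg.eig(cov_mat)

# Pair each eigenvalue with its eigenvector and sort from high to low.
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:, i])
             for i in range(len(eig_vals))]
eig_pairs.sort(key=lambda pair: pair[0], reverse=True)
```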

After sorting the eigenpairs, the next question is “how many principal components are we going to choose for our new feature subspace?”. The explained variance tells us how much information (variance) can be attributed to each of the principal components.

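Explained variance is simply each eigenvalue’s share of the total; a sketch of the calculation and the plot:

```python
# Individual and cumulative explained variance (in percent).
tot = sum(eig_vals)
var_exp = [(v / tot) * 100 for v in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)

# Plot per-component explained variance and the running total.
plt.figure(figsize=(10, 5))
plt.bar(range(1, len(var_exp) + 1), var_exp, alpha=0.5,
        label='individual explained variance')
plt.step(range(1, len(cum_var_exp) + 1), cum_var_exp, where='mid',
         label='cumulative explained variance')
plt.xlabel('Principal components')
plt.ylabel('Explained variance (%)')
plt.legend(loc='best')
plt.show()
```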

[Figure: individual and cumulative explained variance per principal component]

From the plot above, it can be seen that approximately 90% of the variance can be explained with the first 9 principal components. Therefore, for the purposes of this post, let’s implement PCA with 9 components (although, to ensure that we are not excluding useful information, one should really go for a 95% or greater variance level, which corresponds to about 12 components).
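A sketch using scikit-learn’s PCA on the standardized data, followed by a scatter plot of the first two projections:

```python
# Project the standardized data onto 9 principal components.
pca = PCA(n_components=9)
X_9d = pca.fit_transform(X_std)

# Scatter plot of the first two projections.
plt.figure(figsize=(7, 7))
plt.scatter(X_9d[:, 0], X_9d[:, 1], alpha=0.3)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()
```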

[Figure: scatter plot of the first two PCA projections]

There do not seem to be any discernible clusters. However, keeping in mind that our PCA projections contain another 7 components, looking at plots of the other components may be fruitful. For now, let us try a 3-cluster KMeans (just a naive guess) to see if we are able to visualize any distinct clusters.

4. Visualization with K-means Clustering
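A sketch of fitting a 3-cluster KMeans to the 9-dimensional projections and coloring the first two projections by cluster label:

```python
# Fit KMeans with 3 clusters (the naive guess from above).
kmeans = KMeans(n_clusters=3, random_state=0)
labels = kmeans.fit_predict(X_9d)

# Color the first two PCA projections by cluster assignment.
plt.figure(figsize=(7, 7))
plt.scatter(X_9d[:, 0], X_9d[:, 1], c=labels, alpha=0.5)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()
```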

[Figure: the first two PCA projections colored by the 3 KMeans clusters]

This KMeans plot looks more promising: if our simple clustering assumption turns out to be right, we can observe 3 distinguishable clusters via this color scheme. However, I would also like to generate KMeans visualizations for the other possible combinations of the projections against one another. I will use Seaborn’s convenient pairplot function to do the job. Basically, pairplot automatically plots all the features in the dataframe (in this case our PCA-projected movie data) in a pairwise manner. I will pairplot the first 3 projections against one another; the resulting plot is given below:

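A sketch of that pairplot, building a small dataframe from the first three projections plus the cluster labels (the column names PC1–PC3 are my own labels):

```python
# Dataframe of the first three PCA projections plus cluster labels.
plot_df = pd.DataFrame(X_9d[:, :3], columns=['PC1', 'PC2', 'PC3'])
plot_df['cluster'] = labels

# Pairwise plots of the three projections, colored by cluster.
sns.pairplot(plot_df, hue='cluster', vars=['PC1', 'PC2', 'PC3'])
plt.show()
```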

[Figure: pairplot of the first three PCA projections, colored by KMeans cluster]

 
