[R] Google Map Visualization

Hello! In this post, I will show how to visualize spatial data on Google Maps using R. It is simpler than you might think.

What is Spatial Data?

Spatial data is data that identifies the geographic location of features and boundaries. The dataset I am using today has a longitude variable and a latitude variable, so we can locate each data point accurately on the map.

Now that you have a rough idea of what spatial data is, let's jump into the map visualization.

First, install and load the libraries

Screen Shot 2017-09-25 at 10.12.15 AM.png

ggmap builds on the ggplot2 package, so you need ggplot2 as well.

The ggmap library can fetch map imagery from Google Maps, so we can pull up a map of any city we want.
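
A minimal sketch of the setup, assuming both packages are already installed (note that newer versions of ggmap also require registering a Google Maps API key with register_google()):

library(ggplot2)
library(ggmap)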

Second, call up the Google map image

For example, suppose I want to see a Google map of London. In this case, I can simply use the qmap command from ggmap and set the location argument to London.
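
Something along these lines should reproduce it (the zoom level is my guess):

qmap(location = "London", zoom = 12)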

Screen Shot 2017-09-25 at 10.13.55 AM.png

Screen Shot 2017-09-22 at 12.10.52 PM

Then you get a nice image of the London Google map.

 

But the data I am using is about crimes in Houston, so let's switch the map to Houston instead.

Screen Shot 2017-09-25 at 10.17.38 AM.png

Using the 'names' command, we can get an overview of the variables in the data.

Screen Shot 2017-09-25 at 10.17.59 AM.png

For spatial data, as I mentioned in the first paragraph, “lon” and “lat” variables are necessary.

Using the 'dim' command, we can get the number of rows and columns, which together are the dimensions of the data. From January 2010 to August 2010 in Houston, there were 86,314 recorded crimes. Quite extraordinary!
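
A sketch of these steps, assuming the data is the crime dataset bundled with ggmap (which covers Houston from January to August 2010):

houston_map <- qmap(location = "Houston", zoom = 11)   # zoom level is a guess
data(crime)     # Houston crime data shipped with ggmap
names(crime)    # includes lon and lat
dim(crime)      # 86,314 rows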

Screen Shot 2017-09-25 at 10.18.04 AM

Point Data Visualization

Screen Shot 2017-09-25 at 10.20.59 AM.png

We can simply use geom_point from the ggplot2 package to create a point map. In this case, I wanted to see the frequencies of the different types of crimes.
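
A sketch of the point layer (the alpha and size values are my guesses):

houston_map +
  geom_point(data = crime, aes(x = lon, y = lat, colour = offense), alpha = 0.5, size = 1)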

Screen Shot 2017-09-22 at 12.04.06 PM

Pink is clearly dominant, which indicates that theft was the most common crime in Houston from January 2010 to August 2010. The second most frequent crime is burglary (the colors are a bit confusing; I just hope it's not murder). Auto theft occurred occasionally.

 

Heat Map

If you want to see the density and frequency of the crimes, a heat map is effective.

Screen Shot 2017-09-25 at 10.26.16 AM.png

In this case, we can use stat_density2d for this kind of visualization.
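
A sketch of the density layer (the gradient colors are assumptions):

houston_map +
  stat_density2d(data = crime, aes(x = lon, y = lat, fill = ..level.., alpha = ..level..),
                 geom = "polygon") +
  scale_fill_gradient(low = "yellow", high = "red")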

Screen Shot 2017-09-22 at 12.07.14 PM

From this heat map, we can observe which areas are the most crime-ridden. Luckily, the campus areas are relatively safe. The heat map also matches the point map: the red region here is where the first map has lots of points, and that red area is the heart of downtown Houston. I hope things have improved since then, but it looks like we had better be careful around downtown Houston.

 

 

 

 


[R] Harry Potter Sentiment Analysis

Last time, I created word clouds based on Harry Potter. In this post, I will discuss how emotions change throughout each chapter for each book.

  1. Download these libraries

Screen Shot 2017-09-16 at 3.11.45 PM

This time, you also need the sentimentr package. Lots of useful work can be done by tokenizing at the word level, but sometimes it is useful or necessary to look at different units of text. For example, some sentiment analysis algorithms look beyond unigrams (i.e. single words) and try to understand the sentiment of a sentence as a whole. These algorithms try to understand that

I’m not having a good day.

is a sad sentence, not a happy one, because of the negation. The sentimentr R package is an example of such a sentiment analysis algorithm. For these, we may want to tokenize text into sentences.
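
The library setup probably looked roughly like this (the exact set of packages is my assumption; the harrypotter package, which provides the book texts, is installed from GitHub):

# devtools::install_github("bradleyboehmke/harrypotter")
library(harrypotter)
library(tidyverse)
library(tidytext)
library(sentimentr)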

2. Tokenize text into sentences.

Screen Shot 2017-09-16 at 3.11.50 PM

The argument token = "sentences" attempts to break the text up at sentence-ending punctuation.
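
A sketch for Philosopher's Stone alone, using the packages loaded in step 1 (the harrypotter package stores each book as a character vector with one element per chapter, so the tibble construction is an assumption about how the text was arranged):

ps_sentences <- tibble(chapter = seq_along(philosophers_stone),
                       text = philosophers_stone) %>%
  unnest_tokens(sentence, text, token = "sentences")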

3. Break up the text by chapter and sentence.

Screen Shot 2017-09-16 at 3.11.56 PM

This will allow us to assess the net sentiment by chapter and by sentence. First, I track the sentence numbers, and then I create an index that tracks the progress through each chapter. I then unnest the sentences into words. This gives us a tibble that has individual words by sentence within each chapter.
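
Continuing the sketch from above, the chapter and sentence bookkeeping could look like this (rounding the index to two decimal places is an assumption):

book_words <- ps_sentences %>%
  group_by(chapter) %>%
  mutate(sentence_num = 1:n(),
         index = round(sentence_num / n(), 2)) %>%   # progress through the chapter, 0 to 1
  ungroup() %>%
  unnest_tokens(word, sentence)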

4. Join “afinn” lexicon and compute the net sentiment score

Screen Shot 2017-09-16 at 3.12.02 PM

Now, as before, I join the AFINN lexicon and compute the net sentiment score for each chapter. The AFINN lexicon assigns each word a score between -5 and 5, with negative scores indicating negative sentiment and positive scores indicating positive sentiment.
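
A sketch of the join and aggregation (note that current versions of tidytext call the AFINN score column value; older versions called it score):

book_sentiment <- book_words %>%
  inner_join(get_sentiments("afinn"), by = "word") %>%
  group_by(chapter, index) %>%
  summarise(sentiment = sum(value)) %>%
  ungroup()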

5. Visualize using ggplot

Screen Shot 2017-09-16 at 3.12.18 PM
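
Judging from the plots below, the visualization was a tile-style heatmap of chapter against chapter progression; here is a sketch of one way to draw it (the color scale and theme are my assumptions):

ggplot(book_sentiment, aes(x = index, y = factor(chapter), fill = sentiment)) +
  geom_tile(color = "white") +
  scale_fill_gradient2(low = "red", mid = "white", high = "blue", midpoint = 0) +
  labs(x = "Chapter progression", y = "Chapter", title = "Philosopher's Stone") +
  theme_minimal()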

Result

 

philosopher

<Philosopher’s Stone>

This book has the fewest chapters of all seven books. The sentiment ranges from about -20 to 15, which is also the narrowest range. We can observe that the first chapter is emotionally neutral, while chapter 17 contains both the most negative and the most positive moments at the same time. We can also see that this book has a relatively happy ending.

 

 

chamber of secrets

<Chamber of Secrets>

It has a similarly narrow range of emotions, spread over more chapters. About 25% of the way through chapter 1 there is a quite conspicuous negative part, and I wonder what it was about.

 

prisoner of azkaban

<Prisoner of Azkaban>

It looks like Prisoner of Azkaban does not have many emotionally positive parts. We can see that the highest score is lower than in the two previous books, while the minimum value gets lower still, meaning the net sentiment score drops further. In particular, around the 50% mark of chapter 16 we can see a dark red patch. It indicates that Prisoner of Azkaban got darker than the previous books. But it still has a happy ending.

goblet of fire

<Goblet of Fire>

From this book on, J.K. Rowling started to include more chapters, and Goblet of Fire has 37. The emotional range is similar to the previous three books. Compared to Prisoner of Azkaban, there are some noticeable blue parts, perhaps because Harry scoring well in the Triwizard Tournament was quite exciting. But there are also some red parts, which may include Harry being scorned by his friends and the death of Cedric. That's why the ending is relatively neutral.

 

orderof Phoenix

<Order of the Phoenix>

I feel this one is slightly more colorful than the previous ones. There is a lot of blue around the middle of the story, but as it goes on, red becomes dominant. Considering that Sirius Black is killed at the end, it explains why the ending is not a happy one.

 

Half blood Prince

<Half Blood Prince>

It is somehow less colorful than Order of the Phoenix. We should also notice that the highest value is the largest in this book: for example, just past 75% of chapter 4, the net score is around 30 (I forget why). There is also the darkest red patch around the 50% mark of chapter 28, which may be the moment when Dumbledore is killed.

Deathly Hallows

<Deathly Hallows>

Interestingly, this book has the lowest net score, -40, which appears just after the 50% mark of chapter 17. According to the story, this is the part where Harry confronts Bathilda turning into the snake. Besides that, we can see that negative and neutral sentiment dominates this book. But we know that it ends well!

 

[Python] Principal Component Analysis and K-means Clustering with the IMDB Movie Dataset

Hello, today's post is the first in which I present results in Python! Although I love R and I'm loyal to it, Python is widely loved by many data scientists. It is quite easy to learn and it has a lot of great functionality.

In this post, I implemented two unsupervised learning methods: 1. Principal Component Analysis and 2. K-means Clustering. A reader with no background in machine learning might think, "what the hell is unsupervised learning?" I will try my best to explain the concept.

Unsupervised Learning

OK, let's imagine you are going backpacking in a new country. Isn't that exciting? But you don't know much about the country: its food, culture, language and so on. From day one, though, you start making sense of it, learning which new dishes to eat (and which not to eat), and finding your own way to that beach.

In this example, you have lots of information but you do not initially know what to do with it. There is no clear guidance and you have to find the way by yourself. Like this traveling example, unsupervised learning means training a machine learning model using only a set of inputs, without labeled outputs. Principal Component Analysis and K-means clustering are the most famous examples of unsupervised learning. I will explain them a little bit later.

Data

Before I begin talking about how I analyzed the data, let's talk about the data itself. There are 5,043 movies in total, with 28 attributes. The attributes range from the director's name to the number of Facebook likes.

Screen Shot 2017-09-07 at 9.41.16 PM

1. Data Cleaning

In statistics class, we often get clean data: no missing values, no NAs. In reality, clean data is just a dream. There is always some messy part of the data, and it's our job to make the data usable before running the analysis.

Here are some libraries you need for this post.

Screen Shot 2017-09-07 at 9.46.26 PM
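
The imports were probably roughly the following (the exact list is my assumption):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans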

First, let's do some filtering to extract only the numeric columns, not the ones containing text. So I created a Python list, num_list, containing the numeric column names.

Screen Shot 2017-09-07 at 9.45.39 PM

By the way, when it comes to using Python, the pandas library is a must-have. Using pandas, we can create a new dataframe (movie_num) containing just the numeric columns.

Screen Shot 2017-09-07 at 9.48.19 PM

Using the fillna function, we can easily fill in the NaN values (dropna would discard those rows instead).

If the distributions of certain variables are skewed, we can also standardize them.

Screen Shot 2017-09-07 at 9.50.24 PM
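
A hypothetical reconstruction of these cleaning steps; the file name and column names follow the Kaggle IMDB 5000 dataset and should be treated as assumptions:

movie = pd.read_csv('movie_metadata.csv')
num_list = ['num_critic_for_reviews', 'duration', 'director_facebook_likes',
            'actor_3_facebook_likes', 'actor_1_facebook_likes', 'gross',
            'num_voted_users', 'cast_total_facebook_likes', 'facenumber_in_poster',
            'num_user_for_reviews', 'budget', 'title_year',
            'actor_2_facebook_likes', 'imdb_score', 'aspect_ratio', 'movie_facebook_likes']
movie_num = movie[num_list].fillna(0)                  # fill NaN values with 0
X = StandardScaler().fit_transform(movie_num.values)   # standardize each column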

2. Correlation Analysis

Hexbin Plot

Let's look at some hexbin visualizations first to get a feel for how the different features relate to one another. In these hexbin plots, the lighter a hexagonal bin, the more movies fall into that region of the two features.
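
A sketch of one such plot with seaborn (the column names come from the IMDB 5000 dataset; the plotting choices are assumptions):

pair = movie[['imdb_score', 'gross']].dropna()
sns.jointplot(x='imdb_score', y='gross', data=pair, kind='hex')
plt.show()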

Screen Shot 2017-09-07 at 9.52.24 PM

Screen Shot 2017-09-07 at 9.16.31 PM

This is a hexbin plot of IMDB score against gross revenue. We can see it is lighter for scores between 6 and 7.

 

Screen Shot 2017-09-07 at 9.16.22 PM

This is a hexbin plot of IMDB score against duration. Again, the region with scores between 6 and 7 is lighter.

We can examine the correlations further using a Pearson correlation heatmap.
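
A minimal sketch of the correlation heatmap (the figure size and color map are assumptions):

corr = movie_num.corr(method='pearson')
plt.figure(figsize=(10, 8))
sns.heatmap(corr, cmap='coolwarm', annot=False)
plt.show()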

Screen Shot 2017-09-07 at 9.58.22 PM.png

Screen Shot 2017-09-07 at 9.17.40 PM.png

As we can see from the heatmap, there are groups of features with quite strong positive linear correlations among each other, given the darker shades in the top left-hand corner and the bottom right quarter. This is a good sign, as it means we may be able to find linearly correlated features on which PCA projections will work well.

3. Explained Variance Measure & Principal Component Analysis

Now you know what unsupervised learning is (I hope so). Next, let me explain principal component analysis. The explanation will not be as entertaining as the one for unsupervised learning, but I'll try my best!

Principal component analysis (PCA) is a technique used to emphasize variation and bring out strong patterns in a dataset. It’s often used to make data easy to explore and visualize.  Principal components are dimensions along which your data points are most spread out:

Screen Shot 2017-09-07 at 10.18.24 PM.png

<From: https://www.quora.com/What-is-an-intuitive-explanation-for-PCA>

Let me give you an example. Imagine that you are a nutritionist trying to explore the nutritional content of food. What is the best way to differentiate food items? By vitamin content? Protein levels? Or perhaps a combination of both?

Knowing the variables that best differentiate your items has several uses:

1. Visualization. Using the right variables to plot items will give more insights.

2. Uncovering Clusters. With good visualizations, hidden categories or clusters could be identified. Among food items for instance, we may identify broad categories like meat and vegetables, as well as sub-categories such as types of vegetables.

The question is, how do we derive the variables that best differentiate items?

So, the first step to answer this question is Principal Component Analysis.

A principal component can be expressed in terms of one or more existing variables. For example, we may use a single variable, vitamin C, to differentiate food items. Because vitamin C is present in vegetables but absent in meat, the resulting plot (below, left) will differentiate vegetables from meat, but the meat items will be clumped together.

To spread the meat items out, we can use fat content in addition to vitamin C levels, since fat is present in meat but absent in vegetables. However, fat and vitamin C levels are measured in different units. So to combine the two variables, we first have to normalize them, meaning to shift them onto a uniform standard scale, which would allow us to calculate a new variable – vitamin C – fat. Combining the two variables helps to spread out both vegetable and meat items.

The spread can be further improved by adding fiber, of which vegetable items have varying levels. This new variable – (vitamin C + fiber) – fat – achieves the best data spread yet.

 

So, that's my intuitive explanation of Principal Component Analysis. Let me now apply it to this dataset and show how it works.

Explained Variance Measure

I will be using a particular measure called Explained Variance which will be useful in this context to help us determine the number of PCA projection components we should be looking at.

Before calculating explained variance, we need the eigenvectors and eigenvalues. The eigenvectors and eigenvalues of a covariance (or correlation) matrix represent the "core" of a PCA: the eigenvectors (principal components) determine the directions of the new feature space, and the eigenvalues determine their magnitude. In other words, the eigenvalues explain the variance of the data along the new feature axes.

 

Screen Shot 2017-09-07 at 10.04.38 PM.png

After sorting the eigenpairs, the next question is: how many principal components are we going to choose for our new feature subspace? The explained variance tells us how much information (variance) can be attributed to each of the principal components.
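
A sketch of the eigendecomposition and the explained variance plot, continuing from the standardized matrix X in the cleaning step (details such as sorting by absolute eigenvalue are assumptions):

cov_mat = np.cov(X.T)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
eig_pairs = sorted([(np.abs(eig_vals[i]), eig_vecs[:, i]) for i in range(len(eig_vals))],
                   key=lambda p: p[0], reverse=True)

tot = np.abs(eig_vals).sum()
var_exp = [(val / tot) * 100 for val, _ in eig_pairs]   # individual explained variance (%)
cum_var_exp = np.cumsum(var_exp)

plt.bar(range(1, len(var_exp) + 1), var_exp, alpha=0.5, label='individual explained variance')
plt.step(range(1, len(var_exp) + 1), cum_var_exp, where='mid', label='cumulative explained variance')
plt.xlabel('Principal components')
plt.ylabel('Explained variance (%)')
plt.legend(loc='best')
plt.show()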

Screen Shot 2017-09-07 at 10.05.06 PM.png

Screen Shot 2017-09-07 at 10.06.02 PM

Screen Shot 2017-09-07 at 9.19.19 PM

From the plot above, it can be seen that approximately 90% of the variance can be explained with 9 principal components. Therefore, for the purposes of this notebook, let's implement PCA with 9 components (although to ensure we are not excluding useful information, one should really aim for 95% or greater explained variance, which corresponds to about 12 components).
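
A sketch of the 9-component projection plus a first scatter plot of the projected data (plotting the first two components is my choice):

pca = PCA(n_components=9)
X_9d = pca.fit_transform(X)

plt.scatter(X_9d[:, 0], X_9d[:, 1], s=5)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()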

Screen Shot 2017-09-07 at 10.07.13 PM

Screen Shot 2017-09-07 at 9.21.01 PM

There do not seem to be any discernible clusters. However, keeping in mind that our PCA projection contains another 7 components, looking at plots of the other components may be fruitful. For now, let us try a 3-cluster KMeans (just as a naive guess) to see whether we can visualize any distinct clusters.

4. Visualization with K-means Clustering

Screen Shot 2017-09-07 at 10.09.22 PM

Screen Shot 2017-09-07 at 9.21.52 PM

This KMeans plot looks more promising: if our simple clustering assumption turns out to be right, we can observe 3 distinguishable clusters with this color scheme. However, I would also like to generate KMeans visualizations for other combinations of the projections against one another. I will use Seaborn's convenient pairplot function to do the job. Basically, pairplot automatically plots all the features in a dataframe (in this case our PCA-projected movie data) in a pairwise manner. I pairplot the first 3 projections against one another, and the resulting plot is given below:
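
The clustering and the pairplot might look roughly like this (the random_state and the choice of which components to plot are assumptions):

kmeans = KMeans(n_clusters=3, random_state=0)
labels = kmeans.fit_predict(X_9d)

# scatter of two of the projections, colored by cluster
plt.scatter(X_9d[:, 0], X_9d[:, 2], c=labels, s=5, cmap='viridis')
plt.show()

# pairwise plot of the first three projections, colored by cluster
df_pca = pd.DataFrame(X_9d[:, :3], columns=['PC1', 'PC2', 'PC3'])
df_pca['cluster'] = labels
sns.pairplot(df_pca, hue='cluster')
plt.show()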

Screen Shot 2017-09-07 at 10.10.36 PM.png

Screen Shot 2017-09-07 at 9.23.08 PM

 

Game of Thrones Battle Analysis

Today is a big day for GOT fans: it's the day of the last episode of season 7. Before you watch it, I have prepared a Game of Thrones battle analysis. The data itself is not the most up to date, but I think it will give you good insight into the battles in GOT.

Data

The GOT battle data has 38 observations with 25 variables.

 

Screen Shot 2017-08-27 at 11.50.36 AM.png

The variables include attacker, defender, family, year and outcome of the battle.

Analysis

1. Does the Size of the Army Decide the Outcome of the Battle?

Screen Shot 2017-08-27 at 11.52.20 AM

attacker size

In this graph there are a blue line and a pink line. The blue line is the smoothing line from a linear regression, while the pink line is the smoothing line from loess. Linear regression is straightforward: you are looking for the straight line that minimizes the residual sum of squares. Loess is a nonparametric smoother that allows for nonlinearity.
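
For reference, the two smoothers can be drawn roughly like this; the column names come from the Kaggle battles data and the plotted variables are my guess at what the chart shows:

library(ggplot2)
battles <- read.csv("battles.csv")
ggplot(battles, aes(x = attacker_size, y = defender_size)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, colour = "blue") +      # linear regression line
  geom_smooth(method = "loess", se = FALSE, colour = "pink")     # loess smoother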

A larger army does not guarantee victory, though. For example, Stannis Baratheon routed Mance Rayder: Mance was marching with around 100,000 wildlings, while Stannis had fewer than 1,500 men.

Since I have already demonstrated these kinds of ggplots, I'll skip the code for the remaining charts.

2. Which King Fought the Most Battles?

 

attacker.jpeg

(pink indicates NA)

We can see that Joffrey/Tommen Baratheon attacked the most, followed by Robb Stark. Joffrey never participated in a battle directly, but his brutal attitude and shocking decisions are unforgettable.

3. How Have the Commanders of the Attacking Kings Performed?

 

Screen Shot 2017-08-27 at 11.42.28 AM

 

It looks like Gregor Clegane fought the most battles and won all of them for Joffrey.

 

4. What Are the Different Types of Battles Fought, and What Are Their Counts?

 

battle type

Pitched battle is the most common battle type, followed by siege.

5. In Which Regions Were the Battles Fought, and Who Were the Attackers?

Screen Shot 2017-08-27 at 11.43.48 AM

According to the data, all the kings fought in the North. Joffrey fought most of his battles in the Riverlands. Joffrey's fights were mainly about defending King's Landing, because the Riverlands sit between everything and everything else.

 

6. Types of Battles and the Attacker Kings

 

Screen Shot 2017-08-27 at 11.44.29 AM

Stannis Baratheon had the largest army of all the kings, so he never needed an ambush. Robb Stark, on the other hand, was slowly building up his forces, so he used ambushes the most. The data shows he ambushed most of the time and grew his army before his bloody death at the Red Wedding.

 

7. Kings and Their Army Strength

 

Screen Shot 2017-08-27 at 11.45.43 AM

 

8. Kings vs. Kings

 

Screen Shot 2017-08-27 at 11.45.10 AM

Joffrey fought against almost all the other kings except the wildlings; he was quite far from the Wall and had no need to cross it. Robb Stark, however, was quite focused: his quest was to take revenge on the heirs of his father's old friend Robert Baratheon, who had his father Ned Stark killed.

 

Things to do next

  1. Work on character death prediction
  2. Battle prediction

[R] Create a Word Cloud with Harry Potter

Nowadays I'm trying to teach myself text analysis. I came across word clouds while exploring text and sentiment analysis. Sentiment analysis is quite tricky and I'm still learning it; I hope I can demonstrate it in the near future, but let's start with word clouds.

Data

I found this Harry Potter dataset here. I installed the harrypotter package using this:

Screen Shot 2017-08-21 at 9.38.51 AM

This package contains the full text of all seven books, so the data is entirely text this time.

 

Word Cloud

Now I will demonstrate how to create a word cloud. It's purely data visualization, so it involves very little statistics (or none at all).

Step 1 Install these packages

Screen Shot 2017-08-21 at 9.42.15 AM

These are the packages you need for creating a word cloud.
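
The setup probably looked roughly like this (the exact package list is my assumption):

install.packages(c("tm", "SnowballC", "wordcloud", "RColorBrewer"))
library(tm)
library(SnowballC)
library(wordcloud)
library(RColorBrewer)
library(harrypotter)   # devtools::install_github("bradleyboehmke/harrypotter")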

Step 2 Create a corpus

Screen Shot 2017-08-21 at 9.45.28 AM

A corpus is a collection of (natural language) text documents. It's usually large and well structured. In this case, I created a corpus from Philosopher's Stone.
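
A minimal sketch, assuming the philosophers_stone object from the harrypotter package (a character vector with one element per chapter):

ps_corpus <- Corpus(VectorSource(philosophers_stone))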

Step 3 Let's convert the corpus to a plain text document

Screen Shot 2017-08-21 at 9.58.57 AM

Step 4 Remove stopwords and other redundant words

Screen Shot 2017-08-21 at 10.00.09 AM

In this way, we can remove a lot of redundant words from the text. Examples of stopwords('english') are:

Screen Shot 2017-08-21 at 10.02.45 AM

Through this process, we trim the text down to the fundamental words we need.
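
The conversion and cleaning in Steps 3 and 4 probably looked something like this; the exact set of transformations is my assumption:

ps_clean <- tm_map(ps_corpus, content_transformer(tolower))
ps_clean <- tm_map(ps_clean, removePunctuation)
ps_clean <- tm_map(ps_clean, removeNumbers)
ps_clean <- tm_map(ps_clean, removeWords, stopwords("english"))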

Step 5 Create a new corpus from the polished text and perform stemming.

Screen Shot 2017-08-21 at 10.04.25 AM

As I wrote in the comment, stemming reduces words to their most basic form.
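
A one-line sketch of the stemming step (stemDocument is provided by tm via the SnowballC package loaded above):

ps_stemmed <- tm_map(ps_clean, stemDocument)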

Step 6 Lastly, create the word cloud

Screen Shot 2017-08-21 at 10.06.08 AM

max.words controls the maximum number of words shown in the cloud. Adding colors makes the word cloud look prettier.
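
A sketch of the word cloud call; building the term-document matrix explicitly is my choice (wordcloud can also be fed a corpus directly), and the color palette is an assumption:

tdm <- TermDocumentMatrix(ps_stemmed)
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
wordcloud(names(freqs), freqs, max.words = 100, random.order = FALSE,
          colors = brewer.pal(8, "Dark2"))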

Result

philospher

<Philosopher's Stone>

Not surprisingly, Harry is mentioned the most. It looks like J.K. Rowling likes to use 'said' and 'look'. Ron and Hagrid appear slightly more often than Hermione and Dumbledore.

chamber

 

<Chamber of Secrets>

In this book, Ron is more prominent than in the previous one. Compared to the last one, Malfoy is also relatively more prominent. In Chamber of Secrets, looking directly into the basilisk's eyes caused death, so 'eyes' is also one of the most frequent words here. If you look closely, you can also see Lockhart.

 

prisoner

<Prisoner of Azkaban>

Now we can see Snape appearing in a non-green color, which suggests that Snape becomes a more important figure as the series goes on. As you may remember, this was the first novel involving Sirius Black, and you can see his name here. And Hagrid is back again.

goblet

<Goblet of Fire>

In Goblet of Fire, we can see that Dumbledore's prominence increases. Since Harry spends a lot of time with the Weasley family in this book, we can see Weasley here as well.

 

order phoenix

<Order of the Phoenix>

Not so different from the previous one, but we can notice that Umbridge appears here.

half blood

<Half Blood Prince>

In this book, Dumbledore appears more than Ron and Hermione. If you know the plot, it's reasonable that Dumbledore appears more than the pair. We can also see Slughorn, which makes this one distinctive from the other books. Malfoy and Snape are back again in non-green colors.

deathly

<Deathly Hallows>

In this novel, the main thread is the risky and important adventure with Ron and Hermione, so it's no wonder they are the largest words besides 'Harry' and 'said'. Since this book reveals that there is a legendary wand, 'wand' appears almost as prominently as Dumbledore. Among the verbs you can see 'think', which may imply there are more internal conversations within the characters. Voldemort didn't appear much in the previous word clouds, but we can see his name in this one.

<Summary>

 

We can see that Harry is truly the center of the series; after all, the title is Harry Potter. To me, it's surprising that Voldemort didn't appear as much as I expected. Since 'said' is the most frequently used verb, much of the novels is built on conversation. Besides Harry, the characters Ron, Hermione and Dumbledore are the most important in the series.

Random Forest with the Pokemon Dataset (plus a Correlation Plot) in R

I was so pleased that so many people read my first statistical blog post, "Which U.S. State Produces the Most Beer?".

So, I decided to write another one. For this blog post, I worked on the Pokemon dataset from Kaggle.

Before talking about random forests, one of the most popular machine learning methods, I will begin with a correlation plot. It's like an appetizer before the main course.

 

Brief Idea of the dataset

Last time, I felt bad that I forgot to give a brief idea of the data. From now on, I will make sure to show what the data looks like in each post.

Here is a snapshot of the head of the dataset. The dataset has 13 variables in total, covering 800 Pokemon.

Screen Shot 2017-08-13 at 3.35.56 PM

  • Number: ID for each Pokemon
  • Name: name of each Pokemon
  • Type 1: each Pokemon has a type; this determines weakness/resistance to attacks
  • Type 2: some Pokemon are dual type and have two
  • Total: sum of all the stats that come after this; a general guide to how strong a Pokemon is
  • HP: hit points, or health; defines how much damage a Pokemon can withstand before fainting
  • Attack: the base modifier for normal attacks (e.g. Scratch, Punch)
  • Defense: the base damage resistance against normal attacks
  • SP Atk: special attack, the base modifier for special attacks (e.g. Fire Blast, Bubble Beam)
  • SP Def: the base damage resistance against special attacks
  • Speed: determines which Pokemon attacks first each round
  • Generation: the game generation the Pokemon was introduced in
  • Legendary: whether the Pokemon is legendary (TRUE/FALSE)

 

Correlation Plot

R has a corrplot package, and it offers high-quality visualization for correlation analysis.

Since correlation analysis is for numerical variables, let's separate the variables into categorical and numerical.
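
A sketch of what this could look like; the column names are how read.csv typically renders the Kaggle Pokemon.csv headers, so treat them as assumptions:

library(corrplot)
pokemon <- read.csv("Pokemon.csv")
poke_num <- pokemon[, c("Total", "HP", "Attack", "Defense", "Sp..Atk", "Sp..Def", "Speed")]
corrplot(cor(poke_num), method = "circle")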

Screen Shot 2017-08-13 at 3.41.31 PM

Screen Shot 2017-08-13 at 3.42.05 PM

Result

 

corrplot

Simple, isn't it? Now when you have a simple statistical report to submit for homework or work, you can use this library and simply call the corrplot function. It looks like the Total variable (the dependent variable) is quite correlated with the rest of the variables. Interestingly, Speed is not very related to Defense, but it is slightly related to Special Defense.

 

Random Forest

I know this concept might be new to many readers of this post, but let me try to explain it as simply as possible.

 

<Image: decision tree from https://www.edureka.co/blog/decision-trees/>

I think many of you have seen this kind of tree before. It is called a decision tree, and if you understand it, you are halfway to understanding random forests. There are two keywords here: random and forest. A random forest is a collection of many decision trees. Instead of relying on a single decision tree, you build many decision trees, say 100 of them. And you know what a collection of trees is called: a forest. For higher accuracy, the process is randomized.

 

Let’s begin!

 

Step 1:  Divide the dataset into training set and test set (for Cross validation).

(Inside the algorithm, each split of each tree also considers only a random selection of "k" features out of the total "m" features, which is where the "random" comes from.) But first, let's split the data.

Screen Shot 2017-08-13 at 3.59.07 PM

In this example, I randomly assigned 70% of the data to the training set, while the rest is assigned to the test set. This hold-out split is the simplest form of validation; the k-fold version described below is what is usually called cross-validation.
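
A minimal sketch of the split (the 70/30 proportion comes from the post; the seed is my assumption):

set.seed(123)
train_idx <- sample(seq_len(nrow(pokemon)), size = 0.7 * nrow(pokemon))
train <- pokemon[train_idx, ]
test  <- pokemon[-train_idx, ]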

Then, what is Cross Validation?

<Image: Cross Validation from https://www.edureka.co/blog/implementation-of-decision-tree>

This image is quite self-explanatory, but let me elaborate. The example is 5-fold cross-validation. The test sets of the different folds do not overlap. Using a measurement metric (e.g. Mean Absolute Error or Root Mean Squared Error), you average the performance measure across the folds. Cross-validation makes the performance estimate less dependent on any single random split.

Step 2:  Build the random forest model

Screen Shot 2017-08-13 at 3.59.42 PM

Screen Shot 2017-08-13 at 3.59.46 PM

Luckily, R is open source, so there are a lot of packages that make people's lives easier. The randomForest package provides the randomForest function, which makes it easy to build a random forest model. After building the model on the training dataset, test the predictions on the test dataset.
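
A sketch of the model fit and prediction; the exact formula, ntree and importance settings are assumptions:

library(randomForest)
rf_model <- randomForest(Total ~ HP + Attack + Defense + Sp..Atk + Sp..Def + Speed + Generation,
                         data = train, ntree = 500, importance = TRUE)
pred <- predict(rf_model, newdata = test)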

 

Step 3: Variable Importance

Screen Shot 2017-08-13 at 4.00.03 PM

Screen Shot 2017-08-13 at 3.23.07 PM

After building the random forest model, you can examine the variable importance for the model. Again, I'm using ggplot to create nicer-looking graphs. We can see that Generation is the least important variable, while Special Defense is the most significant in the model.
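
One way to extract the importance scores and plot them with ggplot (the %IncMSE measure and the plot styling are my choices):

library(ggplot2)
imp <- data.frame(variable = rownames(importance(rf_model)),
                  importance = importance(rf_model)[, "%IncMSE"])
ggplot(imp, aes(x = reorder(variable, importance), y = importance)) +
  geom_col() +
  coord_flip() +
  labs(x = "Variable", y = "% increase in MSE when permuted")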

Step 4: Examine how the model is performing

Screen Shot 2017-08-13 at 4.18.55 PM

Screen Shot 2017-08-13 at 4.21.10 PM

I'm using two measurements here: R-squared and MSE. R-squared indicates how close the data are to the fitted regression line, and MSE is the mean of the squared differences between the predicted and actual values. In short, a higher R-squared and a lower MSE make for a better model.
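
On the test set, these can be computed along these lines (a sketch; the response column name Total is assumed from the data description):

mse <- mean((test$Total - pred)^2)
rsq <- 1 - sum((test$Total - pred)^2) / sum((test$Total - mean(test$Total))^2)
mse
rsq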

 

Screen Shot 2017-08-13 at 4.22.15 PM

As a result, R-squared is 0.93 and MSE is 994.81. This means that the model explains 93% of the variability of the response around its mean.

I hope you guys enjoyed reading this!

Data Source: https://www.kaggle.com/abcsds/pokemon

Which U.S. State Produces the Most Beer? / R maps with the ggplot package

Well, I was trying not to talk about what I do for a living on this blog, as I initially wanted it to be a space outside of my work. But loving statistics even outside of work hours is part of who I am, so I decided to write my first statistics blog post.

Today I finished the work I was supposed to do a little earlier than expected, so I had more than an hour of free time. At the same time, I was craving a pint of beer, but I did not want to risk holding a can of beer in the workplace. What I did to combat this thirst was to look into the craft beer dataset on Kaggle. I knew the dataset had been there for more than six months; I had been quite interested in it but had not gotten around to analyzing it. Today I thought, "why not?", and dug into it.

As a fan of art museums, I find visualization one of my favorite parts of R programming. You know, it's human nature to be attracted to a beautiful facade.

Based on the craft beer data, I would like to show how to make pretty maps using ggplot package.

 

Step 1: Read the two datasets (you can find the data in the source link at the bottom). Get the 'maps' and 'ggplot2' packages.

Screen Shot 2017-08-10 at 8.33.16 PM
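
A sketch of the first step, using the file names from the Kaggle craft-cans dataset (assumed):

library(maps)
library(ggplot2)
beers <- read.csv("beers.csv")
breweries <- read.csv("breweries.csv")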

Step 2: The name of the first column of the breweries dataset is just 'X'. Let's change the column name to "brewery_id", just like in the beers dataset. This is how you can change a column name.

Screen Shot 2017-08-10 at 8.35.30 PM

Since "brewery_id" is now a common variable, we can merge the two datasets into one using the merge function. In this case, we are merging by brewery id.
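
Roughly like this (the object names are my assumptions; beer.data matches the name used later in the post):

colnames(breweries)[1] <- "brewery_id"
beer.data <- merge(beers, breweries, by = "brewery_id")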

Step 3: For this example, I would like to show a map for each abv level. In other words, I will show the frequency of each abv level on the U.S. map. Generally, the average abv for beer is about 4.5% (from: https://www.livescience.com/32735-how-much-alcohol-is-in-my-drink.html).

Screen Shot 2017-08-10 at 8.39.17 PM

To see the distribution, a histogram is best. This is how to draw one using the ggplot package: beer.data$abv is the variable whose distribution we want to see, and col = "blue" sets the line color.

Screen Shot 2017-08-10 at 8.50.14 PM
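
A sketch of the histogram call (the fill color and bin count are assumptions):

ggplot(beer.data, aes(x = abv)) +
  geom_histogram(col = "blue", fill = "lightblue", bins = 30)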

beer hist

The largest number of observations is around 0.045. Based on this, I classified abv lower than 4.5% as "low". Most of the data points are concentrated between 4.5% and 6%, so I defined that range as "medium". There are quite a few data points over 6%, so I defined 6%-8% as "high". For abv higher than 8%, I used "very high".
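
One way to encode these four categories (the original code may have used a chain of ifelse calls instead):

beer.data$abv.level <- cut(beer.data$abv,
                           breaks = c(0, 0.045, 0.06, 0.08, 1),
                           labels = c("low", "medium", "high", "very high"))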

Step 4: We want to create a U.S. map, so let's make a frequency table by state. Don't forget to factorize the state variable. R has a built-in vector, state.abb, which does not include "DC". DC is the heart of the U.S. and there are people living there, so let's include it too.

Screen Shot 2017-08-10 at 8.54.56 PM
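
A sketch of the frequency table (the trimws call is just a precaution against stray whitespace in the state codes):

beer.data$state <- factor(trimws(beer.data$state), levels = c(state.abb, "DC"))
freq.table <- as.data.frame(table(beer.data$state, beer.data$abv.level))
names(freq.table) <- c("state", "abv.level", "freq")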

Step 5: The last line in this screenshot loads the map background. The map_data function belongs to the maps package, and the tolower function converts text to lowercase.

Screen Shot 2017-08-10 at 8.59.28 PM

Step 6: Now for a more involved ggplot call with the map. FYI, I set the limits with longitude on x and latitude on y. labs is a great function for setting clean names for the x-axis, the y-axis and the title. We want to see the frequency for each state, so fill = freq.
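
Putting steps 5 and 6 together, here is a sketch of how one of the four maps (the "low" abv level) could be drawn; the join through state.name and the exact axis limits are my assumptions:

states.map <- map_data("state")   # lowercase full state names in the 'region' column
freq.table$region <- tolower(state.name[match(freq.table$state, state.abb)])
freq.table$region[freq.table$state == "DC"] <- "district of columbia"

map.df <- merge(states.map, freq.table[freq.table$abv.level == "low", ], by = "region")
map.df <- map.df[order(map.df$order), ]   # keep the polygon point order intact

ggplot(map.df, aes(x = long, y = lat, group = group, fill = freq)) +
  geom_polygon(colour = "grey") +
  coord_fixed(1.3, xlim = c(-125, -66), ylim = c(24, 50)) +
  labs(x = "Longitude", y = "Latitude", title = "Low abv beers by state")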

Screen Shot 2017-08-10 at 9.02.13 PM

Screen Shot 2017-08-10 at 9.03.07 PM

As a result,

 

low alchohol

medium alcohol

high alcohol

very high

We can see that West Coast people love lower-abv beer. Look at Oregon and California: they clearly produce more of those low-abv beers than the East Coast. From medium to very high, Colorado produces the largest number of beers, but California consistently produces more beers than average. Other notable producers appear to be the four states on Lake Michigan, and Texas. Utah makes a large showing for low-alcohol beers (no wonder). The East Coast looks pretty reserved on this map.

Source: https://www.kaggle.com/nickhould/craft-cans