📚 The CoCalc Library - books, templates and other resources
License: OTHER
This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Kaggle Machine Learning Competition: Predicting Titanic Survivors
Competition Site
Description
Evaluation
Data Set
Setup Imports and Variables
Explore the Data
Feature: Passenger Classes
Feature: Sex
Feature: Embarked
Feature: Age
Feature: Family Size
Final Data Preparation for Machine Learning
Data Wrangling Summary
Random Forest: Training
Random Forest: Predicting
Random Forest: Prepare for Kaggle Submission
Support Vector Machine: Training
Support Vector Machine: Predicting
Competition Site
Description, Evaluation, and Data Set taken from the competition site.
Description
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
Evaluation
The historical data has been split into two groups, a 'training set' and a 'test set'. For the training set, we provide the outcome ( 'ground truth' ) for each passenger. You will use this set to build your model to generate predictions for the test set.
For each passenger in the test set, you must predict whether or not they survived the sinking ( 0 for deceased, 1 for survived ). Your score is the percentage of passengers you correctly predict.
The Kaggle leaderboard has a public and private component. 50% of your predictions for the test set have been randomly assigned to the public leaderboard ( the same 50% for all users ). Your score on this public portion is what will appear on the leaderboard. At the end of the contest, we will reveal your score on the private 50% of the data, which will determine the final winner. This method prevents users from 'overfitting' to the leaderboard.
Data Set
File Name | Available Formats |
---|---|
train | .csv (59.76 kb) |
gendermodel | .csv (3.18 kb) |
genderclassmodel | .csv (3.18 kb) |
test | .csv (27.96 kb) |
gendermodel | .py (3.58 kb) |
genderclassmodel | .py (5.63 kb) |
myfirstforest | .py (3.99 kb) |
VARIABLE DESCRIPTIONS:

Variable | Description |
---|---|
survival | Survival (0 = No; 1 = Yes) |
pclass | Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd) |
name | Name |
sex | Sex |
age | Age |
sibsp | Number of Siblings/Spouses Aboard |
parch | Number of Parents/Children Aboard |
ticket | Ticket Number |
fare | Passenger Fare |
cabin | Cabin |
embarked | Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton) |

SPECIAL NOTES:

Pclass is a proxy for socio-economic status (SES): 1st ~ Upper; 2nd ~ Middle; 3rd ~ Lower.

Age is in years; fractional if the age is less than one (1). If the age is estimated, it is in the form xx.5.

With respect to the family relation variables (i.e. sibsp and parch), some relations were ignored. The following definitions were used for sibsp and parch:

Sibling: Brother, Sister, Stepbrother, or Stepsister of Passenger Aboard Titanic
Spouse: Husband or Wife of Passenger Aboard Titanic (Mistresses and Fiances Ignored)
Parent: Mother or Father of Passenger Aboard Titanic
Child: Son, Daughter, Stepson, or Stepdaughter of Passenger Aboard Titanic

Other family relatives excluded from this study include cousins, nephews/nieces, aunts/uncles, and in-laws. Some children travelled only with a nanny, therefore parch=0 for them. Some also travelled with very close friends or neighbors from their village; however, the definitions do not support such relations.
Setup Imports and Variables
Explore the Data
Read the data:
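A minimal sketch of the read step. The notebook loads Kaggle's train.csv; here we read an inline sample with the same columns so the sketch is self-contained (the file path and sample rows are illustrative, not the actual data set):

```python
import io
import pandas as pd

# In the notebook this would be pd.read_csv('train.csv'); here we read an
# inline two-row sample with the same column layout.
sample_csv = io.StringIO(
    "PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked\n"
    "1,0,3,\"Braund, Mr. Owen Harris\",male,22,1,0,A/5 21171,7.25,,S\n"
    "2,1,1,\"Cumings, Mrs. John Bradley\",female,38,1,0,PC 17599,71.2833,C85,C\n"
)
df_train = pd.read_csv(sample_csv)
print(df_train.shape)
```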
View the data types of each column:
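A small sketch of the dtype check, using a hypothetical frame with the same mix of numeric and string columns as the Titanic data:

```python
import pandas as pd

# Illustrative frame: Pclass and Age are numeric, Sex is a string column.
df = pd.DataFrame({'Pclass': [3, 1], 'Sex': ['male', 'female'], 'Age': [22.0, 38.0]})
print(df.dtypes)  # Sex shows as dtype 'object', i.e. a string column
```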
In pandas, dtype 'object' typically indicates a string, which poses problems for machine learning algorithms. If we want to use these columns as features, we'll need to convert them to number representations.
Get some basic information on the DataFrame:
Age, Cabin, and Embarked have missing values. Cabin has too many missing values to be useful, whereas we might be able to infer values for Age and Embarked.
Generate various descriptive statistics on the DataFrame:
Now that we have a general idea of the data set contents, we can dive deeper into each column. We'll be doing exploratory data analysis and cleaning data to set up the 'features' we'll be using in our machine learning algorithms.
Plot a few features to get a better idea of each:
Next we'll explore various features to view their impact on survival rates.
Feature: Passenger Classes
From our exploratory data analysis in the previous section, we see there are three passenger classes: First, Second, and Third class. We'll determine which proportion of passengers survived based on their passenger class.
Generate a cross tab of Pclass and Survived:
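The cross tab can be sketched with pd.crosstab on a hypothetical mini data set (the notebook uses the full training DataFrame):

```python
import pandas as pd

# Illustrative rows only: counts of survivors per passenger class.
df = pd.DataFrame({'Pclass': [1, 1, 2, 3, 3, 3],
                   'Survived': [1, 1, 1, 0, 0, 1]})
pclass_xt = pd.crosstab(df['Pclass'], df['Survived'])
print(pclass_xt)
```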
Plot the cross tab:
We can see that passenger class seems to have a significant impact on whether a passenger survived. Those in First Class had the highest chance of survival.
Feature: Sex
Gender might have also played a role in determining a passenger's survival rate. We'll need to map Sex from a string to a number to prepare it for machine learning algorithms.
Generate a mapping of Sex from a string to a number representation:
Transform Sex from a string to a number representation:
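The mapping and transform steps above can be sketched as follows, building the mapping from the sorted unique values so the encoding is deterministic (a small illustrative frame stands in for the training data):

```python
import pandas as pd

df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male']})
# Build the mapping from sorted unique values: {'female': 0, 'male': 1}.
sexes = sorted(df['Sex'].unique())
genders_mapping = dict(zip(sexes, range(len(sexes))))
# Apply the mapping to create the numeric Sex_Val column.
df['Sex_Val'] = df['Sex'].map(genders_mapping).astype(int)
print(genders_mapping)
```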
Plot a normalized cross tab for Sex_Val and Survived:
The majority of females survived, whereas the majority of males did not.
Next we'll determine whether we can gain any insights on survival rate by looking at both Sex and Pclass.
Count males and females in each Pclass:
Plot survival rate by Sex and Pclass:
The vast majority of females in First and Second class survived. Males in First class had the highest chance for survival.
Feature: Embarked
The Embarked column might be an important feature, but it is missing a couple of data points, which might pose a problem for machine learning algorithms:
Prepare to map Embarked from a string to a number representation:
Transform Embarked from a string to a number representation to prepare it for machine learning algorithms:
Plot the histogram for Embarked_Val:
Since the vast majority of passengers embarked at 'S' (mapped to 3), we assign the missing values in Embarked to 'S':
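A runnable sketch of the fill-and-map step, assuming the 'S': 3 encoding described above (the six-row frame is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Embarked': ['S', 'C', np.nan, 'S', 'Q', 'S']})
# Fill missing ports with the most common value, 'S', then map to integers.
df['Embarked'] = df['Embarked'].fillna('S')
embarked_mapping = {'C': 1, 'Q': 2, 'S': 3}  # matches the 'S': 3 encoding above
df['Embarked_Val'] = df['Embarked'].map(embarked_mapping).astype(int)
print(df['Embarked_Val'].tolist())
```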
Verify we do not have any more NaNs for Embarked_Val:
Plot a normalized cross tab for Embarked_Val and Survived:
It appears those that embarked at location 'C' (mapped to 1) had the highest rate of survival. We'll dig in some more to see why this might be the case. Below we plot graphs to determine the gender and passenger class makeup of each port:
Leaving Embarked as integers implies ordering in the values, which does not exist. Another way to represent Embarked without ordering is to create dummy variables:
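One common way to do this is pd.get_dummies, which creates one 0/1 indicator column per port so no ordering is implied (sketched on a hypothetical frame):

```python
import pandas as pd

df = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})
# One indicator column per port; the integer codes' ordering disappears.
dummies = pd.get_dummies(df['Embarked'], prefix='Embarked')
df = pd.concat([df, dummies], axis=1)
print(sorted(dummies.columns))
```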
Feature: Age
The Age column seems like an important feature--unfortunately it is missing many values. We'll need to fill in the missing values like we did with Embarked.
Filter to view missing Age values:
Determine the Age typical for each passenger class by Sex_Val. We'll use the median instead of the mean because the Age histogram seems to be right skewed.
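An equivalent groupby/transform sketch of the fill: missing ages are replaced with the median age of the passenger's (Sex_Val, Pclass) group. The six-row frame is illustrative; the notebook applies this to the full training set:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Sex_Val': [0, 0, 1, 1, 0, 1],
                   'Pclass': [1, 1, 3, 3, 1, 3],
                   'Age': [30.0, np.nan, 22.0, np.nan, 40.0, 24.0]})
# Fill missing ages with the median age for each (Sex_Val, Pclass) group.
df['AgeFill'] = df.groupby(['Sex_Val', 'Pclass'])['Age'] \
                  .transform(lambda s: s.fillna(s.median()))
print(df['AgeFill'].tolist())
```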
Ensure AgeFill does not contain any missing values:
Plot a normalized cross tab for AgeFill and Survived:
Unfortunately, the graphs above do not seem to clearly show any insights. We'll keep digging further.
Plot AgeFill density by Pclass:
When looking at AgeFill density by Pclass, we see the first class passengers were generally older than second class passengers, who in turn were older than third class passengers. We've determined that first class passengers had a higher survival rate than second class passengers, who in turn had a higher survival rate than third class passengers.
In the first graph, we see that most survivors come from the 20's to 30's age ranges, which might be explained by the following two graphs. The second graph shows most females are within their 20's. The third graph shows most first class passengers are within their 30's.
Feature: Family Size
Feature engineering involves creating new features or modifying existing ones that might be advantageous to a machine learning algorithm.
Define a new feature FamilySize that is the sum of Parch (number of parents or children on board) and SibSp (number of siblings or spouses):
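The new feature is a simple column sum (sketched on an illustrative frame):

```python
import pandas as pd

df = pd.DataFrame({'SibSp': [1, 0, 3], 'Parch': [0, 2, 1]})
# FamilySize counts all siblings/spouses plus parents/children aboard.
df['FamilySize'] = df['SibSp'] + df['Parch']
print(df['FamilySize'].tolist())
```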
Plot a histogram of FamilySize:
Plot a histogram of AgeFill segmented by Survived:
Based on the histograms, it is not immediately obvious what impact FamilySize has on survival. The machine learning algorithms might benefit from this feature.
Additional features we might want to engineer could be related to the Name column; for example, honorary titles might give clues and better predictive power for a male's survival.
Final Data Preparation for Machine Learning
Many machine learning algorithms do not work on strings, and they usually require the data to be in an array, not a DataFrame.
Show only the columns of type 'object' (strings):
Drop the columns we won't use:
Drop the following columns:
The Age column since we will be using the AgeFill column instead.
The SibSp and Parch columns since we will be using FamilySize instead.
The PassengerId column since it won't be used as a feature.
The Embarked_Val column since we decided to use dummy variables instead.
Convert the DataFrame to a numpy array:
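The drop and conversion steps above can be sketched together; the column values here are illustrative placeholders:

```python
import pandas as pd

df = pd.DataFrame({'Survived': [0, 1], 'Pclass': [3, 1], 'Sex_Val': [1, 0],
                   'AgeFill': [22.0, 38.0], 'FamilySize': [1, 1],
                   'Age': [22.0, 38.0], 'SibSp': [1, 1], 'Parch': [0, 0],
                   'PassengerId': [1, 2], 'Embarked_Val': [3, 1]})
# Drop the superseded and unused columns listed above...
df = df.drop(['Age', 'SibSp', 'Parch', 'PassengerId', 'Embarked_Val'], axis=1)
# ...then convert the remaining DataFrame to a numpy array.
train_data = df.values
print(train_data.shape)
```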
Data Wrangling Summary
Below is a summary of the data wrangling we performed on our training data set. We encapsulate this in a function since we'll need to do the same operations to our test set later.
Random Forest: Training
Create the random forest object:
Fit the training data and create the decision trees:
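A minimal sketch of creating and fitting the forest with scikit-learn's RandomForestClassifier. The synthetic features and labels below stand in for the wrangled training array (column 0 as Survived, the rest as features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the training array: 100 rows, 4 features,
# with the label derived from the first feature.
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)

# Create the random forest object, then fit to build the decision trees.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf = clf.fit(X, y)
print(clf.score(X, y))
```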
Random Forest: Predicting
Read the test data:
Note the test data does not contain the column 'Survived'; we'll use our trained model to predict these values.
Take the decision trees and run them on the test data:
Random Forest: Prepare for Kaggle Submission
Create a DataFrame by combining the index from the test data with the output of predictions, then write the results to the output:
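A sketch of the submission step with hypothetical ids and predictions; the notebook uses PassengerId from test.csv and the random forest's output, and writes to a CSV file (here we write to an in-memory buffer to keep the sketch side-effect free):

```python
import io
import pandas as pd

# Hypothetical test ids and predictions.
passenger_ids = [892, 893, 894]
predictions = [0, 1, 0]
submission = pd.DataFrame({'PassengerId': passenger_ids,
                           'Survived': predictions})
# The notebook writes submission.to_csv('results-rf.csv', index=False).
buf = io.StringIO()
submission.to_csv(buf, index=False)
print(buf.getvalue().splitlines()[0])
```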
Evaluate Model Accuracy
Submitting to Kaggle will give you an accuracy score. It would be helpful to get an idea of accuracy without submitting to Kaggle.
We'll split our training data, 80% will go to "train" and 20% will go to "test":
Use the new training data to fit the model, predict, and get the accuracy score:
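The 80/20 split and accuracy check can be sketched with scikit-learn's train_test_split and accuracy_score (synthetic data stands in for the training array):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the wrangled training data.
rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = (X[:, 0] > 0.5).astype(int)

# 80% for training, 20% held out for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(acc)
```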
View the Confusion Matrix:
| Condition True | Condition False |
---|---|---|
Prediction True | True Positive | False Positive |
Prediction False | False Negative | True Negative |
Get the model score and confusion matrix:
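The confusion matrix step can be sketched with scikit-learn's confusion_matrix on hypothetical labels; note scikit-learn orders rows/columns by label, so with labels [0, 1] the layout is [[TN, FP], [FN, TP]]:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and predictions.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
# Rows are actual classes, columns are predicted classes: [[TN, FP], [FN, TP]].
cm = confusion_matrix(y_true, y_pred)
print(cm)
```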
Display the classification report:
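The report itself comes from scikit-learn's classification_report, which summarizes precision, recall, and F1 per class (same hypothetical labels as the confusion matrix sketch):

```python
from sklearn.metrics import classification_report

# Hypothetical true labels and predictions.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
report = classification_report(y_true, y_pred)
print(report)
```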