
A Simple Scikit-Learn Classification Workflow

This notebook shows a brief workflow you might use with scikit-learn to build a machine learning model to classify whether or not a patient has heart disease.

It follows six main steps: getting the data ready, choosing a model/estimator, fitting the model, evaluating it, experimenting to improve it, and saving it for later use.

Note: This workflow assumes your data is ready to be used with machine learning models (is numerical, has no missing values).

# Standard imports
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

1. Get the data ready

# Import dataset
heart_disease = pd.read_csv("../data/heart-disease.csv")

# View the data
heart_disease.head()
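Since the workflow assumes numerical data with no missing values, a quick sanity check (not part of the original notebook, just standard pandas) can confirm this before modelling:

# Check for missing values (should be 0 for every column)
print(heart_disease.isna().sum())

# Confirm every column is numeric (int64/float64)
print(heart_disease.dtypes)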

In this example, we're going to use all of the columns except the target column to predict the target column.

In other words, we'll use a patient's medical and demographic data to predict whether or not they have heart disease.

# Create X (all the feature columns)
X = heart_disease.drop("target", axis=1)

# Create y (the target column)
y = heart_disease["target"]
# Split the data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)

# View the data shapes
X_train.shape, X_test.shape, y_train.shape, y_test.shape
((227, 13), (76, 13), (227,), (76,))
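By default, train_test_split holds out 25% of the data for testing and shuffles randomly, so the exact split above will differ between runs. If you want a fixed split size and reproducible results, you could pass test_size and random_state explicitly (the values below are illustrative, not from the original notebook):

# An explicit, reproducible split (test_size and random_state are example values)
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.2,
                                                    random_state=42)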

2. Choose the model/estimator

You can do this using the Scikit-Learn machine learning map.

In Scikit-Learn, machine learning models are referred to as estimators.

In this case, since we're working on a classification problem, we've chosen the RandomForestClassifier estimator, which is part of the sklearn.ensemble module.

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()

3. Fit the model to the data and use it to make a prediction

A model will (attempt to) learn the patterns in a dataset by calling the fit() method on it and passing it the data, in this case the training features (X_train) and training labels (y_train).

model.fit(X_train, y_train)
/Users/daniel/Desktop/ml-course/work-in-progress/env/lib/python3.7/site-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22. "10 in version 0.20 to 100 in 0.22.", FutureWarning)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False)
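The FutureWarning above appears because this notebook was run with an older version of Scikit-Learn, where n_estimators defaulted to 10. As the warning itself says, the default becomes 100 in version 0.22, so one way to silence it would be to set the value explicitly when creating the model:

# Setting n_estimators explicitly avoids the FutureWarning
# (100 is the default from Scikit-Learn 0.22 onwards)
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)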

Once a model has learned patterns in data, you can use them to make a prediction with the predict() function.

# Make predictions
y_preds = model.predict(X_test)
# This will be in the same format as y_test
y_preds
array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0])
X_test.loc[206]
age          59.0
sex           1.0
cp            0.0
trestbps    110.0
chol        239.0
fbs           0.0
restecg       0.0
thalach     142.0
exang         1.0
oldpeak       1.2
slope         1.0
ca            1.0
thal          3.0
Name: 206, dtype: float64
heart_disease.loc[206]
age          59.0
sex           1.0
cp            0.0
trestbps    110.0
chol        239.0
fbs           0.0
restecg       0.0
thalach     142.0
exang         1.0
oldpeak       1.2
slope         1.0
ca            1.0
thal          3.0
target        0.0
Name: 206, dtype: float64
# Make a prediction on a single sample (has to be a 2D array)
model.predict(np.array(X_test.loc[206]).reshape(1, -1))
array([0])
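As an alternative to the reshape above (a sketch, not shown in the original notebook), selecting the row with a list of labels keeps it as a one-row DataFrame, which predict() also accepts:

# .loc[[206]] returns a one-row DataFrame rather than a Series,
# so it can be passed to predict() directly
model.predict(X_test.loc[[206]])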

4. Evaluate the model

A trained model/estimator can be evaluated by calling the score() method and passing it features and labels. For classifiers, score() returns the mean accuracy on the given data.

# On the training set
model.score(X_train, y_train)
0.9911894273127754
# On the test set (unseen)
model.score(X_test, y_test)
0.75
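Accuracy isn't the only way to evaluate a classifier. As a short sketch (not in the original notebook), Scikit-Learn's metrics module gives more detailed views of the predictions made above:

from sklearn.metrics import classification_report, confusion_matrix

# Precision, recall and F1-score per class
print(classification_report(y_test, y_preds))

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_preds))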

5. Experiment to improve (hyperparameter tuning)

A model's first evaluation metrics aren't always its last. One way to improve a model's predictions is with hyperparameter tuning.

# Try different numbers of estimators (n_estimators is a hyperparameter you can change)
np.random.seed(42)
for i in range(10, 100, 10):
    print(f"Trying model with {i} estimators...")
    model = RandomForestClassifier(n_estimators=i).fit(X_train, y_train)
    print(f"Model accuracy on test set: {model.score(X_test, y_test)}")
    print("")
Trying model with 10 estimators...
Model accuracy on test set: 0.7236842105263158

Trying model with 20 estimators...
Model accuracy on test set: 0.7368421052631579

Trying model with 30 estimators...
Model accuracy on test set: 0.7368421052631579

Trying model with 40 estimators...
Model accuracy on test set: 0.7368421052631579

Trying model with 50 estimators...
Model accuracy on test set: 0.6973684210526315

Trying model with 60 estimators...
Model accuracy on test set: 0.7631578947368421

Trying model with 70 estimators...
Model accuracy on test set: 0.7631578947368421

Trying model with 80 estimators...
Model accuracy on test set: 0.7631578947368421

Trying model with 90 estimators...
Model accuracy on test set: 0.75

Note: It's best practice to test different hyperparameters with a validation set or cross-validation.

from sklearn.model_selection import cross_val_score

# Try different numbers of estimators with cross-validation and no cross-validation
np.random.seed(42)
for i in range(10, 100, 10):
    print(f"Trying model with {i} estimators...")
    model = RandomForestClassifier(n_estimators=i).fit(X_train, y_train)
    print(f"Model accuracy on test set: {model.score(X_test, y_test)}")
    print(f"Cross-validation score: {np.mean(cross_val_score(model, X, y, cv=5)) * 100}%")
    print("")
Trying model with 10 estimators...
Model accuracy on test set: 0.7236842105263158
Cross-validation score: 78.53551912568305%

Trying model with 20 estimators...
Model accuracy on test set: 0.75
Cross-validation score: 79.84699453551912%

Trying model with 30 estimators...
Model accuracy on test set: 0.7763157894736842
Cross-validation score: 80.50819672131148%

Trying model with 40 estimators...
Model accuracy on test set: 0.7631578947368421
Cross-validation score: 82.15300546448088%

Trying model with 50 estimators...
Model accuracy on test set: 0.7368421052631579
Cross-validation score: 81.1639344262295%

Trying model with 60 estimators...
Model accuracy on test set: 0.7631578947368421
Cross-validation score: 83.47540983606557%

Trying model with 70 estimators...
Model accuracy on test set: 0.7236842105263158
Cross-validation score: 81.83060109289617%

Trying model with 80 estimators...
Model accuracy on test set: 0.7631578947368421
Cross-validation score: 82.81420765027322%

Trying model with 90 estimators...
Model accuracy on test set: 0.75
Cross-validation score: 82.81967213114754%
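Rather than looping by hand, Scikit-Learn can search hyperparameter values for you. Here's a sketch of the same experiment using GridSearchCV (standard Scikit-Learn, but this cell isn't part of the original notebook):

from sklearn.model_selection import GridSearchCV

# Search the same n_estimators values with 5-fold cross-validation
param_grid = {"n_estimators": range(10, 100, 10)}
grid = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
grid.fit(X, y)

# Best hyperparameter value found and its mean cross-validation score
grid.best_params_, grid.best_score_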

6. Save a model for later use

A trained model can be exported and saved so it can be imported and used later. One way to save a model is using Python's pickle module.

import pickle

# Save trained model to file
pickle.dump(model, open("random_forest_model_1.pkl", "wb"))
# Load a saved model and make a prediction on a single example
loaded_model = pickle.load(open("random_forest_model_1.pkl", "rb"))
loaded_model.predict(np.array(X_test.loc[206]).reshape(1, -1))
array([0])
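Pickle isn't the only option. The Scikit-Learn documentation suggests joblib for models that carry large NumPy arrays (like a random forest). A minimal sketch, assuming joblib is installed:

from joblib import dump, load

# Save and reload the same model with joblib
dump(model, "random_forest_model_1.joblib")
loaded_joblib_model = load("random_forest_model_1.joblib")
loaded_joblib_model.score(X_test, y_test)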