
Predicting heart disease using machine learning

This notebook looks into using various Python-based machine learning and data science libraries in an attempt to build a machine learning model capable of predicting whether or not someone has heart disease based on their medical attributes.

We're going to take the following approach:

  1. Problem definition

  2. Data

  3. Evaluation

  4. Features

  5. Modelling

  6. Experimentation

1. Problem Definition

In a statement,

Given clinical parameters about a patient, can we predict whether or not they have heart disease?

2. Data

The original data came from the Cleveland database in the UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/heart+Disease

There is also a version of it available on Kaggle. https://www.kaggle.com/datasets/sumaiyatasmeem/heart-disease-classification-dataset

3. Evaluation

If we can reach 95% accuracy at predicting whether or not a patient has heart disease during the proof of concept, we'll pursue the project.

4. Features

This is where you'll get different information about each of the features in your data. You can do this by doing your own research (such as looking at the links above) or by talking to a subject matter expert (someone who knows about the dataset).

Create data dictionary

  1. age - age in years

  2. sex - (1 = male; 0 = female)

  3. cp - chest pain type

    • 0: Typical angina: chest pain related to decreased blood supply to the heart

    • 1: Atypical angina: chest pain not related to heart

    • 2: Non-anginal pain: typically esophageal spasms (non heart related)

    • 3: Asymptomatic: chest pain not showing signs of disease

  4. trestbps - resting blood pressure (in mm Hg on admission to the hospital); anything above 130-140 is typically cause for concern

  5. chol - serum cholesterol in mg/dl

    • serum = LDL + HDL + .2 * triglycerides

    • above 200 is cause for concern

  6. fbs - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)

    • '>126' mg/dL signals diabetes

  7. restecg - resting electrocardiographic results

    • 0: Nothing to note

    • 1: ST-T Wave abnormality

      • can range from mild symptoms to severe problems

      • signals non-normal heart beat

    • 2: Possible or definite left ventricular hypertrophy

      • Enlarged heart's main pumping chamber

  8. thalach - maximum heart rate achieved

  9. exang - exercise induced angina (1 = yes; 0 = no)

  10. oldpeak - ST depression induced by exercise relative to rest; looks at the stress of the heart during exercise (an unhealthy heart will stress more)

  11. slope - the slope of the peak exercise ST segment

    • 0: Upsloping: better heart rate with exercise (uncommon)

    • 1: Flatsloping: minimal change (typical healthy heart)

    • 2: Downsloping: signs of an unhealthy heart

  12. ca - number of major vessels (0-3) colored by fluoroscopy

    • colored vessel means the doctor can see the blood passing through

    • the more blood movement the better (no clots)

  13. thal - thallium stress result

    • 1,3: normal

    • 6: fixed defect: used to be a defect but is okay now

    • 7: reversible defect: no proper blood movement when exercising

  14. target - have disease or not (1=yes, 0=no) (= the predicted attribute)
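As a hedged aside, it can help to keep an abbreviated version of this data dictionary in code so it's easy to reference during EDA. A minimal sketch (the feature_descriptions name is illustrative and the descriptions are shortened from the list above):

# A data dictionary as a plain Python dict (a sketch; descriptions abbreviated from above)
feature_descriptions = {
    "age": "age in years",
    "sex": "1 = male; 0 = female",
    "cp": "chest pain type (0-3)",
    "trestbps": "resting blood pressure (mm Hg on admission)",
    "chol": "serum cholesterol in mg/dl",
    "fbs": "fasting blood sugar > 120 mg/dl (1 = true; 0 = false)",
    "restecg": "resting electrocardiographic results (0-2)",
    "thalach": "maximum heart rate achieved",
    "exang": "exercise induced angina (1 = yes; 0 = no)",
    "oldpeak": "ST depression induced by exercise relative to rest",
    "slope": "slope of the peak exercise ST segment (0-2)",
    "ca": "number of major vessels (0-3) colored by fluoroscopy",
    "thal": "thallium stress result (1,3 = normal; 6 = fixed defect; 7 = reversible defect)",
    "target": "have disease or not (1 = yes; 0 = no)",
}

feature_descriptions["cp"]  # quick lookup while exploring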

Preparing the tools

We're going to use pandas, Matplotlib and NumPy for data analysis and manipulation, and Scikit-Learn for modelling and evaluation.

# Import all the tools we need

# Regular EDA (exploratory data analysis) and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# we want our plots to appear inside the notebook
%matplotlib inline

# Models from Scikit-Learn
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Model Evaluations
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.metrics import plot_roc_curve

Load data

df = pd.read_csv("heart-disease.csv")
df.shape # (rows, columns)
(303, 14)

Data Exploration (exploratory data analysis or EDA)

The goal here is to find out more about the data and become a subject matter expert on the dataset you're working with.

  1. What question(s) are you trying to solve?

  2. What kind of data do we have and how do we treat different types?

  3. What's missing from the data and how do you deal with it?

  4. Where are the outliers and why should you care about them?

  5. How can you add, change or remove features to get more out of your data?

df.head()
df.tail()
# Let's find out how many of each class there are
df["target"].value_counts()
1    165
0    138
Name: target, dtype: int64
df["target"].value_counts().plot(kind="bar", color=["salmon", "lightblue"]);
Image in a Jupyter notebook
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 303 entries, 0 to 302
Data columns (total 14 columns):
age         303 non-null int64
sex         303 non-null int64
cp          303 non-null int64
trestbps    303 non-null int64
chol        303 non-null int64
fbs         303 non-null int64
restecg     303 non-null int64
thalach     303 non-null int64
exang       303 non-null int64
oldpeak     303 non-null float64
slope       303 non-null int64
ca          303 non-null int64
thal        303 non-null int64
target      303 non-null int64
dtypes: float64(1), int64(13)
memory usage: 33.3 KB
# Are there any missing values?
df.isna().sum()
age         0
sex         0
cp          0
trestbps    0
chol        0
fbs         0
restecg     0
thalach     0
exang       0
oldpeak     0
slope       0
ca          0
thal        0
target      0
dtype: int64
df.describe()

Heart Disease Frequency according to Sex

df.sex.value_counts()
1    207
0     96
Name: sex, dtype: int64
# Compare target column with sex column
pd.crosstab(df.target, df.sex)
# Create a plot of crosstab
pd.crosstab(df.target, df.sex).plot(kind="bar",
                                    figsize=(10, 6),
                                    color=["salmon", "lightblue"])

plt.title("Heart Disease Frequency for Sex")
plt.xlabel("0 = No Disease, 1 = Disease")
plt.ylabel("Amount")
plt.legend(["Female", "Male"]);
plt.xticks(rotation=0);
Image in a Jupyter notebook

Age vs. Max Heart Rate for Heart Disease

# Create another figure
plt.figure(figsize=(10, 6))

# Scatter with positive examples
plt.scatter(df.age[df.target==1],
            df.thalach[df.target==1],
            c="salmon")

# Scatter with negative examples
plt.scatter(df.age[df.target==0],
            df.thalach[df.target==0],
            c="lightblue")

# Add some helpful info
plt.title("Heart Disease in function of Age and Max Heart Rate")
plt.xlabel("Age")
plt.ylabel("Max Heart Rate")
plt.legend(["Disease", "No Disease"]);
Image in a Jupyter notebook
# Check the distribution of the age column with a histogram
df.age.plot.hist();
Image in a Jupyter notebook

Heart Disease Frequency per Chest Pain Type

  1. cp - chest pain type

    • 0: Typical angina: chest pain related to decreased blood supply to the heart

    • 1: Atypical angina: chest pain not related to heart

    • 2: Non-anginal pain: typically esophageal spasms (non heart related)

    • 3: Asymptomatic: chest pain not showing signs of disease

pd.crosstab(df.cp, df.target)
# Make the crosstab more visual
pd.crosstab(df.cp, df.target).plot(kind="bar",
                                   figsize=(10, 6),
                                   color=["salmon", "lightblue"])

# Add some communication
plt.title("Heart Disease Frequency Per Chest Pain Type")
plt.xlabel("Chest Pain Type")
plt.ylabel("Amount")
plt.legend(["No Disease", "Disease"])
plt.xticks(rotation=0);
Image in a Jupyter notebook
df.head()
# Make a correlation matrix
df.corr()
# Let's make our correlation matrix a little prettier
corr_matrix = df.corr()
fig, ax = plt.subplots(figsize=(15, 10))
ax = sns.heatmap(corr_matrix,
                 annot=True,
                 linewidths=0.5,
                 fmt=".2f",
                 cmap="YlGnBu");
# Workaround for the Matplotlib 3.1.1 bug that cuts off the top and bottom rows of Seaborn heatmaps
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
(14.0, 0.0)
Image in a Jupyter notebook

5. Modelling

df.head()
# Split data into X and y
X = df.drop("target", axis=1)
y = df["target"]
X
y
0      1
1      1
2      1
3      1
4      1
      ..
298    0
299    0
300    0
301    0
302    0
Name: target, Length: 303, dtype: int64
# Split data into train and test sets
np.random.seed(42)

# Split into train & test set
X_train, X_test, y_train, y_test = train_test_split(X,
                                                    y,
                                                    test_size=0.2)
X_train
y_train, len(y_train)
(132    1
 202    0
 196    0
 75     1
 176    0
       ..
 188    0
 71     1
 106    1
 270    0
 102    1
 Name: target, Length: 242, dtype: int64, 242)

Now we've got our data split into training and test sets, it's time to build a machine learning model.

We'll train it (find the patterns) on the training set.

And we'll test it (use the patterns) on the test set.

We're going to try 3 different machine learning models:

  1. Logistic Regression

  2. K-Nearest Neighbours Classifier

  3. Random Forest Classifier

# Put models in a dictionary
models = {"Logistic Regression": LogisticRegression(),
          "KNN": KNeighborsClassifier(),
          "Random Forest": RandomForestClassifier()}

# Create a function to fit and score models
def fit_and_score(models, X_train, X_test, y_train, y_test):
    """
    Fits and evaluates given machine learning models.
    models : a dict of different Scikit-Learn machine learning models
    X_train : training data (no labels)
    X_test : testing data (no labels)
    y_train : training labels
    y_test : test labels
    """
    # Set random seed
    np.random.seed(42)
    # Make a dictionary to keep model scores
    model_scores = {}
    # Loop through models
    for name, model in models.items():
        # Fit the model to the data
        model.fit(X_train, y_train)
        # Evaluate the model and append its score to model_scores
        model_scores[name] = model.score(X_test, y_test)
    return model_scores
model_scores = fit_and_score(models=models,
                             X_train=X_train,
                             X_test=X_test,
                             y_train=y_train,
                             y_test=y_test)
model_scores
/Users/daniel/Desktop/ml-course/heart-disease-project/env/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html. Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
{'Logistic Regression': 0.8852459016393442,
 'KNN': 0.6885245901639344,
 'Random Forest': 0.8360655737704918}
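Note the ConvergenceWarning from LogisticRegression above. As the warning message itself suggests, two possible remedies are scaling the data or increasing max_iter. A minimal sketch of the scaling route (not part of the original run, so the resulting score may differ slightly):

# Sketch: address the ConvergenceWarning by scaling features first (an assumption, not the original workflow)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

scaled_log_reg = make_pipeline(StandardScaler(),
                               LogisticRegression(max_iter=1000))
scaled_log_reg.fit(X_train, y_train)
scaled_log_reg.score(X_test, y_test)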

Model Comparison

model_compare = pd.DataFrame(model_scores, index=["accuracy"])
model_compare.T.plot.bar();
Image in a Jupyter notebook

Now we've got a baseline model... and we know a model's first predictions aren't always what we should base our next steps off. What should we do?

Let's look at the following:

  • Hyperparameter tuning

  • Feature importance

  • Confusion matrix

  • Cross-validation

  • Precision

  • Recall

  • F1 score

  • Classification report

  • ROC curve

  • Area under the curve (AUC)

Hyperparameter tuning (by hand)

# Let's tune KNN

train_scores = []
test_scores = []

# Create a list of different values for n_neighbors
neighbors = range(1, 21)

# Setup KNN instance
knn = KNeighborsClassifier()

# Loop through different n_neighbors
for i in neighbors:
    knn.set_params(n_neighbors=i)

    # Fit the algorithm
    knn.fit(X_train, y_train)

    # Update the training scores list
    train_scores.append(knn.score(X_train, y_train))

    # Update the test scores list
    test_scores.append(knn.score(X_test, y_test))
train_scores
[1.0, 0.8099173553719008, 0.7727272727272727, 0.743801652892562, 0.7603305785123967, 0.7520661157024794, 0.743801652892562, 0.7231404958677686, 0.71900826446281, 0.6942148760330579, 0.7272727272727273, 0.6983471074380165, 0.6900826446280992, 0.6942148760330579, 0.6859504132231405, 0.6735537190082644, 0.6859504132231405, 0.6652892561983471, 0.6818181818181818, 0.6694214876033058]
test_scores
[0.6229508196721312, 0.639344262295082, 0.6557377049180327, 0.6721311475409836, 0.6885245901639344, 0.7213114754098361, 0.7049180327868853, 0.6885245901639344, 0.6885245901639344, 0.7049180327868853, 0.7540983606557377, 0.7377049180327869, 0.7377049180327869, 0.7377049180327869, 0.6885245901639344, 0.7213114754098361, 0.6885245901639344, 0.6885245901639344, 0.7049180327868853, 0.6557377049180327]
plt.plot(neighbors, train_scores, label="Train score")
plt.plot(neighbors, test_scores, label="Test score")
plt.xticks(np.arange(1, 21, 1))
plt.xlabel("Number of neighbors")
plt.ylabel("Model score")
plt.legend()

print(f"Maximum KNN score on the test data: {max(test_scores)*100:.2f}%")
Maximum KNN score on the test data: 75.41%
Image in a Jupyter notebook

Hyperparameter tuning with RandomizedSearchCV

We're going to tune:

  • LogisticRegression()

  • RandomForestClassifier()

... using RandomizedSearchCV

# Create a hyperparameter grid for LogisticRegression
log_reg_grid = {"C": np.logspace(-4, 4, 20),
                "solver": ["liblinear"]}

# Create a hyperparameter grid for RandomForestClassifier
rf_grid = {"n_estimators": np.arange(10, 1000, 50),
           "max_depth": [None, 3, 5, 10],
           "min_samples_split": np.arange(2, 20, 2),
           "min_samples_leaf": np.arange(1, 20, 2)}

Now we've got hyperparameter grids set up for each of our models, let's tune them using RandomizedSearchCV...

# Tune LogisticRegression
np.random.seed(42)

# Setup random hyperparameter search for LogisticRegression
rs_log_reg = RandomizedSearchCV(LogisticRegression(),
                                param_distributions=log_reg_grid,
                                cv=5,
                                n_iter=20,
                                verbose=True)

# Fit random hyperparameter search model for LogisticRegression
rs_log_reg.fit(X_train, y_train)
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 100 out of 100 | elapsed: 0.6s finished
RandomizedSearchCV(cv=5, error_score=nan, estimator=LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=100, multi_class='auto', n_jobs=None, penalty='l2', random_state=None, solver='lbfgs', tol=0.0001, verbose=0, warm_start=False), iid='deprecated', n_iter=20, n_jobs=None, param_distributions={'C':... 4.83293024e-03, 1.27427499e-02, 3.35981829e-02, 8.85866790e-02, 2.33572147e-01, 6.15848211e-01, 1.62377674e+00, 4.28133240e+00, 1.12883789e+01, 2.97635144e+01, 7.84759970e+01, 2.06913808e+02, 5.45559478e+02, 1.43844989e+03, 3.79269019e+03, 1.00000000e+04]), 'solver': ['liblinear']}, pre_dispatch='2*n_jobs', random_state=None, refit=True, return_train_score=False, scoring=None, verbose=True)
rs_log_reg.best_params_
{'solver': 'liblinear', 'C': 0.23357214690901212}
rs_log_reg.score(X_test, y_test)
0.8852459016393442

Now we've tuned LogisticRegression(), let's do the same for RandomForestClassifier()...

# Setup random seed
np.random.seed(42)

# Setup random hyperparameter search for RandomForestClassifier
rs_rf = RandomizedSearchCV(RandomForestClassifier(),
                           param_distributions=rf_grid,
                           cv=5,
                           n_iter=20,
                           verbose=True)

# Fit random hyperparameter search model for RandomForestClassifier()
rs_rf.fit(X_train, y_train)
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 100 out of 100 | elapsed: 1.1min finished
RandomizedSearchCV(cv=5, error_score=nan, estimator=RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs... param_distributions={'max_depth': [None, 3, 5, 10], 'min_samples_leaf': array([ 1, 3, 5, 7, 9, 11, 13, 15, 17, 19]), 'min_samples_split': array([ 2, 4, 6, 8, 10, 12, 14, 16, 18]), 'n_estimators': array([ 10, 60, 110, 160, 210, 260, 310, 360, 410, 460, 510, 560, 610, 660, 710, 760, 810, 860, 910, 960])}, pre_dispatch='2*n_jobs', random_state=None, refit=True, return_train_score=False, scoring=None, verbose=True)
# Find the best hyperparameters
rs_rf.best_params_
{'n_estimators': 210, 'min_samples_split': 4, 'min_samples_leaf': 19, 'max_depth': 3}
# Evaluate the randomized search RandomForestClassifier model
rs_rf.score(X_test, y_test)
0.8688524590163934

Hyperparameter Tuning with GridSearchCV

Since our LogisticRegression model provides the best scores so far, we'll try and improve them again using GridSearchCV...

# Different hyperparameters for our LogisticRegression model
log_reg_grid = {"C": np.logspace(-4, 4, 30),
                "solver": ["liblinear"]}

# Setup grid hyperparameter search for LogisticRegression
gs_log_reg = GridSearchCV(LogisticRegression(),
                          param_grid=log_reg_grid,
                          cv=5,
                          verbose=True)

# Fit grid hyperparameter search model
gs_log_reg.fit(X_train, y_train);
Fitting 5 folds for each of 30 candidates, totalling 150 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 150 out of 150 | elapsed: 1.1s finished
# Check the best hyperparameters
gs_log_reg.best_params_
{'C': 0.20433597178569418, 'solver': 'liblinear'}
# Evaluate the grid search LogisticRegression model
gs_log_reg.score(X_test, y_test)
0.8852459016393442

Evaluating our tuned machine learning classifier, beyond accuracy

  • ROC curve and AUC score

  • Confusion matrix

  • Classification report

  • Precision

  • Recall

  • F1-score

... and it would be great if cross-validation was used where possible.

To make comparisons and evaluate our trained model, first we need to make predictions.

# Make predictions with tuned model
y_preds = gs_log_reg.predict(X_test)
y_preds
array([0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0])
y_test
179    0
228    0
111    1
246    0
60     1
      ..
249    0
104    1
300    0
193    0
184    0
Name: target, Length: 61, dtype: int64
# Plot ROC curve and calculate AUC metric
plot_roc_curve(gs_log_reg, X_test, y_test)
<sklearn.metrics._plot.roc_curve.RocCurveDisplay at 0x1a1fab0748>
Image in a Jupyter notebook
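Note: plot_roc_curve was removed in Scikit-Learn 1.2. If you're running a newer version, the equivalent call (a drop-in sketch) is:

# Equivalent for Scikit-Learn 1.2+ (where plot_roc_curve no longer exists)
from sklearn.metrics import RocCurveDisplay

RocCurveDisplay.from_estimator(gs_log_reg, X_test, y_test);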
# Confusion matrix
print(confusion_matrix(y_test, y_preds))
[[25  4]
 [ 3 29]]
sns.set(font_scale=1.5)

def plot_conf_mat(y_test, y_preds):
    """
    Plots a nice looking confusion matrix using Seaborn's heatmap()
    """
    fig, ax = plt.subplots(figsize=(3, 3))
    ax = sns.heatmap(confusion_matrix(y_test, y_preds),
                     annot=True,
                     cbar=False)
    plt.xlabel("Predicted label")  # columns of confusion_matrix are predictions
    plt.ylabel("True label")       # rows are the true labels
    # Workaround for the Matplotlib 3.1.1 bug that cuts off heatmap rows
    bottom, top = ax.get_ylim()
    ax.set_ylim(bottom + 0.5, top - 0.5)

plot_conf_mat(y_test, y_preds)
Image in a Jupyter notebook

Now we've got a ROC curve, an AUC metric and a confusion matrix, let's get a classification report as well as cross-validated precision, recall and f1-score.

print(classification_report(y_test, y_preds))
              precision    recall  f1-score   support

           0       0.89      0.86      0.88        29
           1       0.88      0.91      0.89        32

    accuracy                           0.89        61
   macro avg       0.89      0.88      0.88        61
weighted avg       0.89      0.89      0.89        61
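As a quick sanity check, the class 1 numbers in the report can be recomputed by hand from the confusion matrix above (TP = 29, FP = 4, FN = 3):

# Sanity-check precision and recall for class 1 against the confusion matrix
# precision = TP / (TP + FP), recall = TP / (TP + FN)
tp, fp, fn = 29, 4, 3
print(f"precision: {tp / (tp + fp):.2f}")  # 0.88
print(f"recall:    {tp / (tp + fn):.2f}")  # 0.91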

Calculate evaluation metrics using cross-validation

We're going to calculate accuracy, precision, recall and f1-score of our model using cross-validation and to do so we'll be using cross_val_score().

# Check best hyperparameters
gs_log_reg.best_params_
{'C': 0.20433597178569418, 'solver': 'liblinear'}
# Create a new classifier with best parameters
clf = LogisticRegression(C=0.20433597178569418,
                         solver="liblinear")
# Cross-validated accuracy
cv_acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
cv_acc
array([0.81967213, 0.90163934, 0.86885246, 0.88333333, 0.75 ])
cv_acc = np.mean(cv_acc)
cv_acc
0.8446994535519124
# Cross-validated precision
cv_precision = cross_val_score(clf, X, y, cv=5, scoring="precision")
cv_precision = np.mean(cv_precision)
cv_precision
0.8207936507936507
# Cross-validated recall
cv_recall = cross_val_score(clf, X, y, cv=5, scoring="recall")
cv_recall = np.mean(cv_recall)
cv_recall
0.9212121212121213
# Cross-validated f1-score
cv_f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
cv_f1 = np.mean(cv_f1)
cv_f1
0.8673007976269721
# Visualize cross-validated metrics
cv_metrics = pd.DataFrame({"Accuracy": cv_acc,
                           "Precision": cv_precision,
                           "Recall": cv_recall,
                           "F1": cv_f1},
                          index=[0])
cv_metrics.T.plot.bar(title="Cross-validated classification metrics",
                      legend=False);
Image in a Jupyter notebook

Feature Importance

Feature importance is another way of asking, "which features contributed most to the outcomes of the model and how did they contribute?"

Finding feature importance is different for each machine learning model. One way to find feature importance is to search for "(MODEL NAME) feature importance".

Let's find the feature importance for our LogisticRegression model...

# Fit an instance of LogisticRegression
clf = LogisticRegression(C=0.20433597178569418,
                         solver="liblinear")
clf.fit(X_train, y_train);
# Check coef_
clf.coef_
array([[ 0.00316727, -0.86044582, 0.66067073, -0.01156993, -0.00166374, 0.04386131, 0.31275787, 0.02459361, -0.60413038, -0.56862852, 0.45051617, -0.63609863, -0.67663375]])
df.head()
# Match coef's of features to columns
feature_dict = dict(zip(df.columns, list(clf.coef_[0])))
feature_dict
{'age': 0.0031672721856887734,
 'sex': -0.860445816920919,
 'cp': 0.6606707303492849,
 'trestbps': -0.011569930902919925,
 'chol': -0.001663741604035976,
 'fbs': 0.04386130751482091,
 'restecg': 0.3127578715206996,
 'thalach': 0.02459360818122666,
 'exang': -0.6041303799858143,
 'oldpeak': -0.5686285194546157,
 'slope': 0.4505161679452401,
 'ca': -0.6360986316921434,
 'thal': -0.6766337521354281}
# Visualize feature importance
feature_df = pd.DataFrame(feature_dict, index=[0])
feature_df.T.plot.bar(title="Feature Importance", legend=False);
Image in a Jupyter notebook
pd.crosstab(df["sex"], df["target"])
pd.crosstab(df["slope"], df["target"])

slope - the slope of the peak exercise ST segment

  • 0: Upsloping: better heart rate with exercise (uncommon)

  • 1: Flatsloping: minimal change (typical healthy heart)

  • 2: Downsloping: signs of an unhealthy heart
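Since finding feature importance differs between models (as noted above), here's a minimal sketch of the tree-based equivalent using RandomForestClassifier's feature_importances_ attribute (this cell isn't part of the original run; rf, rf_feature_df and random_state=42 are illustrative choices):

# Sketch: feature importance for a tree-based model (not part of the original workflow)
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

rf_feature_df = pd.DataFrame({"importance": rf.feature_importances_},
                             index=X.columns)
rf_feature_df.plot.bar(title="Random Forest Feature Importance", legend=False);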

6. Experimentation

If you haven't hit your evaluation metric yet... ask yourself...

  • Could you collect more data?

  • Could you try a better model? Like CatBoost or XGBoost?

  • Could you improve the current models? (beyond what we've done so far)

  • If your model is good enough (you have hit your evaluation metric) how would you export it and share it with others?
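On that last point, a minimal sketch of exporting the tuned model with joblib (the filename is an assumption; pickle works similarly):

# Sketch: save and reload the tuned model (filename is illustrative)
from joblib import dump, load

dump(gs_log_reg, "gs_logistic_regression_model.joblib")

loaded_model = load("gs_logistic_regression_model.joblib")
loaded_model.score(X_test, y_test)  # should match the score above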