Predicting heart disease using machine learning
This notebook looks into using various Python-based machine learning and data science libraries in an attempt to build a machine learning model capable of predicting whether or not someone has heart disease based on their medical attributes.
We're going to take the following approach:
Problem definition
Data
Evaluation
Features
Modelling
Experimentation
1. Problem Definition
In a statement,
Given clinical parameters about a patient, can we predict whether or not they have heart disease?
2. Data
The original data came from the Cleveland database in the UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/heart+Disease
There is also a version of it available on Kaggle. https://www.kaggle.com/datasets/sumaiyatasmeem/heart-disease-classification-dataset
3. Evaluation
If we can reach 95% accuracy at predicting whether or not a patient has heart disease during the proof of concept, we'll pursue the project.
4. Features
This is where you'll get different information about each of the features in your data. You can do this by doing your own research (such as looking at the links above) or by talking to a subject matter expert (someone who knows about the dataset).
Create data dictionary
age - age in years
sex - (1 = male; 0 = female)
cp - chest pain type
0: Typical angina: chest pain related to decreased blood supply to the heart
1: Atypical angina: chest pain not related to heart
2: Non-anginal pain: typically esophageal spasms (non heart related)
3: Asymptomatic: chest pain not showing signs of disease
trestbps - resting blood pressure (in mm Hg on admission to the hospital); anything above 130-140 is typically cause for concern
chol - serum cholesterol in mg/dl
serum = LDL + HDL + 0.2 * triglycerides
above 200 is cause for concern
fbs - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
>126 mg/dL signals diabetes
restecg - resting electrocardiographic results
0: Nothing to note
1: ST-T Wave abnormality
can range from mild symptoms to severe problems
signals non-normal heart beat
2: Possible or definite left ventricular hypertrophy
Enlarged heart's main pumping chamber
thalach - maximum heart rate achieved
exang - exercise induced angina (1 = yes; 0 = no)
oldpeak - ST depression induced by exercise relative to rest; looks at the stress of the heart during exercise (an unhealthy heart will stress more)
slope - the slope of the peak exercise ST segment
0: Upsloping: better heart rate with exercise (uncommon)
1: Flatsloping: minimal change (typical healthy heart)
2: Downsloping: signs of an unhealthy heart
ca - number of major vessels (0-3) colored by fluoroscopy
colored vessel means the doctor can see the blood passing through
the more blood movement the better (no clots)
thal - thallium stress test result
1,3: normal
6: fixed defect: used to be a defect but is okay now
7: reversible defect: no proper blood movement when exercising
target - have disease or not (1=yes, 0=no) (= the predicted attribute)
Preparing the tools
We're going to use pandas, Matplotlib and NumPy for data analysis and manipulation.
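A typical setup cell might look like the following; the exact set of imports is an assumption based on the tools named above and the evaluation steps used later in the notebook.

```python
# Regular EDA (exploratory data analysis) and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Models from Scikit-Learn
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Model selection and evaluation
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import roc_curve, roc_auc_score
```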
Load data
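A minimal loading cell, assuming the Kaggle CSV has been downloaded as heart-disease.csv (the filename is an assumption; adjust the path to wherever your copy lives):

```python
df = pd.read_csv("heart-disease.csv")
df.shape  # (number of rows, number of columns)
```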
Data Exploration (exploratory data analysis or EDA)
The goal here is to find out more about the data and become a subject matter expert on the dataset you're working with. Some questions to keep in mind (a few starter commands follow the list below):
What question(s) are you trying to solve?
What kind of data do we have and how do we treat different types?
What's missing from the data and how do you deal with it?
Where are the outliers and why should you care about them?
How can you add, change or remove features to get more out of your data?
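A few starter commands for answering these questions, assuming the df DataFrame from the loading step:

```python
df.head()                    # first 5 rows
df["target"].value_counts()  # how balanced are the classes?
df.info()                    # dtypes and non-null counts per column
df.isna().sum()              # missing values per column
df.describe()                # summary statistics for numeric columns
```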
Heart Disease Frequency according to Sex
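One way to visualise this is a crosstab of target against sex, plotted as a bar chart (a sketch, assuming df from earlier; the colours and labels are a matter of taste):

```python
# Compare the target column with the sex column
pd.crosstab(df.target, df.sex).plot(kind="bar",
                                    figsize=(10, 6),
                                    color=["salmon", "lightblue"])
plt.title("Heart Disease Frequency for Sex")
plt.xlabel("0 = No Disease, 1 = Disease")
plt.ylabel("Amount")
plt.legend(["Female", "Male"])
plt.xticks(rotation=0);
```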
Age vs. Max Heart Rate for Heart Disease
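A scatter plot works well here, with one colour per target class (again a sketch using df; thalach is the max heart rate column):

```python
plt.figure(figsize=(10, 6))

# Positive examples (has heart disease)
plt.scatter(df.age[df.target == 1], df.thalach[df.target == 1], c="salmon")

# Negative examples (no heart disease)
plt.scatter(df.age[df.target == 0], df.thalach[df.target == 0], c="lightblue")

plt.title("Heart Disease in function of Age and Max Heart Rate")
plt.xlabel("Age")
plt.ylabel("Max Heart Rate")
plt.legend(["Disease", "No Disease"]);
```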
Heart Disease Frequency per Chest Pain Type
cp - chest pain type
0: Typical angina: chest pain related to decreased blood supply to the heart
1: Atypical angina: chest pain not related to heart
2: Non-anginal pain: typically esophageal spasms (non heart related)
3: Asymptomatic: chest pain not showing signs of disease
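The same crosstab-and-bar-chart trick works for chest pain type (a sketch, assuming df from earlier):

```python
pd.crosstab(df.cp, df.target).plot(kind="bar",
                                   figsize=(10, 6),
                                   color=["lightblue", "salmon"])
plt.title("Heart Disease Frequency Per Chest Pain Type")
plt.xlabel("Chest Pain Type")
plt.ylabel("Amount")
plt.legend(["No Disease", "Disease"])
plt.xticks(rotation=0);
```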
5. Modelling
Now we've got our data split into training and test sets, it's time to build a machine learning model.
We'll train it (find the patterns) on the training set.
And we'll test it (use the patterns) on the test set.
We're going to try 3 different machine learning models:
Logistic Regression
K-Nearest Neighbours Classifier
Random Forest Classifier
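The full split-fit-score workflow might look something like this (a sketch; the 80/20 split, the fixed seed, and max_iter=1000 for LogisticRegression are assumptions):

```python
# Split data into features (X) and labels (y)
X = df.drop("target", axis=1)
y = df["target"]

# Split into train and test sets (fixed seed for reproducibility)
np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Put models in a dictionary so we can fit and score them in one loop
models = {"Logistic Regression": LogisticRegression(max_iter=1000),
          "KNN": KNeighborsClassifier(),
          "Random Forest": RandomForestClassifier()}

def fit_and_score(models, X_train, X_test, y_train, y_test):
    """Fits each model on the training set and scores it on the test set."""
    model_scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        model_scores[name] = model.score(X_test, y_test)  # mean accuracy
    return model_scores

model_scores = fit_and_score(models, X_train, X_test, y_train, y_test)
model_scores
```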
Model Comparison
Now we've got a baseline model... and we know a model's first predictions aren't always what we should base our next steps on. What should we do?
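First, a quick visual comparison of the baseline accuracies (assuming the model_scores dictionary from the modelling step):

```python
model_compare = pd.DataFrame(model_scores, index=["accuracy"])
model_compare.T.plot.bar();
```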
Let's look at the following:
Hyperparameter tuning
Feature importance
Confusion matrix
Cross-validation
Precision
Recall
F1 score
Classification report
ROC curve
Area under the curve (AUC)
Hyperparameter tuning (by hand)
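For KNN, tuning by hand can be as simple as looping over different values of n_neighbors and recording the train and test scores (a sketch, assuming the train/test split from earlier):

```python
train_scores = []
test_scores = []

neighbors = range(1, 21)  # try 1 to 20 neighbors
knn = KNeighborsClassifier()

for i in neighbors:
    knn.set_params(n_neighbors=i)
    knn.fit(X_train, y_train)
    train_scores.append(knn.score(X_train, y_train))
    test_scores.append(knn.score(X_test, y_test))

plt.plot(neighbors, train_scores, label="Train score")
plt.plot(neighbors, test_scores, label="Test score")
plt.xticks(np.arange(1, 21, 1))
plt.xlabel("Number of neighbors")
plt.ylabel("Model score")
plt.legend()

print(f"Maximum KNN score on the test data: {max(test_scores) * 100:.2f}%")
```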
Hyperparameter tuning with RandomizedSearchCV
We're going to tune:
LogisticRegression()
RandomForestClassifier()
... using RandomizedSearchCV
Now we've got hyperparameter grids set up for each of our models, let's tune them using RandomizedSearchCV...
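Starting with LogisticRegression (a sketch; the grid values below are illustrative, not the only sensible choices):

```python
# Hyperparameter grid for LogisticRegression
log_reg_grid = {"C": np.logspace(-4, 4, 20),
                "solver": ["liblinear"]}

np.random.seed(42)
rs_log_reg = RandomizedSearchCV(LogisticRegression(),
                                param_distributions=log_reg_grid,
                                cv=5,
                                n_iter=20,
                                verbose=True)
rs_log_reg.fit(X_train, y_train)

rs_log_reg.best_params_, rs_log_reg.score(X_test, y_test)
```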
Now we've tuned LogisticRegression(), let's do the same for RandomForestClassifier()...
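Same pattern, different grid (again, the ranges are illustrative):

```python
# Hyperparameter grid for RandomForestClassifier
rf_grid = {"n_estimators": np.arange(10, 1000, 50),
           "max_depth": [None, 3, 5, 10],
           "min_samples_split": np.arange(2, 20, 2),
           "min_samples_leaf": np.arange(1, 20, 2)}

np.random.seed(42)
rs_rf = RandomizedSearchCV(RandomForestClassifier(),
                           param_distributions=rf_grid,
                           cv=5,
                           n_iter=20,
                           verbose=True)
rs_rf.fit(X_train, y_train)

rs_rf.best_params_, rs_rf.score(X_test, y_test)
```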
Hyperparameter Tuning with GridSearchCV
Since our LogisticRegression model provides the best scores so far, we'll try to improve them again using GridSearchCV...
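GridSearchCV tries every combination in the grid, so the grid needs to stay small (a sketch; 30 values of C is an arbitrary but manageable choice):

```python
# Exhaustive grid for LogisticRegression
log_reg_grid = {"C": np.logspace(-4, 4, 30),
                "solver": ["liblinear"]}

gs_log_reg = GridSearchCV(LogisticRegression(),
                          param_grid=log_reg_grid,
                          cv=5,
                          verbose=True)
gs_log_reg.fit(X_train, y_train)

gs_log_reg.best_params_, gs_log_reg.score(X_test, y_test)
```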
Evaluating our tuned machine learning classifier, beyond accuracy
ROC curve and AUC score
Confusion matrix
Classification report
Precision
Recall
F1-score
... and it would be great if cross-validation was used where possible.
To make comparisons and evaluate our trained model, first we need to make predictions.
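A sketch of the predictions plus the ROC curve and confusion matrix, assuming gs_log_reg from the GridSearchCV step (the ROC curve needs predicted probabilities for the positive class, not hard predictions):

```python
# Hard class predictions for the confusion matrix and classification report
y_preds = gs_log_reg.predict(X_test)

# Probabilities of the positive class for the ROC curve and AUC
y_probs = gs_log_reg.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, y_probs)

plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_test, y_probs):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--", label="Guessing")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.title("ROC Curve")
plt.legend()

# Rows are true labels, columns are predicted labels
print(confusion_matrix(y_test, y_preds))
```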
Now we've got a ROC curve, an AUC metric and a confusion matrix, let's get a classification report as well as cross-validated precision, recall and f1-score.
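The classification report is a one-liner on top of the predictions above:

```python
print(classification_report(y_test, y_preds))
```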
Calculate evaluation metrics using cross-validation
We're going to calculate accuracy, precision, recall and F1 score of our model using cross-validation, and to do so we'll be using cross_val_score().
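A sketch, reusing the best hyperparameters found by GridSearchCV and averaging each metric over 5 folds:

```python
# Fresh classifier with the tuned hyperparameters
clf = LogisticRegression(**gs_log_reg.best_params_)

cv_acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
cv_precision = cross_val_score(clf, X, y, cv=5, scoring="precision").mean()
cv_recall = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
cv_f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()

cv_metrics = pd.DataFrame({"Accuracy": cv_acc,
                           "Precision": cv_precision,
                           "Recall": cv_recall,
                           "F1": cv_f1},
                          index=[0])
cv_metrics.T.plot.bar(title="Cross-validated metrics", legend=False);
```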
Feature Importance
Feature importance is another way of asking, "which features contributed most to the outcomes of the model and how did they contribute?"
Finding feature importance is different for each machine learning model. One way to find feature importance is to search for "(MODEL NAME) feature importance".
Let's find the feature importance for our LogisticRegression model...
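For LogisticRegression, the coefficients in coef_ act as a signed measure of feature importance: one value per column of X, where the sign tells you which class the feature pushes towards (a sketch, assuming the tuned hyperparameters from above):

```python
clf = LogisticRegression(**gs_log_reg.best_params_)
clf.fit(X_train, y_train)

# Match each coefficient to its feature name
feature_dict = dict(zip(X.columns, clf.coef_[0]))
feature_df = pd.DataFrame(feature_dict, index=[0])
feature_df.T.plot.bar(title="Feature Importance", legend=False);
```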
slope - the slope of the peak exercise ST segment
0: Upsloping: better heart rate with exercise (uncommon)
1: Flatsloping: minimal change (typical healthy heart)
2: Downsloping: signs of an unhealthy heart
6. Experimentation
If you haven't hit your evaluation metric yet... ask yourself...
Could you collect more data?
Could you try a better model? Like CatBoost or XGBoost?
Could you improve the current models? (beyond what we've done so far)
If your model is good enough (you have hit your evaluation metric) how would you export it and share it with others?
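One common option is joblib, which ships alongside Scikit-Learn (the filename below is an assumption):

```python
from joblib import dump, load

# Save the tuned model to file
dump(gs_log_reg, "gs_logistic_regression_model.joblib")

# Later (or on another machine), load it back in and use it
loaded_model = load("gs_logistic_regression_model.joblib")
loaded_model.score(X_test, y_test)
```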