🚜 Predicting the Sale Price of Bulldozers using Machine Learning
In this notebook, we're going to work through an example machine learning project: using the characteristics of bulldozers and their past sale prices to predict the sale price of future bulldozers.
Inputs: Bulldozer characteristics such as make year, base model, model series, state of sale (e.g. which US state was it sold in), drive system and more.
Outputs: Bulldozer sale price (in USD).
Since we're trying to predict a number, this kind of problem is known as a regression problem.
And since we're going to be predicting results with a time component (predicting future sales based on past sales), this is also known as a time series or forecasting problem.
The data and evaluation metric we'll be using (root mean squared log error, or RMSLE) are from the Kaggle Bluebook for Bulldozers competition.
The techniques used here are inspired by and adapted from the fast.ai machine learning course.
Overview
Since we already have a dataset, we'll approach the problem with the following machine learning modelling framework.
6 Step Machine Learning Modelling Framework (read more)
To work through these topics, we'll use pandas, Matplotlib and NumPy for data analysis, as well as Scikit-Learn for machine learning and modelling tasks.
Tools that can be used for each step of the machine learning modelling process.
We'll work through each step and by the end of the notebook, we'll have a trained machine learning model which predicts the sale price of a bulldozer given different characteristics about it.
6 Step Machine Learning Framework
1. Problem Definition
For this dataset, the problem we're trying to solve, or better, the question we're trying to answer is,
How well can we predict the future sale price of a bulldozer, given its characteristics and previous examples of how much similar bulldozers have been sold for?
2. Data
Looking at the dataset from Kaggle, you can see it's a time series problem. This means there's a time attribute to the dataset.
In this case, it's historical sales data of bulldozers, including things like model type, size, sale date and more.
There are 3 datasets:
Train.csv - Historical bulldozer sales examples up to 2011 (close to 400,000 examples with 50+ different attributes, including SalePrice, the target variable).
Valid.csv - Historical bulldozer sales examples from January 1 2012 to April 30 2012 (close to 12,000 examples with the same attributes as Train.csv).
Test.csv - Historical bulldozer sales examples from May 1 2012 to November 2012 (close to 12,000 examples but missing the SalePrice attribute, as this is what we'll be trying to predict).
Note: You can download the bluebook-for-bulldozers dataset directly from Kaggle. Alternatively, you can also download it from the course GitHub.
3. Evaluation
For this problem, Kaggle has set the evaluation metric to be root mean squared log error (RMSLE). As with many regression evaluations, the goal will be to get this value as low as possible.
To see how well our model is doing, we'll calculate the RMSLE and then compare our results to others on the Kaggle leaderboard.
4. Features
Features are different parts of the data. During this step, you'll want to start finding out what you can about the data.
One of the most common ways to do this is to create a data dictionary.
For this dataset, Kaggle provides a data dictionary which contains information about what each attribute of the dataset means.
For example:
Variable Name | Description | Variable Type |
---|---|---|
SalesID | unique identifier of a particular sale of a machine at auction | Independent variable |
MachineID | identifier for a particular machine; machines may have multiple sales | Independent variable |
ModelID | identifier for a unique machine model (i.e. fiModelDesc) | Independent variable |
datasource | source of the sale record; some sources are more diligent about reporting attributes of the machine than others. Note that a particular datasource may report on multiple auctioneerIDs. | Independent variable |
auctioneerID | identifier of a particular auctioneer, i.e. company that sold the machine at auction. Not the same as datasource. | Independent variable |
YearMade | year of manufacture of the machine | Independent variable |
MachineHoursCurrentMeter | current usage of the machine in hours at time of sale (saledate); null or 0 means no hours have been reported for that sale | Independent variable |
UsageBand | value (low, medium, high) calculated comparing this particular Machine-Sale hours to average usage for the fiBaseModel; e.g. 'Low' means this machine has fewer hours given its lifespan relative to the average of fiBaseModel. | Independent variable |
Saledate | time of sale | Independent variable |
fiModelDesc | Description of a unique machine model (see ModelID); concatenation of fiBaseModel & fiSecondaryDesc & fiModelSeries & fiModelDescriptor | Independent variable |
State | US State in which sale occurred | Independent variable |
Drive_System | machine configuration; typically describes whether 2 or 4 wheel drive | Independent variable |
Enclosure | machine configuration - does the machine have an enclosed cab or not | Independent variable |
Forks | machine configuration - attachment used for lifting | Independent variable |
Pad_Type | machine configuration - type of treads a crawler machine uses | Independent variable |
Ride_Control | machine configuration - optional feature on loaders to make the ride smoother | Independent variable |
Transmission | machine configuration - describes type of transmission; typically automatic or manual | Independent variable |
... | ... | ... |
SalePrice | cost of sale in USD | Target/dependent variable |
You can download the full version of this file directly from the Kaggle competition page (account required) or view it on Google Sheets.
With all of this being known, let's get started!
First, we'll import the dataset and start exploring. Since we know the evaluation metric we're trying to minimise, our first goal will be building a baseline model and seeing how it stacks up against the competition.
1. Importing the data and preparing it for modelling
First things first, let's import the libraries we need and the data we'll need for the project.
We'll start by importing pandas, NumPy and matplotlib.
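A minimal sketch of the imports (using the standard aliases):

```python
# Standard data analysis and plotting tools
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```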
Now we've got our tools for data analysis ready, we can import the data and start to explore it.
For this project, I've downloaded the data from Kaggle and stored it on the course GitHub under the file path ../data/bluebook-for-bulldozers.
We can write some code to check if the files are available locally (on our computer) and if not, we can download them.
Note: If you're running this notebook on Google Colab, the code below will enable you to download the dataset programmatically. Just beware that each time Google Colab shuts down, the data will have to be redownloaded. There's also an example Google Colab notebook showing how to download the data programmatically.
Dataset downloaded!
Let's check what files are available.
You can explore each of these files individually or read about them on the Kaggle Competition page.
For now, the main file we're interested in is TrainAndValid.csv, which is a combination of Train.csv and Valid.csv (the training and validation datasets).
The training data (Train.csv) contains sale data from 1989 up to the end of 2011.
The validation data (Valid.csv) contains sale data from January 1, 2012 - April 30, 2012.
The test data (Test.csv) contains sale data from May 1, 2012 - November 2012.
We'll use the training data to train our model to predict the sale price of bulldozers, we'll then validate its performance on the validation data to see if our model can be improved in any way. Finally, we'll evaluate our best model on the test dataset.
But more on this later on.
Let's import the TrainAndValid.csv file and turn it into a pandas DataFrame.
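The import might look something like this (the file path matches the one shown in the warning below):

```python
# Import the combined training and validation data
df = pd.read_csv("../data/bluebook-for-bulldozers/TrainAndValid.csv")
df.head()
```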
Wonderful! We've got our DataFrame ready to explore.
You might see a warning appear in the form:
DtypeWarning: Columns (13,39,40,41) have mixed types. Specify dtype option on import or set low_memory=False. df = pd.read_csv("../data/bluebook-for-bulldozers/TrainAndValid.csv")
This is just saying that some of our columns have multiple/mixed data types. For example, a column may contain strings but also contain integers. This is okay for now and can be addressed later on if necessary.
How about we get some information about our DataFrame?
Woah! Over 400,000 entries!
That's a much larger dataset than what we've worked with before.
One thing you might have noticed is that the saledate column is being treated as a Python object (it's okay if you didn't notice, these things take practice). When the Dtype is object, the column is typically storing strings.
However, when we look at it...
We can see that these object values are in the form of dates. Since we're working on a time series problem (a machine learning problem with a time component), it's probably worth turning these strings into Python datetime objects.
Before we do, let's try to visualize our saledate column against our SalePrice column. To do so, we can create a scatter plot. And to prevent our plot from being too big, how about we visualize the first 1,000 values?
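A sketch of the plot (assuming df is the DataFrame we just imported):

```python
# Plot SalePrice against saledate for the first 1,000 samples
fig, ax = plt.subplots()
ax.scatter(df["saledate"][:1000], df["SalePrice"][:1000])
ax.set_xlabel("saledate")
ax.set_ylabel("SalePrice");
```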
Hmm... looks like the x-axis is quite crowded.
Maybe we can fix this by turning the saledate column into datetime format.
The good news is that it looks like our SalePrice column is already in float64 format, so we can view its distribution directly from the DataFrame using a histogram plot.
1.1 Parsing dates
When working with time series data, it's a good idea to make sure any date data is in the format of a datetime object (a Python data type which encodes specific information about dates).
We can tell pandas which columns to read in as dates by setting the parse_dates parameter in pd.read_csv. Once we've imported our CSV with the saledate column parsed, we can view information about our DataFrame again with df.info().
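A sketch of the re-import with dates parsed (low_memory=False silences the mixed-dtype warning we saw earlier):

```python
# Re-import the data, telling pandas to parse the saledate column as datetime
df = pd.read_csv("../data/bluebook-for-bulldozers/TrainAndValid.csv",
                 low_memory=False,
                 parse_dates=["saledate"])
df.info()
```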
Nice!
Looks like our saledate column is now of type datetime64[ns], a NumPy-specific datetime format with high precision. Since pandas works well with NumPy, we can keep it in this format.
How about we view a few samples from our saledate column again?
Beautiful! That's looking much better already.
We'll see how having our dates in this format is really helpful later on.
For now, how about we visualize our saledate column against our SalePrice column again?
1.2 Sorting our DataFrame by saledate
Now that we've formatted our saledate column as NumPy datetime64[ns] objects, we can use built-in pandas methods such as sort_values to sort our DataFrame by date.
And considering this is a time series problem, sorting our DataFrame by date has the added benefit of making sure our data is sequential.
In other words, we want to use examples from the past (example sale prices from previous dates) to try and predict future bulldozer sale prices.
Let's use the pandas.DataFrame.sort_values method to sort our DataFrame by saledate in ascending order.
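A minimal example (sorting the DataFrame in place):

```python
# Sort the DataFrame by saledate (oldest sales first)
df.sort_values(by=["saledate"], inplace=True, ascending=True)
df.saledate.head()
```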
Nice!
Looks like our older samples are now coming first and the newer samples are towards the end of the DataFrame.
1.3 Adding extra features to our DataFrame
One way to potentially increase the predictive power of our data is to enhance it with more features.
This practice is known as feature engineering, taking existing features and using them to create more/different features.
There is no set in stone way to do feature engineering and often it takes quite a bit of practice/exploration/experimentation to figure out what might work and what won't.
For now, we'll use our saledate column to add extra features such as:
Year of sale
Month of sale
Day of sale
Day of week sale (e.g. Monday = 1, Tuesday = 2)
Day of year sale (e.g. January 1st = 1, January 2nd = 2)
Since we're going to be manipulating the data, we'll make a copy of the original DataFrame and perform our changes there.
This will keep the original DataFrame intact if we need it again.
Because we imported the data using read_csv() and asked pandas to parse the dates using parse_dates=["saledate"], we can now access the different datetime attributes of the saledate column.
Let's use these attributes to add a series of different feature columns to our dataset.
After we've added these extra columns, we can remove the original saledate column, as its information will be dispersed across the new columns.
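Here's a sketch of what this could look like (the specific new column names, such as saleYear and saleDayofweek, are my own choice):

```python
# Make a copy of the original DataFrame so the original stays intact
df_tmp = df.copy()

# Add datetime features based on saledate
df_tmp["saleYear"] = df_tmp.saledate.dt.year
df_tmp["saleMonth"] = df_tmp.saledate.dt.month
df_tmp["saleDay"] = df_tmp.saledate.dt.day
df_tmp["saleDayofweek"] = df_tmp.saledate.dt.dayofweek
df_tmp["saleDayofyear"] = df_tmp.saledate.dt.dayofyear

# Remove the original saledate column (its information now lives in the new columns)
df_tmp.drop("saledate", axis=1, inplace=True)
```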
We could add more columns in this style, such as whether the sale was at the start or end of a quarter (a sale at the end of a quarter may be influenced by things such as quarterly budgets), but these will do for now.
Challenge: See what other datetime attributes you can add to df_tmp using a similar technique to what we've used above. Hint: check the bottom of the pandas.DatetimeIndex docs.
How about we view some of our newly created columns?
Cool!
Now we've broken our saledate column into separate columns/features, we can perform further exploratory analysis, such as visualizing the SalePrice against the saleMonth.
How about we view the first 10,000 samples (we could also randomly select 10,000 samples) to see if it reveals anything about which month has the highest sales?
Hmm... doesn't look like there's too much conclusive evidence here about which month has the highest sales value.
How about we plot the median sale price of each month?
We can do so by grouping on the saleMonth column with pandas.DataFrame.groupby and then getting the median of the SalePrice column.
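A sketch of the grouping (assuming the saleMonth column created above):

```python
# Median SalePrice for each month of sale
df_tmp.groupby("saleMonth")["SalePrice"].median().plot(kind="bar");
```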
Ohhh it looks like the median sale prices of January and February (months 1 and 2) are quite a bit higher than the other months of the year.
Could this be because of New Year budget spending?
Perhaps... but this would take a bit more investigation.
In the meantime, there are many other values we could look further into.
1.4 Inspecting the values of other columns
When first exploring a new problem, it's often a good idea to become as familiar with the data as you can.
Of course, with a dataset that has over 400,000 samples, it's unlikely you'll ever get through every sample.
But that's where the power of data analysis and machine learning can help.
We can use pandas to aggregate thousands of samples into smaller, more manageable pieces.
And as we'll see later on, we can use machine learning models to model the data and then later inspect which features the model thought were most important.
How about we see which states sell the most bulldozers?
Woah! Looks like Florida sells a fair few bulldozers.
How about we go even further and group our samples by state and then find the median SalePrice per state?
We can also compare this to the median SalePrice across all samples.
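One possible way to build this figure (the styling is a choice):

```python
# Median SalePrice per state, compared to the overall median
fig, ax = plt.subplots(figsize=(10, 9))
df_tmp.groupby("state")["SalePrice"].median().sort_values().plot(kind="barh", ax=ax)
ax.axvline(x=df_tmp["SalePrice"].median(), color="red", linestyle="--",
           label="Overall median SalePrice")
ax.set_xlabel("Median SalePrice (USD)")
ax.legend();
```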
Now that's a nice looking figure!
Interestingly, Florida has the most sales and its median sale price is above the overall median.
And if you had a bulldozer and were chasing the highest sale price, the data would reveal that perhaps selling in South Dakota would be your best bet.
Perhaps bulldozers are in higher demand in South Dakota because of a building or mining boom?
Answering this would require a bit more research.
But what we're doing here is slowly building up a mental model of our data.
So that if we saw an example in the future, we could compare its values to the ones we've already seen.
2. Model-driven exploration
We've performed a small exploratory data analysis (EDA) and enriched our dataset with some datetime attributes; now let's try to model it.
Why model so early?
Well, we know the evaluation metric (root mean squared log error or RMSLE) we're heading towards.
We could spend more time doing EDA, finding more out about the data ourselves but what we'll do instead is use a machine learning model to help us do EDA whilst simultaneously working towards the best evaluation metric we can get.
Remember, one of the biggest goals of starting any new machine learning project is reducing the time between experiments.
Following the Scikit-Learn machine learning map and taking into account the fact we've got over 100,000 examples, we find that a sklearn.linear_model.SGDRegressor() or a sklearn.ensemble.RandomForestRegressor() model might be a good candidate.
Since we've worked with the Random Forest algorithm before (on the heart disease classification problem), let's try it out on our regression problem.
Note: We're trying just one model here for now. But you can try many other kinds of models from the Scikit-Learn library; they mostly work with a similar API. There are even libraries such as LazyPredict which will try many models simultaneously and return a table with the results.
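The cell that produced the error below looks like this (reconstructed from the traceback that follows):

```python
# This won't work since we've got missing values and non-numeric columns
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_jobs=-1)
model.fit(X=df_tmp.drop("SalePrice", axis=1),  # use all columns except SalePrice as X input
          y=df_tmp.SalePrice)                  # use SalePrice column as y input
```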
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/var/folders/c4/qj4gdk190td18bqvjjh0p3p00000gn/T/ipykernel_21543/2824176890.py in ?()
1 # This won't work since we've got missing numbers and categories
2 from sklearn.ensemble import RandomForestRegressor
3
4 model = RandomForestRegressor(n_jobs=-1)
----> 5 model.fit(X=df_tmp.drop("SalePrice", axis=1), # use all columns except SalePrice as X input
6 y=df_tmp.SalePrice) # use SalePrice column as y input
~/miniforge3/envs/ai/lib/python3.11/site-packages/sklearn/base.py in ?(estimator, *args, **kwargs)
1469 skip_parameter_validation=(
1470 prefer_skip_nested_validation or global_skip_validation
1471 )
1472 ):
-> 1473 return fit_method(estimator, *args, **kwargs)
~/miniforge3/envs/ai/lib/python3.11/site-packages/sklearn/ensemble/_forest.py in ?(self, X, y, sample_weight)
359 # Validate or convert input data
360 if issparse(y):
361 raise ValueError("sparse multilabel-indicator for y is not supported.")
362
--> 363 X, y = self._validate_data(
364 X,
365 y,
366 multi_output=True,
~/miniforge3/envs/ai/lib/python3.11/site-packages/sklearn/base.py in ?(self, X, y, reset, validate_separately, cast_to_ndarray, **check_params)
646 if "estimator" not in check_y_params:
647 check_y_params = {**default_check_params, **check_y_params}
648 y = check_array(y, input_name="y", **check_y_params)
649 else:
--> 650 X, y = check_X_y(X, y, **check_params)
651 out = X, y
652
653 if not no_val_X and check_params.get("ensure_2d", True):
~/miniforge3/envs/ai/lib/python3.11/site-packages/sklearn/utils/validation.py in ?(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_writeable, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, estimator)
1297 raise ValueError(
1298 f"{estimator_name} requires y to be passed, but the target y is None"
1299 )
1300
-> 1301 X = check_array(
1302 X,
1303 accept_sparse=accept_sparse,
1304 accept_large_sparse=accept_large_sparse,
~/miniforge3/envs/ai/lib/python3.11/site-packages/sklearn/utils/validation.py in ?(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_writeable, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator, input_name)
1009 )
1010 array = xp.astype(array, dtype, copy=False)
1011 else:
1012 array = _asarray_with_order(array, order=order, dtype=dtype, xp=xp)
-> 1013 except ComplexWarning as complex_warning:
1014 raise ValueError(
1015 "Complex data not supported\n{}\n".format(array)
1016 ) from complex_warning
~/miniforge3/envs/ai/lib/python3.11/site-packages/sklearn/utils/_array_api.py in ?(array, dtype, order, copy, xp, device)
747 # Use NumPy API to support order
748 if copy is True:
749 array = numpy.array(array, order=order, dtype=dtype)
750 else:
--> 751 array = numpy.asarray(array, order=order, dtype=dtype)
752
753 # At this point array is a NumPy ndarray. We convert it to an array
754 # container that is consistent with the input's namespace.
~/miniforge3/envs/ai/lib/python3.11/site-packages/pandas/core/generic.py in ?(self, dtype, copy)
2149 def __array__(
2150 self, dtype: npt.DTypeLike | None = None, copy: bool_t | None = None
2151 ) -> np.ndarray:
2152 values = self._values
-> 2153 arr = np.asarray(values, dtype=dtype)
2154 if (
2155 astype_is_view(values.dtype, arr.dtype)
2156 and using_copy_on_write()
ValueError: could not convert string to float: 'Low'
Oh no!
When we try to fit our model to the data, we get a value error similar to:
ValueError: could not convert string to float: 'Low'
The problem here is that some of the features of our data are in string format and machine learning models love numbers.
Not to mention some of our samples have missing values.
And typically, machine learning models require all data to be in numerical format as well as all missing values to be filled.
Let's start to fix this by inspecting the different datatypes in our DataFrame.
We can do so using the pandas.DataFrame.info() method, which will give us the different datatypes as well as how many non-null values (a null value is generally a missing value) are in our df_tmp DataFrame.
Note: There are some ML models, such as sklearn.ensemble.HistGradientBoostingRegressor, CatBoost and XGBoost, which can handle missing values. However, I'll leave exploring each of these as extra-curriculum/extensions.
Ok, it seems as though we've got a fair few different datatypes.
There are int64 types such as MachineID.
There are float64 types such as SalePrice.
And there are object types (the object dtype can hold any Python object, including strings) such as UsageBand.
Resource: You can see a list of all the pandas dtypes in the pandas user guide.
How about we find out how many missing values are in each column?
We can do so using pandas.DataFrame.isna() (isna stands for 'is NA', as in null or NaN), which will return a boolean True/False for each value (True if missing, False if not).
Let's start by checking the missing values in the head of our DataFrame.
Alright, it seems as though we've got some missing values in the MachineHoursCurrentMeter and UsageBand columns, as well as a few others.
But so far we've only viewed the first few rows.
It'll be very time consuming to go through each row one by one, so how about we get the total missing values per column?
We can do so by calling .isna() on the whole DataFrame and then chaining it together with .sum().
Doing so will give us the total of the True/False values in a given column (when summing, True = 1, False = 0).
Woah! It looks like our DataFrame has quite a few missing values.
Not to worry, we can work on fixing this later on.
How about we start by trying to turn all of our data into numbers?
Inspecting the datatypes in our DataFrame
One way to help turn all of our data into numbers is to convert the columns with the object datatype into a category datatype using pandas.CategoricalDtype.
Note: There are many different ways to convert values into numbers. And often the best way will be specific to the value you're trying to convert. The method we're going to use, converting all objects (that are mostly strings) to categories is one of the faster methods as it makes a quick assumption that each unique value is its own number.
We can check the datatype of an individual column using the .dtype attribute, and we can get its full name using .dtype.name.
Beautiful!
Now we've got a way to check a column's datatype individually.
There's also another group of methods to check a column's datatype directly.
For example, using pd.api.types.is_object_dtype(arr_or_dtype) we can get a boolean response as to whether the input is an object or not.
Note: There are many more of these checks you can perform for other datatypes, such as strings, under a similar namespace, pd.api.types.is_XYZ_dtype. See the pandas documentation for more.
Let's see how it works on our df_tmp["UsageBand"] column.
We can also check whether a column is a string with pd.api.types.is_string_dtype(arr_or_dtype).
Nice!
We can even loop through the items (column labels and their values) in our DataFrame using pandas.DataFrame.items() (in Python dictionary terms, calling .items() on a DataFrame treats the column names as the keys and the column values as the values) and print out samples of the columns which have the string datatype.
As an extra check, passing a sample to pd.api.types.infer_dtype() will return the datatype of the sample.
This will be a good way to keep exploring our data.
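A sketch of such a loop (the sample size and random_state are arbitrary):

```python
# Print a small random sample from every column pandas considers a string dtype
for label, content in df_tmp.items():
    if pd.api.types.is_string_dtype(content):
        sample = content.sample(5, random_state=42)
        print(f"{label}: inferred dtype = {pd.api.types.infer_dtype(sample)}")
        print(sample.values, "\n")
```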
Hmm... it seems there are many more columns in df_tmp with the object type that didn't display when checking for the string datatype (we know there are many object datatype columns in our DataFrame from using df_tmp.info()).
How about we try the same as above, except this time instead of pd.api.types.is_string_dtype, we use pd.api.types.is_object_dtype?
Let's try it.
Wonderful, looks like we've got sample outputs from all of the columns with the object datatype.
It also looks like many of the random samples are missing values.
Converting strings to categories
In pandas, one way to convert object/string values to numerical values is to convert them to categories, or more specifically, the pd.CategoricalDtype datatype.
This datatype keeps the underlying data the same (e.g. doesn't change the string) but enables easy conversion to a numeric code using .cat.codes.
For example, the column state might have the values 'Alabama', 'Alaska', 'Arizona'... and these could be mapped to the numeric values 1, 2, 3... respectively.
To see this in action, let's first convert the object datatype columns to the "category" datatype.
We can do so by looping through the .items() of our DataFrame and reassigning each object datatype column using pandas.Series.astype(dtype="category").
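A minimal sketch of the conversion:

```python
# Turn every object datatype column into a category datatype column
for label, content in df_tmp.items():
    if pd.api.types.is_object_dtype(content):
        df_tmp[label] = content.astype("category")
```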
Wonderful!
Now let's check if it worked by calling .info() on our DataFrame.
It looks like it worked!
All of the object datatype columns now have the category datatype.
We can inspect this on a single column using pandas.Series.dtype.
Excellent, notice how the column is now of type pd.CategoricalDtype.
We can also access these categories using pandas.Series.cat.categories.
Finally, we can get the category codes (the numeric values representing each category) using pandas.Series.cat.codes.
This gives us a numeric representation of our object/string datatype columns.
All of our object columns are now categorical, so we can turn the categories into numbers. However, our data is still missing values...
Saving our preprocessed data (part 1)
We've updated our dataset to turn object datatypes into categories.
However, it still contains missing values.
Before we get to those, how about we save our current DataFrame to file so we could import it again later if necessary.
Saving and updating your dataset as you go is common practice in machine learning problems. As your problem changes and evolves, the dataset you're working with will likely change too.
Making checkpoints of your dataset is similar to making checkpoints of your code.
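A sketch of the checkpoint (the filename is illustrative):

```python
# Save the current (partially preprocessed) DataFrame to CSV as a checkpoint
df_tmp.to_csv("../data/bluebook-for-bulldozers/train_tmp.csv", index=False)
```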
Now we've saved our preprocessed data to file, we can re-import it and make sure it's in the same format.
Excellent, looking at the tail end (the far right side), our processed DataFrame has the columns we added to it (the extra date features) but it still has missing values.
But if we check df_tmp.info()...
We notice that all of the category datatype columns are back to the object datatype.
This is strange, since we already converted the object datatype columns to category.
Well then why did they change back?
This happens because of a limitation of the CSV (.csv) file format: it doesn't preserve data types, rather it stores all the values as strings.
So when we read in a CSV, pandas defaults to interpreting strings as the object datatype.
Not to worry though, we can easily convert them to the category datatype as we did before.
Note: If you'd like to retain the datatypes when saving your data, you can use file formats such as parquet (Apache Parquet) and feather. These filetypes have several advantages over CSV in terms of processing speed and storage size. However, data stored in these formats is not human-readable, so you won't be able to open the files and inspect them without specific tools. For more on different file formats in pandas, see the IO tools documentation page.
Now, if we want to preserve the datatypes of our data, we can save to the parquet or feather format.
Let's try using the parquet format.
To do so, we can use the pandas.DataFrame.to_parquet() method.
Files in the parquet format typically have the file extension .parquet.
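A sketch of the export (the filename is illustrative, and this requires a parquet engine such as pyarrow to be installed):

```python
# Export the DataFrame to the parquet format (preserves datatypes)
df_tmp.to_parquet("../data/bluebook-for-bulldozers/train_tmp.parquet")
```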
Wonderful! Now let's try importing our DataFrame from the parquet format and check it using df_tmp.info().
Nice! Looks like using the parquet format preserved all of our datatypes.
For more on the parquet and feather formats, be sure to check out the pandas IO (input/output) documentation.
Finding and filling missing values
Let's remind ourselves of the missing values by getting the top 20 columns with the most missing values.
Ok, it seems like there are a fair few columns with missing values and there are several datatypes across these columns (numerical, categorical).
How about we break the problem down and work on filling each datatype separately?
Filling missing numerical values
There's no set way to fill missing values in your dataset.
And unless you're filling the missing samples with newly discovered actual data, every way you fill your dataset's missing values will introduce some sort of noise or bias.
We'll start by filling the missing numerical values in our dataset.
To do this, we'll first find the numeric datatype columns.
We can do so by looping through the columns in our DataFrame and calling pd.api.types.is_numeric_dtype(arr_or_dtype) on them.
Beautiful! Looks like we've got a mixture of int64 and float64 numerical datatypes.
Now how about we find out which numeric columns are missing values?
We can do so by using pandas.isnull(obj).sum() to detect and sum the missing values in a given array-like object (in our case, the data in a target column).
Let's loop through our DataFrame columns, find the numeric datatypes and check if they have any missing values.
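A sketch of the check:

```python
# Check which numeric columns have missing values
for label, content in df_tmp.items():
    if pd.api.types.is_numeric_dtype(content):
        if pd.isnull(content).sum():
            print(f"{label} has {pd.isnull(content).sum()} missing values")
```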
Okay, it looks like our auctioneerID and MachineHoursCurrentMeter columns have missing numeric values.
As previously discussed, there are many ways to fill missing values.
For missing numeric values, some potential options are:
Method | Pros | Cons |
---|---|---|
Fill with mean of column | - Easy to calculate/implement - Retains overall data distribution | - Averages out variation - Affected by outliers (e.g. if one value is much higher/lower than others) |
Fill with median of column | - Easy to calculate/implement - Robust to outliers - Preserves center of data | - Ignores data distribution shape |
Fill with mode of column | - Easy to calculate/implement - More useful for categorical-like data | - May not make sense for continuous/numerical data |
Fill with 0 (or another constant) | - Simple to implement - Useful in certain contexts like counts | - Introduces bias (e.g. if 0 was a value that meant something) - Skews data (e.g. if many missing values, replacing all with 0 makes it look like that's the most common value) |
Forward/Backward fill (use previous/future values to fill future/previous values) | - Maintains temporal continuity (for time series) | - Assumes data is continuous, which may not be valid |
Use a calculation from other columns | - Takes existing information and reinterprets it | - Can result in unlikely outputs if calculations are not continuous |
Interpolate (e.g. like dragging a cell in Excel/Google Sheets) | - Captures trends - Suitable for ordered data | - Can introduce errors - May assume linearity (data continues in a straight line) |
Drop missing values | - Ensures complete data (only use samples with all information) - Useful for small datasets | - Can result in data loss (e.g. if many missing values are scattered across columns, data size can be dramatically reduced) - Reduces dataset size |
Which method you choose will be dataset and problem dependent and will likely require several phases of experimentation to see what works and what doesn't.
For now, we'll fill our missing numeric values with the median value of the target column.
We'll also add a binary column (0 or 1) with rows reflecting whether or not a value was missing.
For example, MachineHoursCurrentMeter_is_missing will be a column whose rows have a value of 0 if that row's MachineHoursCurrentMeter value was not missing and 1 if it was.
Why add a binary column indicating whether the data was missing or not?
We can easily fill all of the missing numeric values in our dataset with the median.
However, a numeric value may be missing for a reason.
Adding a binary column which indicates whether the value was missing or not helps to retain this information. It also means we can inspect these rows later on.
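One way to implement this (a sketch; the _is_missing suffix follows the naming described above):

```python
# Fill missing numeric values with the column median and flag which rows were missing
for label, content in df_tmp.items():
    if pd.api.types.is_numeric_dtype(content):
        if pd.isnull(content).sum():
            # Add a binary column which tells us if the value was missing
            df_tmp[label + "_is_missing"] = pd.isnull(content).astype(int)
            # Fill missing numeric values with the median of that column
            df_tmp[label] = content.fillna(content.median())
```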
Missing numeric values filled!
How about we check again whether or not the numeric columns have missing values?
Woohoo! Numeric missing values filled!
And thanks to our binary _is_missing columns, we can even check how many were missing.
Filling missing categorical values
Now we've filled the numeric values, we'll do the same with the categorical values whilst ensuring that they are all numerical too.
Let's first investigate the columns which aren't numeric (we've already worked with these).
Okay, we've got plenty of category type columns.
Let's now write some code to fill the missing categorical values as well as ensure they are numerical (non-string).
To do so, we'll:
1. Create a blank column-to-category dictionary. We'll use this to store categorical value names (e.g. their string name) as well as their categorical code. We'll end up with a dictionary of dictionaries in the form {"column_name": {category_code: "category_value", ...}, ...}.
2. Loop through the items in the DataFrame.
3. Check if the column is numeric or not.
4. Add a binary column in the form ORIGINAL_COLUMN_NAME_is_missing with a 0 or 1 value for whether the row had a missing value.
5. Ensure the column values are the pd.Categorical datatype and get their category codes with pd.Series.cat.codes (we'll add 1 to these values since pandas defaults to assigning -1 to NaN values, and we'll use 0 instead).
6. Turn the column category codes and column categories from step 5 into a dictionary with Python's dict(zip(category_codes, category_names)) and save this to the blank dictionary from step 1 with the target column name as the key.
7. Set the target column values to the numerical category codes from step 5.
Phew!
That's a fair few steps but nothing we can't handle.
Let's do it!
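Here's a sketch that follows the numbered steps above (column_to_category_dict comes from the text; the exact implementation details are illustrative):

```python
# Turn categorical columns into numbers, flag missing values and save the code -> category mapping
column_to_category_dict = {}

for label, content in df_tmp.items():
    if not pd.api.types.is_numeric_dtype(content):
        # 4. Add a binary column flagging which rows were missing
        df_tmp[label + "_is_missing"] = pd.isnull(content).astype(int)
        # 5. Ensure the column is categorical and get the category codes (+1 so missing becomes 0)
        categorical = pd.Categorical(content)
        codes = categorical.codes + 1
        # 6. Save a mapping of category code -> category value for this column
        column_to_category_dict[label] = dict(zip(range(1, len(categorical.categories) + 1),
                                                  categorical.categories))
        # 7. Replace the column with its numerical category codes
        df_tmp[label] = codes
```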
Ho ho! No errors!
Let's check out a few random samples of our DataFrame.
Beautiful! Looks like our data is all in numerical form.
How about we investigate an item from our column_to_category_dict? This will show the mapping from numerical value to category value (most likely a string).
Note: Categorical values do not necessarily have order. They are strictly a mapping from number to value. In this case, our categorical values are mapped in numerical order. If you feel that the order of a value may influence a model in a negative way (e.g. 1 -> High is lower than 3 -> Medium but should be higher), you may want to look into ordering the values in a particular way or using a different numerical encoding technique such as one-hot encoding.
And we can do the same for the state column values.
Beautiful!
How about we check to see all of the missing values have been filled?
Saving our preprocessed data (part 2)
One more step before we train a new model!
Let's save our work so far so we can re-import our preprocessed dataset if we want to.
We'll save it to the parquet format again, this time with a suffix to show we've filled the missing values.
And to make sure it worked, we can re-import it.
Does it have any missing values?
Checkpoint reached!
We've turned all of our data into numbers as well as filled the missing values, time to try fitting a model to it again.
Fitting a machine learning model to our preprocessed data
Now all of our data is numeric and there are no missing values, we should be able to fit a machine learning model to it!
Let's reinstantiate our trusty sklearn.ensemble.RandomForestRegressor() model.
Since our dataset has a substantial number of rows (~400k+), let's first make sure the model will work on a smaller sample of 1,000 or so.
Note: It's common practice on machine learning problems to see if your experiments will work on smaller scale problems (e.g. smaller amounts of data) before scaling them up to the full dataset. This practice enables you to try many different kinds of experiments with faster runtimes. The benefit of this is that you can figure out what doesn't work before spending more time on what does.
Our X values (features) will be every column except the "SalePrice" column.
And our y values (labels) will be the entirety of the "SalePrice" column.
We'll time how long our smaller experiment takes using the magic function %%time, placing it at the top of the notebook cell.
Note: You can find out more about the %%time magic command by typing %%time? (note the question mark on the end) in a notebook cell.
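A sketch of the smaller-scale experiment (the sample size and random_state are arbitrary):

```python
%%time
from sklearn.ensemble import RandomForestRegressor

# Take a small random sample so the experiment runs quickly
df_sample = df_tmp.sample(n=1000, random_state=42)

# Instantiate and fit a model on the sample
model = RandomForestRegressor(n_jobs=-1, random_state=42)
model.fit(X=df_sample.drop("SalePrice", axis=1),
          y=df_sample["SalePrice"])
```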
Woah! It looks like things worked!
And quite quick too (since we're only using a relatively small number of rows).
How about we score our model?
We can do so using the built-in score() method. By default, sklearn.ensemble.RandomForestRegressor uses the coefficient of determination (R², or R-squared) as the evaluation metric (higher is better, with a score of 1.0 being perfect).
Wow, it looks like our model got a pretty good score on only 1000 samples (the best possible score it could achieve would've been 1.0).
How about we try our model on the whole dataset?
Ok, that took a little bit longer than fitting on 1,000 samples (but that's to be expected, as many more calculations had to be made).
There's a reason we used n_jobs=-1 too. If we'd stuck with the default of n_jobs=None (the same as n_jobs=1), it would've taken much longer.
Configuration (MacBook Pro M1 Pro, 10 Cores) | CPU Times (User) | CPU Times (Sys) | CPU Times (Total) | Wall Time |
---|---|---|---|---|
n_jobs=-1 (all cores) | 9min 14s | 3.85s | 9min 18s | 1min 15s |
n_jobs=None (default) | 7min 14s | 1.75s | 7min 16s | 7min 25s |
And as we've discussed many times, one of the main goals when starting a machine learning project is to reduce your time between experiments.
How about we score the model trained on all of the data?
An even better score!
Oh wait...
Oh no...
I think we've got an error... (you might've noticed it already)
Why might this metric be unreliable?
Hint: Compare the data we trained on versus the data we evaluated on.
A big (but fixable) mistake
One of the hard things about bugs in machine learning projects is that they are often silent.
For example, our model seems to have fit the data with no issues and then evaluated with a good score.
So what's wrong?
It seems we've stumbled across one of the most common bugs in machine learning, and that's data leakage (information from the validation/test sets leaking into the training process).
We've evaluated our model on the same data it was trained on.
This isn't the model's fault either.
It's our fault.
Right back at the start we imported a file called TrainAndValid.csv, which contains both the training and validation data.
And while we preprocessed it to make sure there were no missing values and all the samples were numeric, we never split the data into separate training and validation splits.
The right workflow would've been to train the model on the training split and then evaluate it on the unseen validation split.
Our evaluation scores above are quite good, but they can't necessarily be trusted to be replicated on unseen data (data in the real world) because they've been obtained by evaluating the model on data it's already seen.
This would be the equivalent of a final exam at university containing all of the same questions as the practice exam without any changes.
Not to worry, we can fix this!
How?
We can import the training and validation datasets separately via Train.csv and Valid.csv respectively.
Or we could import TrainAndValid.csv and perform the appropriate splits according to the original Kaggle competition page (training data includes all samples prior to 2012 and validation data includes samples from January 1 2012 to April 30 2012).
In both methods, we'll have to perform the same preprocessing steps we've done so far.
Except because the validation data is supposed to remain as unseen data, we'll only use information from the training set to preprocess the validation set (and not mix the two).
We'll work on this in the next section.
The takeaway?
Always (if possible) create appropriate data splits at the start of a project.
Because it's one thing to train a machine learning model but if you can't evaluate it properly (on unseen data), how can you know how it'll perform (or may perform) in the real world on new and unseen data?
3. Splitting data into train/valid sets
The good news is, we get to practice preprocessing our data again, this time with separate training and validation splits. Last time we used pandas to ensure our data was all numeric and had no missing values. But using pandas in this way can be a bit of an issue with larger scale datasets or when new data is introduced. How about this time we use Scikit-Learn and make a reproducible pipeline for our data preprocessing needs?
Next steps:
Import the train/validation data separately.
Create a Scikit-Learn data preprocessing pipeline to fit to the training data (turn all data numeric + fill missing values).
Apply this preprocessing pipeline to the validation data (e.g. fit_transform on the train data -> only transform on the validation data).
Evaluate and improve on the validation data.
(We've already imported TrainAndValid.csv, filled its missing values and evaluated on it.)
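Here's a minimal sketch of what such a preprocessing pipeline could look like, assuming we import Train.csv and Valid.csv into df_train and df_val (these names and the imputation/encoding choices are my own, and any date feature engineering would happen before this step):

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder

# Separate the numeric and categorical feature names (everything except the target)
features = df_train.drop("SalePrice", axis=1)
numeric_features = features.select_dtypes(include=[np.number]).columns
categorical_features = features.select_dtypes(exclude=[np.number]).columns

# Numeric columns: fill missing values with the median
numeric_transformer = SimpleImputer(strategy="median")

# Categorical columns: fill missing values with a constant, then encode as integers
categorical_transformer = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
    ("encoder", OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1))])

preprocessor = ColumnTransformer(transformers=[
    ("num", numeric_transformer, numeric_features),
    ("cat", categorical_transformer, categorical_features)])

# Fit on the training data only, then apply the same transform to the validation data
X_train = preprocessor.fit_transform(df_train.drop("SalePrice", axis=1))
X_valid = preprocessor.transform(df_val.drop("SalePrice", axis=1))
```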
According to the Kaggle data page, the validation set and test set are split according to dates.
This makes sense since we're working on a time series problem.
E.g. using past events to try and predict future events.
Knowing this, randomly splitting our data into train and test sets using something like train_test_split() wouldn't work.
Instead, we split our data into training, validation and test sets using the date each sample occurred.
In our case:
Training = all samples up until 2011
Valid = all samples from January 1, 2012 - April 30, 2012
Test = all samples from May 1, 2012 - November 2012
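Alternatively, if we keep the combined data in df_tmp (with the saleYear column we created earlier), a date-based train/validation split could look like this sketch (variable names are illustrative):

```python
# Split into training (pre-2012) and validation (2012) sets based on sale year
df_val = df_tmp[df_tmp.saleYear == 2012]
df_train = df_tmp[df_tmp.saleYear != 2012]

# Split each set into features (X) and labels (y)
X_train, y_train = df_train.drop("SalePrice", axis=1), df_train["SalePrice"]
X_valid, y_valid = df_val.drop("SalePrice", axis=1), df_val["SalePrice"]

len(X_train), len(X_valid)
```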
For more on making good training, validation and test sets, check out the post How (and why) to create a good validation set by Rachel Thomas.
Building an evaluation function
According to Kaggle for the Bluebook for Bulldozers competition, the evaluation function they use is root mean squared log error (RMSLE).
RMSLE: generally, you don't care as much about being off by $10 as you would about being off by 10%; in other words, you care more about ratios than absolute differences. MAE (mean absolute error), on the other hand, focuses on exact differences.
It's important to understand the evaluation metric you're going for.
Since Scikit-Learn doesn't have a function built-in for RMSLE, we'll create our own.
We can do this by taking the square root of Scikit-Learn's mean_squared_log_error (MSLE). Note that MSLE is the mean squared error calculated on log-transformed values (the average squared difference between log(1 + predicted) and log(1 + actual)), not simply the log of the MSE.
We'll also calculate the MAE and R^2 for fun.
Note: Newer versions of Scikit-Learn include sklearn.metrics.root_mean_squared_log_error, which computes RMSLE directly, see: https://scikit-learn.org/1.5/modules/generated/sklearn.metrics.root_mean_squared_log_error.html#sklearn.metrics.root_mean_squared_log_error
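A sketch of the evaluation functions (assuming the X_train, y_train, X_valid, y_valid splits from above):

```python
import numpy as np
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score

def rmsle(y_true, y_preds):
    """Calculate root mean squared log error (RMSLE)."""
    return np.sqrt(mean_squared_log_error(y_true, y_preds))

def show_scores(model):
    """Evaluate a fitted model on the training and validation splits."""
    train_preds = model.predict(X_train)
    val_preds = model.predict(X_valid)
    return {"Training MAE": mean_absolute_error(y_train, train_preds),
            "Valid MAE": mean_absolute_error(y_valid, val_preds),
            "Training RMSLE": rmsle(y_train, train_preds),
            "Valid RMSLE": rmsle(y_valid, val_preds),
            "Training R^2": r2_score(y_train, train_preds),
            "Valid R^2": r2_score(y_valid, val_preds)}
```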
Testing our model on a subset (to tune the hyperparameters)
Retraining an entire model would take far too long if we want to keep experimenting as fast as we'd like.
So what we'll do is take a sample of the training set and tune the hyperparameters on that before training a larger model.
If your experiments are taking longer than 10 seconds or so (give or take how long you have to wait), you should be trying to speed things up. You can speed things up by sampling less data or using a faster computer.
Depending on your computer (mine is a MacBook Pro), making calculations on ~400,000 rows may take a while...
Let's alter the number of samples each estimator in the RandomForestRegressor sees, using the max_samples parameter.
Setting max_samples to 10,000 means each of the n_estimators (default 100) in our RandomForestRegressor will only see 10,000 random samples from our DataFrame instead of the entire ~400,000.
In other words, each estimator will be looking at 40x fewer samples, which means we'll get faster computation speeds, but we should expect our results to worsen (simply because the model has fewer samples to learn patterns from).
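A sketch (assuming the X_train/y_train split from earlier):

```python
%%time
from sklearn.ensemble import RandomForestRegressor

# Each estimator will only see up to 10,000 randomly chosen samples
model = RandomForestRegressor(n_jobs=-1,
                              max_samples=10000,
                              random_state=42)
model.fit(X_train, y_train)
```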
Beautiful, that took far less time than the model with all the data.
With this, let's try tuning some hyperparameters.
Hyperparameter tuning with RandomizedSearchCV
You can increase n_iter to try more combinations of hyperparameters, but in our case, we'll try 20 and see where it gets us.
Remember, we're trying to reduce the amount of time it takes between experiments.
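A sketch of the search (the hyperparameter ranges here are illustrative, not the course's exact grid):

```python
%%time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rf_grid = {"n_estimators": np.arange(10, 100, 10),
           "max_depth": [None, 3, 5, 10],
           "min_samples_split": np.arange(2, 20, 2),
           "min_samples_leaf": np.arange(1, 20, 2),
           "max_features": [0.5, 1.0, "sqrt"],
           "max_samples": [10000]}

# Try 20 random combinations of the grid above, cross-validating each one
rs_model = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1, random_state=42),
                              param_distributions=rf_grid,
                              n_iter=20,
                              cv=5,
                              verbose=True)
rs_model.fit(X_train, y_train)
rs_model.best_params_
```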
Train a model with the best parameters
In a model I prepared earlier, I tried 100 different combinations of hyperparameters (setting n_iter to 100 in RandomizedSearchCV) and found the best results came from the ones you see below.
Note: This kind of search on my computer (n_iter = 100) took ~2 hours. So it's a set-it-off-and-come-back-later kind of experiment.
We'll instantiate a new model with these discovered hyperparameters and reset the max_samples back to its original value.
With these new hyperparameters, as well as using all the samples, we can see an improvement to our model's performance.
You can make a faster model by altering some of the hyperparameters, particularly by lowering n_estimators, since each increase in n_estimators is essentially building another small model.
However, lowering n_estimators or altering other hyperparameters may lead to poorer results.
Make predictions on test data
Now we've got a trained model, it's time to make predictions on the test data.
Remember what we've done.
Our model is trained on data prior to 2011. However, the test data is from May 1 2012 to November 2012.
So what we're doing is trying to use the patterns our model has learned in the training data to predict the sale price of a Bulldozer with characteristics it's never seen before but are assumed to be similar to that of those in the training data.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/home/daniel/code/zero-to-mastery-ml/section-3-structured-data-projects/end-to-end-bluebook-bulldozer-price-regression.ipynb Cell 93 in <cell line: 2>()
<a href='vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a22544954414e2d525458227d/home/daniel/code/zero-to-mastery-ml/section-3-structured-data-projects/end-to-end-bluebook-bulldozer-price-regression.ipynb#Y156sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0'>1</a> # Let's see how the model goes predicting on the test data
----> <a href='vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a22544954414e2d525458227d/home/daniel/code/zero-to-mastery-ml/section-3-structured-data-projects/end-to-end-bluebook-bulldozer-price-regression.ipynb#Y156sdnNjb2RlLXJlbW90ZQ%3D%3D?line=1'>2</a> model.predict(df_test)
, in ForestRegressor.predict(self, X)
982 check_is_fitted(self)
983 # Check data
--> 984 X = self._validate_X_predict(X)
986 # Assign chunk of trees to jobs
987 n_jobs, _, _ = _partition_estimators(self.n_estimators, self.n_jobs)
, in BaseForest._validate_X_predict(self, X)
596 """
597 Validate X whenever one tries to predict, apply, predict_proba."""
598 check_is_fitted(self)
--> 599 X = self._validate_data(X, dtype=DTYPE, accept_sparse="csr", reset=False)
600 if issparse(X) and (X.indices.dtype != np.intc or X.indptr.dtype != np.intc):
601 raise ValueError("No support for np.int64 index based sparse matrices")
, in BaseEstimator._validate_data(self, X, y, reset, validate_separately, cast_to_ndarray, **check_params)
508 def _validate_data(
509 self,
510 X="no_validation",
(...)
515 **check_params,
516 ):
517 """Validate input data and set or check the `n_features_in_` attribute.
518
519 Parameters
(...)
577 validated.
578 """
--> 579 self._check_feature_names(X, reset=reset)
581 if y is None and self._get_tags()["requires_y"]:
582 raise ValueError(
583 f"This {self.__class__.__name__} estimator "
584 "requires y to be passed, but the target y is None."
585 )
, in BaseEstimator._check_feature_names(self, X, reset)
501 if not missing_names and not unexpected_names:
502 message += (
503 "Feature names must be in the same order as they were in fit.\n"
504 )
--> 506 raise ValueError(message)
ValueError: The feature names should match those that were passed during fit.
Feature names unseen at fit time:
- saledate
Feature names seen at fit time, yet now missing:
- Backhoe_Mounting_is_missing
- Blade_Extension_is_missing
- Blade_Type_is_missing
- Blade_Width_is_missing
- Coupler_System_is_missing
- ...
Ahhh... the test data isn't in the same format as our other data, so we have to fix it. Let's create a function to preprocess our data.
Preprocessing the test data
Our model has been trained on data formatted in the same way as the training data.
This means in order to make predictions on the test data, we need to take the same steps we used to preprocess the training data to preprocess the test data.
Remember: Whatever you do to the training data, you have to do to the test data.
Let's create a function for doing so (by copying the preprocessing steps we used above).
Question: Where would this function break?
Hint: What if the test data had different missing values to the training data?
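Here's a sketch of such a function, mirroring the pandas steps from earlier (column and variable names are illustrative; note the hint above, as this fills values using statistics from whatever data it's given rather than from the training set):

```python
def preprocess_data(df):
    """Apply the same transformations we performed on the training data."""
    df = df.copy()

    # Add datetime features, then drop the original saledate column
    df["saleYear"] = df.saledate.dt.year
    df["saleMonth"] = df.saledate.dt.month
    df["saleDay"] = df.saledate.dt.day
    df["saleDayofweek"] = df.saledate.dt.dayofweek
    df["saleDayofyear"] = df.saledate.dt.dayofyear
    df.drop("saledate", axis=1, inplace=True)

    for label, content in df.items():
        if pd.api.types.is_numeric_dtype(content):
            # Fill numeric columns with the median and flag missing rows
            if pd.isnull(content).sum():
                df[label + "_is_missing"] = pd.isnull(content).astype(int)
                df[label] = content.fillna(content.median())
        else:
            # Flag missing rows, then turn categories into numbers (+1 so missing becomes 0)
            df[label + "_is_missing"] = pd.isnull(content).astype(int)
            df[label] = pd.Categorical(content).codes + 1

    return df
```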
Now we've got a function for preprocessing data, let's preprocess the test dataset into the same format as our training dataset.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/home/daniel/code/zero-to-mastery-ml/section-3-structured-data-projects/end-to-end-bluebook-bulldozer-price-regression.ipynb Cell 100 in <cell line: 2>()
<a href='vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a22544954414e2d525458227d/home/daniel/code/zero-to-mastery-ml/section-3-structured-data-projects/end-to-end-bluebook-bulldozer-price-regression.ipynb#Y166sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0'>1</a> # Make predictions on the test dataset using the best model
----> <a href='vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a22544954414e2d525458227d/home/daniel/code/zero-to-mastery-ml/section-3-structured-data-projects/end-to-end-bluebook-bulldozer-price-regression.ipynb#Y166sdnNjb2RlLXJlbW90ZQ%3D%3D?line=1'>2</a> test_preds = ideal_model.predict(df_test)
, in ForestRegressor.predict(self, X)
982 check_is_fitted(self)
983 # Check data
--> 984 X = self._validate_X_predict(X)
986 # Assign chunk of trees to jobs
987 n_jobs, _, _ = _partition_estimators(self.n_estimators, self.n_jobs)
, in BaseForest._validate_X_predict(self, X)
596 """
597 Validate X whenever one tries to predict, apply, predict_proba."""
598 check_is_fitted(self)
--> 599 X = self._validate_data(X, dtype=DTYPE, accept_sparse="csr", reset=False)
600 if issparse(X) and (X.indices.dtype != np.intc or X.indptr.dtype != np.intc):
601 raise ValueError("No support for np.int64 index based sparse matrices")
, in BaseEstimator._validate_data(self, X, y, reset, validate_separately, cast_to_ndarray, **check_params)
508 def _validate_data(
509 self,
510 X="no_validation",
(...)
515 **check_params,
516 ):
517 """Validate input data and set or check the `n_features_in_` attribute.
518
519 Parameters
(...)
577 validated.
578 """
--> 579 self._check_feature_names(X, reset=reset)
581 if y is None and self._get_tags()["requires_y"]:
582 raise ValueError(
583 f"This {self.__class__.__name__} estimator "
584 "requires y to be passed, but the target y is None."
585 )
, in BaseEstimator._check_feature_names(self, X, reset)
501 if not missing_names and not unexpected_names:
502 message += (
503 "Feature names must be in the same order as they were in fit.\n"
504 )
--> 506 raise ValueError(message)
ValueError: The feature names should match those that were passed during fit.
Feature names seen at fit time, yet now missing:
- auctioneerID_is_missing
We've found an error, and it's because our test dataset (after preprocessing) has 101 columns, whereas our training dataset (X_train) has 102 columns (after preprocessing).
Let's find the difference.
In this case, it's because the test dataset wasn't missing any auctioneerID fields.
To fix it, we'll add a column to the test dataset called auctioneerID_is_missing and fill it with False, since none of the auctioneerID fields are missing in the test dataset.
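A minimal fix:

```python
# The test set had no missing auctioneerID values, so the flag is False for every row
df_test["auctioneerID_is_missing"] = False
```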
There's one more step we have to do before we can make predictions on the test data.
And that's to line up the columns (the features) in our test dataset to match the columns in our training dataset.
As in, the order of the columns in the training dataset should match the order of the columns in our test dataset.
Note: As of Scikit-Learn 1.2, the order of columns that were fit on should match the order of columns that are predicted on.
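A sketch of lining the columns up (assuming X_train holds the training features):

```python
# Reorder the test DataFrame columns to match the training columns
df_test = df_test[X_train.columns]
```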
Now the test dataset column names and column order matches the training dataset, we should be able to make predictions on it using our trained model.
Looking at the Kaggle submission requirements, we see that if we wanted to make a submission, the data is required to be in a certain format: namely, a DataFrame containing the SalesID and the predicted SalePrice of each bulldozer.
Let's make it.
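A sketch of formatting the predictions (the output filename is illustrative):

```python
# Create a DataFrame in the Kaggle submission format
df_preds = pd.DataFrame()
df_preds["SalesID"] = df_test["SalesID"]
df_preds["SalePrice"] = test_preds

# Export to CSV for submission
df_preds.to_csv("../data/bluebook-for-bulldozers/test_predictions.csv", index=False)
```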
Feature Importance
Now that we've built a model which is able to make predictions, the people you share these predictions with (or yourself) might be curious about which parts of the data led to them.
This is where feature importance comes in. Feature importance seeks to figure out which different attributes of the data were most important when it comes to predicting the target variable.
In our case, after our model learned the patterns in the data, which bulldozer sale attributes were most important for predicting its overall sale price?
Beware: the default feature importances for random forests can lead to non-ideal results.
To find which features were most important to a machine learning model, a good idea is to search for something like "[MODEL NAME] feature importance".
Doing this for our RandomForestRegressor leads us to the feature_importances_ attribute.
Let's check it out.
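A sketch of visualizing the importances (assuming ideal_model has been fit on X_train):

```python
# Match each feature name to its importance score and plot the 20 most important
importances = (pd.DataFrame({"feature": X_train.columns,
                             "importance": ideal_model.feature_importances_})
               .sort_values("importance", ascending=False)
               .head(20))

importances.plot(x="feature", y="importance", kind="barh", figsize=(10, 7), legend=False);
```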
Extensions and Extra-curriculum
Extra-curriculum: read the pandas IO tools documentation page for info on the parquet/feather data formats.
See all of the pandas dtypes in the pandas user guide: https://pandas.pydata.org/docs/user_guide/basics.html#dtypes
Explore ML models which can handle missing values natively, such as sklearn.ensemble.HistGradientBoostingRegressor, CatBoost and XGBoost.