Structured data learning with Wide, Deep, and Cross networks
Author: Khalid Salama
Date created: 2020/12/31
Last modified: 2025/01/03
Description: Using Wide & Deep and Deep & Cross networks for structured data classification.
Introduction
This example demonstrates how to do structured data classification using two modeling techniques:
- Wide & Deep models
- Deep & Cross models
Note that this example should be run with TensorFlow 2.5 or higher.
The dataset
This example uses the Covertype dataset from the UCI Machine Learning Repository. The task is to predict forest cover type from cartographic variables. The dataset includes 581,012 instances with 12 input features: 10 numerical features and 2 categorical features. Each instance is categorized into 1 of 7 classes.
Setup
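The setup cell is not reproduced here; a minimal sketch of the imports this example depends on (assuming TensorFlow 2.5+ with its bundled Keras) might look like this:

```python
import math

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```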
Prepare the data
First, let's load the dataset from the UCI Machine Learning Repository into a Pandas DataFrame:
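A minimal loading sketch, assuming the standard gzip-compressed, headerless CSV hosted at the UCI repository (pandas decompresses .gz files transparently):

```python
data_url = (
    "https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz"
)
raw_data = pd.read_csv(data_url, header=None)
print(f"Dataset shape: {raw_data.shape}")
```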
The two categorical features in the dataset are binary-encoded. We will convert this dataset representation to the typical representation, where each categorical feature is represented as a single integer value.
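One way to sketch this conversion, assuming the standard Covertype column layout (10 numerical columns, then 4 binary Wilderness_Area columns, then 40 binary Soil_Type columns, then the Cover_Type target); the string vocabularies (soil_type_1, area_type_1, and so on) are illustrative names:

```python
soil_type_values = [f"soil_type_{idx + 1}" for idx in range(40)]
wilderness_area_values = [f"area_type_{idx + 1}" for idx in range(4)]

# Collapse each block of binary indicator columns into a single string value
# by locating the index of the "hot" column in every row.
soil_type = raw_data.loc[:, 14:53].apply(
    lambda row: soil_type_values[row.to_numpy().nonzero()[0][0]], axis=1
)
wilderness_area = raw_data.loc[:, 10:13].apply(
    lambda row: wilderness_area_values[row.to_numpy().nonzero()[0][0]], axis=1
)

CSV_HEADER = [
    "Elevation",
    "Aspect",
    "Slope",
    "Horizontal_Distance_To_Hydrology",
    "Vertical_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways",
    "Hillshade_9am",
    "Hillshade_Noon",
    "Hillshade_3pm",
    "Horizontal_Distance_To_Fire_Points",
    "Wilderness_Area",
    "Soil_Type",
    "Cover_Type",
]

data = pd.concat(
    [raw_data.loc[:, 0:9], wilderness_area, soil_type, raw_data.loc[:, 54]],
    axis=1,
    ignore_index=True,
)
data.columns = CSV_HEADER

# Make the target label zero-based (classes 1..7 become 0..6).
data["Cover_Type"] = data["Cover_Type"] - 1
print(f"Dataset shape: {data.shape}")
```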
The shape of the DataFrame shows there are 13 columns per sample (12 for the features and 1 for the target label).
Let's split the data into training (85%) and test (15%) sets.
Next, store the training and test data in separate CSV files.
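A sketch of the split-and-save step; the file names, the per-class sampling, and the fixed 0.85 threshold are illustrative choices:

```python
train_splits = []
test_splits = []

# Sample within each class so both splits contain all 7 cover types.
for _, group_data in data.groupby("Cover_Type"):
    random_selection = np.random.rand(len(group_data)) <= 0.85
    train_splits.append(group_data[random_selection])
    test_splits.append(group_data[~random_selection])

train_data = pd.concat(train_splits).sample(frac=1).reset_index(drop=True)
test_data = pd.concat(test_splits).sample(frac=1).reset_index(drop=True)

print(f"Train split size: {len(train_data)}")
print(f"Test split size: {len(test_data)}")

train_data_file = "train_data.csv"
test_data_file = "test_data.csv"

train_data.to_csv(train_data_file, index=False)
test_data.to_csv(test_data_file, index=False)
```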
Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and encoding the input features with respect to their types.
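A sketch of what this metadata could look like, derived from the DataFrame built above (the constant names are illustrative):

```python
TARGET_FEATURE_NAME = "Cover_Type"
NUM_CLASSES = 7

NUMERIC_FEATURE_NAMES = [
    "Elevation",
    "Aspect",
    "Slope",
    "Horizontal_Distance_To_Hydrology",
    "Vertical_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways",
    "Hillshade_9am",
    "Hillshade_Noon",
    "Hillshade_3pm",
    "Horizontal_Distance_To_Fire_Points",
]

# Vocabulary of each categorical feature, read off the converted DataFrame.
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
    "Soil_Type": sorted(data["Soil_Type"].unique().tolist()),
    "Wilderness_Area": sorted(data["Wilderness_Area"].unique().tolist()),
}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES

# Per-column defaults used when parsing the CSV files: floats for numerical
# columns and the target, a placeholder string for categorical columns.
COLUMN_DEFAULTS = [
    [0.0] if name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
    for name in CSV_HEADER
]
```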
Experiment setup
Next, let's define an input function that reads and parses the file, then converts features and labels into a tf.data.Dataset for training or evaluation.
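A minimal sketch built on tf.data.experimental.make_csv_dataset, which parses CSV rows into a dictionary of feature tensors plus a label tensor:

```python
def get_dataset_from_csv(csv_file_path, batch_size, shuffle=False):
    dataset = tf.data.experimental.make_csv_dataset(
        csv_file_path,
        batch_size=batch_size,
        column_names=CSV_HEADER,
        column_defaults=COLUMN_DEFAULTS,
        label_name=TARGET_FEATURE_NAME,
        num_epochs=1,
        header=True,
        shuffle=shuffle,
    )
    return dataset.cache()
```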
Here we configure the parameters and implement the procedure for running a training and evaluation experiment given a model.
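A sketch of such a procedure; the hyperparameter values here are illustrative, not necessarily those used to produce the accuracies quoted below:

```python
learning_rate = 0.001
dropout_rate = 0.1
batch_size = 256
num_epochs = 50
hidden_units = [32, 32]


def run_experiment(model):
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss=keras.losses.SparseCategoricalCrossentropy(),
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

    train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True)
    test_dataset = get_dataset_from_csv(test_data_file, batch_size)

    print("Start training the model...")
    model.fit(train_dataset, epochs=num_epochs)
    print("Model training finished")

    _, accuracy = model.evaluate(test_dataset, verbose=0)
    print(f"Test accuracy: {round(accuracy * 100, 2)}%")
```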
Create model inputs
Now, define the inputs for the models as a dictionary, where the key is the feature name, and the value is a keras.layers.Input tensor with the corresponding feature shape and data type.
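A sketch, assuming the numerical features arrive as scalars and the two categorical features as strings (matching the conversion above):

```python
def create_model_inputs():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        if feature_name in NUMERIC_FEATURE_NAMES:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype=tf.float32
            )
        else:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype=tf.string
            )
    return inputs
```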
Encode features
We create two representations of our input features: sparse and dense.
- In the sparse representation, the categorical features are encoded with one-hot encoding using the CategoryEncoding layer. This representation can be useful for the model to memorize particular feature values to make certain predictions.
- In the dense representation, the categorical features are encoded with low-dimensional embeddings using the Embedding layer. This representation helps the model to generalize well to unseen feature combinations.
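A sketch of an encoding helper covering both representations, assuming a TF version where the preprocessing layers (StringLookup, CategoryEncoding) are available under keras.layers; the square-root rule for the embedding size is one common heuristic, not a requirement:

```python
def encode_inputs(inputs, use_embedding=False):
    encoded_features = []
    for feature_name in inputs:
        if feature_name in CATEGORICAL_FEATURE_NAMES:
            vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Map the string values to integer indices. The vocabulary covers
            # every value in the data, so no OOV index is reserved.
            lookup = layers.StringLookup(
                vocabulary=vocabulary, mask_token=None, num_oov_indices=0
            )
            index = lookup(inputs[feature_name])
            if use_embedding:
                # Dense representation: a low-dimensional embedding.
                embedding_dims = int(math.sqrt(len(vocabulary)))
                encoded_feature = layers.Embedding(
                    input_dim=len(vocabulary), output_dim=embedding_dims
                )(index)
            else:
                # Sparse representation: one-hot encode the integer index.
                encoded_feature = layers.CategoryEncoding(
                    num_tokens=len(vocabulary), output_mode="one_hot"
                )(index)
        else:
            # Use numerical features as-is, adding a feature axis.
            encoded_feature = tf.expand_dims(inputs[feature_name], -1)
        encoded_features.append(encoded_feature)
    return layers.concatenate(encoded_features)
```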
Experiment 1: a baseline model
In the first experiment, let's create a multi-layer feed-forward network, where the categorical features are one-hot encoded.
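A sketch of the baseline, built from the helpers above with a Dense-BatchNorm-ReLU-Dropout block per hidden layer:

```python
def create_baseline_model():
    inputs = create_model_inputs()
    # Sparse (one-hot) representation of the categorical features.
    features = encode_inputs(inputs)

    for units in hidden_units:
        features = layers.Dense(units)(features)
        features = layers.BatchNormalization()(features)
        features = layers.ReLU()(features)
        features = layers.Dropout(dropout_rate)(features)

    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(features)
    return keras.Model(inputs=inputs, outputs=outputs)


baseline_model = create_baseline_model()
```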
Let's run it:
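With the pieces above in place, training and evaluation could be invoked like so, assuming the run_experiment helper sketched earlier:

```python
run_experiment(baseline_model)
```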
The baseline model achieves ~76% test accuracy.
Experiment 2: Wide & Deep model
In the second experiment, we create a Wide & Deep model. The wide part of the model is a linear model, while the deep part of the model is a multi-layer feed-forward network.
We use the sparse representation of the input features in the wide part of the model, and the dense representation of the input features in the deep part of the model.
Note that every input feature contributes to both parts of the model, with different representations.
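A sketch of the architecture, under the same assumptions as the helpers above:

```python
def create_wide_and_deep_model():
    inputs = create_model_inputs()

    # Wide part: sparse (one-hot) features through a linear pathway.
    wide = encode_inputs(inputs)
    wide = layers.BatchNormalization()(wide)

    # Deep part: dense (embedded) features through a feed-forward network.
    deep = encode_inputs(inputs, use_embedding=True)
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([wide, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    return keras.Model(inputs=inputs, outputs=outputs)


wide_and_deep_model = create_wide_and_deep_model()
```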
Let's run it:
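As before, assuming the sketched helper:

```python
run_experiment(wide_and_deep_model)
```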
The wide and deep model achieves ~79% test accuracy.
Experiment 3: Deep & Cross model
In the third experiment, we create a Deep & Cross model. The deep part of this model is the same as the deep part created in the previous experiment. The key idea of the cross part is to apply explicit feature crossing in an efficient way, where the degree of cross features grows with layer depth.
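A sketch of the architecture; the number of cross layers (3 here) is an illustrative choice. Each cross layer computes x_next = x0 * (W x + b) + x, so stacking layers raises the degree of the feature interactions by one per layer:

```python
def create_deep_and_cross_model():
    inputs = create_model_inputs()
    x0 = encode_inputs(inputs, use_embedding=True)

    # Cross part: each layer computes x_next = x0 * (W x + b) + x, where
    # x0 is the original input vector, so the interaction degree grows
    # with depth.
    cross = x0
    for _ in range(3):
        units = cross.shape[-1]
        x = layers.Dense(units)(cross)
        cross = x0 * x + cross
    cross = layers.BatchNormalization()(cross)

    # Deep part: the same feed-forward stack as in the previous experiment.
    deep = x0
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([cross, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    return keras.Model(inputs=inputs, outputs=outputs)


deep_and_cross_model = create_deep_and_cross_model()
```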
Let's run it:
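Once more, assuming the sketched helper:

```python
run_experiment(deep_and_cross_model)
```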
The deep and cross model achieves ~81% test accuracy.
Conclusion
You can use Keras Preprocessing Layers to easily handle categorical features with different encoding mechanisms, including one-hot encoding and feature embedding. In addition, different model architectures, such as wide, deep, and cross networks, have different advantages with respect to different dataset properties. You can explore using them independently or combining them to achieve the best results for your dataset.