Text classification using Decision Forests and pretrained embeddings
Author: Gitesh Chawda
Date created: 09/05/2022
Last modified: 09/05/2022
Description: Using TensorFlow Decision Forests for text classification.
Introduction
TensorFlow Decision Forests (TF-DF) is a collection of state-of-the-art algorithms for Decision Forest models that are compatible with Keras APIs. The module includes Random Forests, Gradient Boosted Trees, and CART, and can be used for regression, classification, and ranking tasks.
In this example we will use Gradient Boosted Trees with pretrained embeddings to classify disaster-related tweets.
Install TensorFlow Decision Forests using the following command: pip install tensorflow_decision_forests
Imports
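The example relies on pandas, TensorFlow, TensorFlow Hub, TF-DF, and matplotlib. A typical import cell looks like this (the module aliases are the conventional ones):

```python
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_hub as hub
import tensorflow_decision_forests as tfdf
import matplotlib.pyplot as plt
```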
Get the data
The dataset is available on Kaggle.
Dataset description:
Files:
train.csv: the training set
Columns:
id: a unique identifier for each tweet
text: the text of the tweet
location: the location the tweet was sent from (may be blank)
keyword: a particular keyword from the tweet (may be blank)
target: in train.csv only, this denotes whether a tweet is about a real disaster (1) or not (0)
The dataset includes 7613 samples with 5 columns:
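A minimal loading step, assuming train.csv has been downloaded from Kaggle into the working directory (the file path is illustrative):

```python
# Turn the .csv file into a pandas DataFrame
df = pd.read_csv("train.csv")

print(df.shape)
print(df.head())
```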
Shuffling and dropping unnecessary columns:
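For instance (the seed is illustrative; id, keyword, and location are dropped because only text and target are used below):

```python
# Shuffle all rows
df_shuffled = df.sample(frac=1, random_state=42)
# Drop columns that are not needed for classification
df_shuffled = df_shuffled.drop(["id", "keyword", "location"], axis=1)
df_shuffled = df_shuffled.reset_index(drop=True)
```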
Printing information about the shuffled dataframe:
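Using the df_shuffled dataframe from the previous step:

```python
df_shuffled.info()
```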
Total number of "disaster" and "non-disaster" tweets:
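The class balance can be checked with value_counts:

```python
print(df_shuffled["target"].value_counts())
```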
Let's preview a few samples:
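For example, printing the first five rows of df_shuffled:

```python
for index, example in enumerate(df_shuffled[:5].itertuples()):
    print(f"Example #{index}")
    print(f"\tTarget: {example.target}")
    print(f"\tText: {example.text}")
```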
Splitting dataset into training and test sets:
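One way to split, holding out a fraction of the data for testing (the 10% fraction and the seed are illustrative):

```python
test_df = df_shuffled.sample(frac=0.1, random_state=42)
train_df = df_shuffled.drop(test_df.index)

print(f"Using {len(train_df)} samples for training and {len(test_df)} for testing")
```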
Total number of "disaster" and "non-disaster" tweets in the training and test data:
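Using the train_df and test_df splits defined above:

```python
print("Training data:")
print(train_df["target"].value_counts())

print("Test data:")
print(test_df["target"].value_counts())
```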
Convert data to a tf.data.Dataset
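TF-DF consumes tf.data.Dataset objects. A helper along these lines pairs each tweet with its label and batches the data (the batch size of 100 is illustrative):

```python
def create_dataset(dataframe):
    # Pair raw tweet text with its 0/1 label
    dataset = tf.data.Dataset.from_tensor_slices(
        (dataframe["text"].to_numpy(), dataframe["target"].to_numpy())
    )
    dataset = dataset.batch(100)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    return dataset


train_ds = create_dataset(train_df)
test_ds = create_dataset(test_df)
```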
Downloading pretrained embeddings
The Universal Sentence Encoder embeddings encode text into high-dimensional vectors that can be used for text classification, semantic similarity, clustering, and other natural language tasks. They are trained on a variety of data sources and a variety of tasks. Their input is variable-length English text and their output is a 512-dimensional vector.
To learn more about these pretrained embeddings, see Universal Sentence Encoder.
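The encoder can be loaded from TensorFlow Hub as a Keras layer; embedding a sample sentence confirms the 512-dimensional output:

```python
sentence_encoder_layer = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4"
)

embedding = sentence_encoder_layer(tf.constant(["A sample tweet"]))
print(embedding.shape)  # (1, 512)
```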
Creating our models
We create two models. In the first model (model_1), raw text is first encoded via pretrained embeddings and then passed to a Gradient Boosted Trees model for classification. In the second model (model_2), raw text is passed directly to the Gradient Boosted Trees model.
Building model_1
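One way to wire this up is to wrap the encoder in a small Keras model and hand it to TF-DF as a preprocessing step:

```python
inputs = layers.Input(shape=(), dtype=tf.string)
outputs = sentence_encoder_layer(inputs)
# The preprocessor turns each raw tweet into a 512-dimensional embedding
preprocessor = keras.Model(inputs=inputs, outputs=outputs)

model_1 = tfdf.keras.GradientBoostedTreesModel(preprocessing=preprocessor)
```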
Building model_2
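Without a preprocessing step, TF-DF consumes the raw text feature directly:

```python
model_2 = tfdf.keras.GradientBoostedTreesModel()
```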
Train the models
We compile our models by passing the metrics Accuracy, Recall, Precision, and AUC. When it comes to the loss, TF-DF automatically detects the best loss for the task (classification or regression) and prints it in the model summary.
Also, because TF-DF models are batch-trained rather than trained with mini-batch gradient descent, they do not need a validation dataset to monitor overfitting or to stop training early. Some algorithms do not use a validation dataset (e.g. Random Forest) while others do (e.g. Gradient Boosted Trees). If a validation dataset is needed, it is extracted automatically from the training dataset.
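Compiling and fitting both models might look like this (no loss is specified, since TF-DF selects one automatically):

```python
model_1.compile(metrics=["Accuracy", "Recall", "Precision", "AUC"])
model_2.compile(metrics=["Accuracy", "Recall", "Precision", "AUC"])

model_1.fit(train_ds)
model_2.fit(train_ds)
```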
Printing the training logs of model_1 and model_2:
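TF-DF exposes per-tree training statistics through an inspector:

```python
logs_1 = model_1.make_inspector().training_logs()
print(logs_1)

logs_2 = model_2.make_inspector().training_logs()
print(logs_2)
```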
The model.summary() method prints a variety of information about your decision tree model, including model type, task, input features, and feature importance.
Plotting training metrics
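The inspector's logs record accuracy and loss as trees are added, so a helper like the following can plot both curves (the field names follow TF-DF's TrainLog records):

```python
def plot_curve(logs):
    plt.figure(figsize=(12, 4))

    plt.subplot(1, 2, 1)
    plt.plot([log.num_trees for log in logs], [log.evaluation.accuracy for log in logs])
    plt.xlabel("Number of trees")
    plt.ylabel("Accuracy")

    plt.subplot(1, 2, 2)
    plt.plot([log.num_trees for log in logs], [log.evaluation.loss for log in logs])
    plt.xlabel("Number of trees")
    plt.ylabel("Loss")

    plt.show()


plot_curve(logs_1)
plot_curve(logs_2)
```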
Evaluating on test data
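Both models can be evaluated against the held-out test set:

```python
results_1 = model_1.evaluate(test_ds, return_dict=True, verbose=0)
results_2 = model_2.evaluate(test_ds, return_dict=True, verbose=0)

for name, value in results_1.items():
    print(f"model_1 {name}: {value:.4f}")
for name, value in results_2.items():
    print(f"model_2 {name}: {value:.4f}")
```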
Predicting on test data
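A quick spot check on a few held-out tweets, rounding the model's output probability to a 0/1 label (the number of samples shown is arbitrary):

```python
for _, row in test_df.head(5).iterrows():
    text = tf.expand_dims(row["text"], axis=0)
    preds = model_1.predict(text, verbose=0)
    pred_label = int(tf.round(tf.squeeze(preds)))
    print(f"Text: {row['text']}")
    print(f"Prediction: {pred_label} (ground truth: {row['target']})\n")
```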
Concluding remarks
The TensorFlow Decision Forests package provides powerful models that work especially well with structured data. In our experiments, the Gradient Boosted Trees model with pretrained embeddings achieved 81.6% test accuracy, while the plain Gradient Boosted Trees model reached only 54.4%.