Fine-tuning
Fine-tuning adapts a pretrained model to a specific task with a smaller specialized dataset. This approach requires far less data and compute compared to training a model from scratch, which makes it a more accessible option for many users.
Transformers provides the Trainer API, a comprehensive set of training features for fine-tuning any of the models on the Hub.
[!TIP] Learn how to fine-tune models for other tasks in our Task Recipes section in Resources!
This guide will show you how to fine-tune a model with Trainer to classify Yelp reviews.
Log in to your Hugging Face account with your user token to ensure you can access gated models and share your models on the Hub.
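A minimal sketch of the login step, using the login helper from huggingface_hub (it prompts for your token if you don't pass one):

```python
from huggingface_hub import login

# Prompts for your Hugging Face user token, or pass it directly as login(token="...")
login()
```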
Start by loading the Yelp Reviews dataset and preprocessing it (tokenize, pad, and truncate) for training. Use map to preprocess the entire dataset in one step.
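A sketch of this step, using the yelp_review_full dataset from the Hub; the BERT checkpoint is only an example, so swap in the tokenizer that matches the model you plan to fine-tune:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")  # example checkpoint

def tokenize(batch):
    # Pad and truncate every review to the model's maximum input length
    return tokenizer(batch["text"], padding="max_length", truncation=True)

# map applies the tokenizer to the whole dataset in batches
tokenized_datasets = dataset.map(tokenize, batched=True)
```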
[!TIP] Fine-tune on a smaller subset of the full dataset to reduce the time it takes, as shown below. The results won't be as good as fine-tuning on the full dataset, but it is useful for making sure everything works as expected before committing to training on the full dataset.
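For example, to work with 1,000-example subsets (the sizes here are arbitrary):

```python
# Shuffle, then keep only 1,000 examples for training and evaluation
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```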
Trainer
Trainer is an optimized training loop for Transformers models, making it easy to start training right away without manually writing your own training code. Pick and choose from a wide range of training features in TrainingArguments such as gradient accumulation, mixed precision, and options for reporting and logging training metrics.
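For instance, a few of these options in TrainingArguments (the values below are illustrative, not recommendations):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="example_output",      # hypothetical checkpoint directory
    gradient_accumulation_steps=4,    # accumulate gradients over 4 steps before updating
    fp16=True,                        # mixed precision training (requires a compatible GPU)
    logging_steps=50,                 # log training metrics every 50 steps
    report_to="tensorboard",          # where to report the logged metrics
)
```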
Load a model and provide the number of expected labels (you can find this information on the Yelp Review dataset card).
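For example, with a BERT checkpoint and the 5 labels of the Yelp Reviews dataset (one per star rating):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased",  # example checkpoint; use the same one as your tokenizer
    num_labels=5,                   # Yelp Reviews has 5 star-rating classes
)
```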
[!TIP] The warning printed when loading the model is a reminder that the model's pretrained head is discarded and replaced with a randomly initialized classification head. The randomly initialized head needs to be fine-tuned on your specific task to output meaningful predictions.
With the model loaded, set up your training hyperparameters in TrainingArguments. Hyperparameters are variables that control the training process, such as the learning rate, batch size, and number of epochs, which in turn impact model performance. Selecting the right hyperparameters is important, and you should experiment with them to find the best configuration for your task.
For this guide, you can use the default hyperparameters which provide a good baseline to begin with. The only settings to configure in this guide are where to save the checkpoint, how to evaluate model performance during training, and pushing the model to the Hub.
Trainer requires a function to compute and report your metric. For a classification task, you'll use evaluate.load to load the accuracy function from the Evaluate library. Gather the predictions and labels and pass them to the metric's compute method to calculate the accuracy.
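A sketch of such a function (named compute_metrics here, matching the argument Trainer expects it to be passed through):

```python
import numpy as np
import evaluate

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # Convert logits to predicted class ids, then score them against the labels
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```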
Set up TrainingArguments with where to save the model and when to compute accuracy during training. The example below sets the evaluation strategy to "epoch", which reports the accuracy at the end of each epoch. Add push_to_hub=True to upload the model to the Hub after training.
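A sketch of this step that also constructs the Trainer and starts training, assuming the model, subset datasets, and compute_metrics function from the sketches above (the output directory name is just an example):

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="yelp_review_classifier",  # where checkpoints are saved (example name)
    eval_strategy="epoch",                # compute accuracy at the end of each epoch
    push_to_hub=True,                     # upload the model to the Hub after training
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
```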
Finally, use push_to_hub() to upload your model and tokenizer to the Hub.
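Assuming the trainer from the sketch above:

```python
# Uploads the trained model and an auto-generated model card to your Hub namespace
trainer.push_to_hub()
```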