Data Augmentation
Goals
In this notebook you're going to build a generator that can be used to help create data to train a classifier. There are many cases where this might be useful. If you are interested in any of these topics, you are welcome to explore the linked papers and articles!
With smaller datasets, GANs can provide useful data augmentation that substantially improves classifier performance.
You have one type of data already labeled and would like to make predictions on another related dataset for which you have no labels. (You'll learn about the techniques for this use case in future notebooks!)
You want to protect the privacy of the people who provided their information so you can provide access to a generator instead of real data.
You have input data with many missing values, where the input dimensions are correlated and you would like to train a model on complete inputs.
You would like to be able to identify a real-world abnormal feature in an image for the purpose of diagnosis, but have limited access to real examples of the condition.
In this assignment, you're going to be acting as a bug enthusiast — more on that later.
Learning Objectives
Understand some use cases for data augmentation and why GANs suit this task.
Implement a classifier that takes a mixed dataset of reals/fakes and analyze its accuracy.
Getting Started
Data Augmentation
Before you implement GAN-based data augmentation, you should know a bit about data augmentation in general, specifically for image datasets. It is very common practice to augment image-based datasets in ways that are appropriate for a given dataset. This may include having your dataloader randomly flip images across their vertical axis, randomly crop images to a particular size, or randomly add a bit of noise or color to an image in ways that are true-to-life.
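For instance, here is a minimal sketch of such a transform pipeline, assuming the usual torchvision stack (the specific transforms and parameters are illustrative, not the ones this notebook uses):

```python
from torchvision import transforms

# Illustrative augmentation pipeline for 32 x 32 color images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                      # flip across the vertical axis
    transforms.RandomCrop(32, padding=4),                   # pad, then crop back to 32 x 32
    transforms.ColorJitter(brightness=0.1, contrast=0.1),   # small, true-to-life color changes
    transforms.ToTensor(),
])
```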
In general, data augmentation helps to stop your model from overfitting to the data and allows you to make small datasets many times larger. However, a sufficiently powerful classifier often still overfits to the original examples, which is why GANs are particularly useful here: they can generate new images instead of simply modifying existing ones.
CIFAR
The CIFAR-10 and CIFAR-100 datasets are extremely widely used within machine learning -- they contain many thousands of “tiny” 32x32 color images of different classes representing relatively common real-world objects like airplanes and dogs, with 10 classes in CIFAR-10 and 100 classes in CIFAR-100. In CIFAR-100, there are 20 “superclasses” which each contain five classes. For example, the “fish” superclass contains “aquarium fish, flatfish, ray, shark, trout”. For the purposes of this assignment, you’ll be looking at a small subset of these images to simulate a small data regime, with only 40 images of each class for training.
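One way to simulate that small-data regime, shown here only as a sketch (the notebook supplies its own loader), is to keep the first 40 training examples of each CIFAR-100 class:

```python
import numpy as np
from torchvision import datasets, transforms
from torch.utils.data import Subset

# Keep only 40 training images per class to mimic a small dataset.
cifar100 = datasets.CIFAR100(".", train=True, download=True,
                             transform=transforms.ToTensor())
targets = np.array(cifar100.targets)
keep_indices = []
for c in range(100):
    keep_indices.extend(np.where(targets == c)[0][:40].tolist())
small_cifar100 = Subset(cifar100, keep_indices)
```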
Initializations
You will begin by importing some useful libraries and packages and defining a visualization function that has been provided. You will also be re-using your conditional generator and functions code from earlier assignments. This will let you control what class of images to augment for your classifier.
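The provided visualization helper is not reproduced in this text, but a rough equivalent, assuming the usual PyTorch/torchvision/matplotlib stack and images normalized to [-1, 1], might look like this:

```python
import matplotlib.pyplot as plt
from torchvision.utils import make_grid

def show_tensor_images(image_tensor, num_images=25, nrow=5):
    '''Display a batch of images in a grid (sketch of the provided visualization helper).'''
    image_tensor = (image_tensor + 1) / 2  # map [-1, 1] back to [0, 1] for display
    image_grid = make_grid(image_tensor[:num_images].detach().cpu(), nrow=nrow)
    plt.imshow(image_grid.permute(1, 2, 0).squeeze())
    plt.axis("off")
    plt.show()
```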
Generator
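The conditional generator itself is carried over from earlier assignments; the sketch below only shows the general shape of such a DCGAN-style generator for 3 x 32 x 32 images (the layer sizes and structure are assumptions for illustration, not the graded code):

```python
import torch
from torch import nn

class Generator(nn.Module):
    '''Sketch of a conditional DCGAN-style generator for 3 x 32 x 32 images.
    input_dim is z_dim + n_classes because the one-hot class vector is
    concatenated to the noise vector before being passed in.'''
    def __init__(self, input_dim=164, im_chan=3, hidden_dim=64):
        super().__init__()
        self.input_dim = input_dim
        self.gen = nn.Sequential(
            self._block(input_dim, hidden_dim * 4, kernel_size=4, stride=1, padding=0),       # 1x1 -> 4x4
            self._block(hidden_dim * 4, hidden_dim * 2, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
            self._block(hidden_dim * 2, hidden_dim, kernel_size=4, stride=2, padding=1),      # 8x8 -> 16x16
            nn.ConvTranspose2d(hidden_dim, im_chan, kernel_size=4, stride=2, padding=1),      # 16x16 -> 32x32
            nn.Tanh(),
        )

    def _block(self, in_channels, out_channels, kernel_size, stride, padding):
        return nn.Sequential(
            nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, noise):
        # (batch, input_dim) -> (batch, input_dim, 1, 1) so the conv stack can upsample it
        x = noise.view(len(noise), self.input_dim, 1, 1)
        return self.gen(x)
```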
Training
Now you can begin training your models. First, you will define some new parameters:
cifar100_shape: the number of pixels in each CIFAR image, which has dimensions 32 x 32 and three channels (for red, green, and blue), so 3 x 32 x 32
n_classes: the number of classes in CIFAR-100 (100 in total; for example, the insect classes are bee, beetle, butterfly, caterpillar, and cockroach)
And you also include the same parameters from previous assignments:
criterion: the loss function
n_epochs: the number of times you iterate through the entire dataset when training
z_dim: the dimension of the noise vector
display_step: how often to display/visualize the images
batch_size: the number of images per forward/backward pass
lr: the learning rate
device: the device type
Then, you want to set your generator's input dimension. Recall that for conditional GANs, the generator's input is the noise vector concatenated with the class vector.
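As a concrete sketch (the exact hyperparameter values in the notebook may differ; only the shapes and the input-dimension arithmetic matter here):

```python
import torch
from torch import nn

cifar100_shape = (3, 32, 32)   # channels x height x width
n_classes = 100

criterion = nn.BCEWithLogitsLoss()   # placeholder loss; the assignment fixes its own choice
n_epochs = 10000
z_dim = 64
display_step = 500
batch_size = 64
lr = 0.0002
device = "cuda" if torch.cuda.is_available() else "cpu"

# The conditional generator sees the noise vector concatenated with the
# one-hot class vector, so its input dimension is the sum of the two.
generator_input_dim = z_dim + n_classes
```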
Classifier
For the classifier, you will use the same code that you wrote in an earlier assignment (it also mirrors the earlier discriminator code, since a discriminator is essentially a real/fake classifier).
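For reference, here is a rough sketch of what such a classifier might look like (the architecture is an assumption for illustration; the assignment reuses your earlier code):

```python
import torch
from torch import nn

class Classifier(nn.Module):
    '''Sketch of a small CNN classifier for 3 x 32 x 32 images that outputs
    one logit per class; structurally similar to a multi-class discriminator.'''
    def __init__(self, im_chan=3, n_classes=100, hidden_dim=32):
        super().__init__()
        self.classifier = nn.Sequential(
            self._block(im_chan, hidden_dim),                      # 32x32 -> 16x16
            self._block(hidden_dim, hidden_dim * 2),               # 16x16 -> 8x8
            self._block(hidden_dim * 2, hidden_dim * 4),           # 8x8 -> 4x4
            nn.Conv2d(hidden_dim * 4, n_classes, kernel_size=4),   # 4x4 -> 1x1, one logit per class
        )

    def _block(self, in_channels, out_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, image):
        logits = self.classifier(image)
        return logits.view(len(logits), -1)  # (batch, n_classes)
```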
Pre-training (Optional)
You are provided with the code that was used to pre-train the models (GAN and classifier) given to you in this assignment. However, this is intended only for your personal curiosity -- for the assignment to run as intended, you should not use any checkpoints besides the ones given to you.
Tuning the Classifier
After two courses, you've probably had some fun debugging your GANs and have started to consider yourself a bug master. For this assignment, your mastery will be put to the test on some interesting bugs... well, bugs as in insects.
As a bug master, you want a classifier capable of classifying different species of bugs: bees, beetles, butterflies, caterpillars, and more. Luckily, you found a great dataset with a lot of animal species and objects, and you trained your classifier on that.
But the bug classes don't do as well as you would like. Now your plan is to train a GAN on the same data so it can generate new bugs to make your classifier better at distinguishing between all of your favorite bugs!
You will fine-tune your model by augmenting the original real data with fake data, and in the process you will see how these fake, GAN-generated bugs can increase the accuracy of your classifier. After this, you will prove your worth as a bug master.
Sampling Ratio
Suppose you've decided that, although you have this pre-trained general generator and this general classifier capable of identifying 100 classes with some accuracy (~17%), what you'd really like is a model that can classify the five different kinds of bugs in the dataset. You'll fine-tune your model by augmenting your data with the generated images. Keep in mind that both the generator and the classifier were trained on the same images: the 40 images per class you painstakingly found, so your generator may not be great. This is the caveat with data augmentation: ultimately, you are still bound by the real data that you have, even as you try to create more from it. To make your models even better, you would need to take some more bug photos, label them, and add them to your training set and/or use higher quality photos.
To start, you'll first need to write some code to sample a combination of real and generated images. Given a probability, p_real, you'll need to generate a combined tensor where roughly p_real of the returned images are sampled from the real images. Note that you should not interpolate the images here: you should choose each image from the real or fake set with a given probability. For example, if your real images are a tensor of [[1, 2, 3, 4, 5]] and your fake images are a tensor of [[-1, -2, -3, -4, -5]], and p_real = 0.2, two potential return values are [[1, -2, 3, -4, -5]] or [[-1, 2, -3, -4, -5]]. In addition, we will expect the images to remain in the same order to maintain their alignment with their labels (this applies to the fake images too!).
Optional hints for combine_sample
This code probably shouldn't be much longer than 3 lines.
You can index using a set of booleans which have the same length as your tensor.
You want to generate an unbiased sample, which you can do (for example) with torch.rand(length_reals) > p.
There are many approaches that will give a correct answer here. You may find torch.rand or torch.bernoulli useful.
You don't want to edit an argument in place, so you may find cur_tensor.clone() useful too, which makes a copy of cur_tensor.
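Following those hints, here is one possible sketch of combine_sample (not necessarily the graded solution, but it satisfies the description above):

```python
import torch

def combine_sample(real, fake, p_real):
    '''Return a tensor where each position keeps the real image with probability
    p_real and takes the corresponding fake image otherwise. Images stay at
    their original indices, so label alignment is preserved.'''
    target_images = real.clone()                                    # avoid editing the argument in place
    fake_mask = torch.rand(len(real), device=real.device) > p_real  # True roughly (1 - p_real) of the time
    target_images[fake_mask] = fake[fake_mask]
    return target_images
```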
Now you have a challenge: find a p_real and a generator image such that your classifier gets an average of a 51% accuracy or higher on the insects, when evaluated with the eval_augmentation function. You'll need to fill in find_optimal to find these parameters to solve this part! Note that if your answer takes a very long time to run, you may need to hard-code the solution it finds.
When you're training a generator, you will often have to look at different checkpoints and choose one that does the best (either empirically or using some evaluation method). Here, you are given four generator checkpoints: gen_1.pt, gen_2.pt, gen_3.pt, gen_4.pt. You'll also have some scratch area to write whatever code you'd like to solve this problem, but you must return a p_real and an image name of your selected generator checkpoint. You can hard-code/brute-force these numbers if you would like, but you are encouraged to try to solve this problem in a more general way. In practice, you would also want a test set (since it is possible to overfit on a validation set), but for simplicity you can just focus on the validation set.
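One general way to approach this is a brute-force grid search over candidate p_real values and the four checkpoints, sketched below; eval_augmentation is the provided evaluation function, and its exact call signature (as well as the candidate grid) is an assumption here:

```python
def find_optimal():
    '''Sketch of a brute-force search for a (p_real, checkpoint) pair that
    maximizes validation accuracy on the insect classes.'''
    gen_names = ["gen_1.pt", "gen_2.pt", "gen_3.pt", "gen_4.pt"]
    p_real_candidates = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0

    best_p_real, best_gen_name, best_score = None, None, float("-inf")
    for gen_name in gen_names:
        for p_real in p_real_candidates:
            score = eval_augmentation(p_real, gen_name)  # assumed signature of the provided function
            if score > best_score:
                best_p_real, best_gen_name, best_score = p_real, gen_name, score
    return best_p_real, best_gen_name
```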
You'll likely find that the worst performance comes when the classifier is trained on generated images alone: this corresponds to the case where you might be trying to hide the underlying examples from the classifier. Perhaps you don't want other people to know about your specific bugs!
Here's a visualization of what the generator is actually generating, with real examples of each class above the corresponding generated image.