
Evaluating GANs

Goals

In this notebook, you're going to gain a better understanding of some of the challenges that come with evaluating GANs, and learn about one response that helps alleviate them: Fréchet Inception Distance (FID).

Learning Objectives

  1. Understand the challenges associated with evaluating GANs.

  2. Write code to evaluate the Fréchet Inception Distance.

Challenges With Evaluating GANs

Loss is Uninformative of Performance

One aspect that makes evaluating GANs challenging is that the loss tells you little about their performance. Unlike with classifiers, where a low loss on a test set indicates better performance, a low loss for the generator or discriminator does not mean the generated images are better; it may simply suggest that learning has stopped.

No Clear Non-human Metric

If you define the goal of a GAN as "generating images which look real to people," then it's technically possible to measure this directly: you can ask people to act as a discriminator. However, this takes significant time and money, so ideally you would use a proxy for it. There is also no "perfect" discriminator that can differentiate reals from fakes; if there were, a lot of machine learning tasks would already be solved 😉

In this notebook, you will implement Fréchet Inception Distance, one method which aims to solve these issues.

Getting Started

For this notebook, you will again be using CelebA. You will start by loading a pre-trained generator which has been trained on CelebA.

Here, you will import some useful libraries and packages. You will also be provided with the generator and noise code from earlier assignments.

import torch
import numpy as np
from torch import nn
from tqdm.auto import tqdm
from torchvision import transforms
from torchvision.datasets import CelebA
from torchvision.utils import make_grid
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
torch.manual_seed(0) # Set for our testing purposes, please do not change!

class Generator(nn.Module):
    '''
    Generator Class
    Values:
        z_dim: the dimension of the noise vector, a scalar
        im_chan: the number of channels in the images, fitted for the dataset used, a scalar
              (CelebA is rgb, so 3 is your default)
        hidden_dim: the inner dimension, a scalar
    '''
    def __init__(self, z_dim=10, im_chan=3, hidden_dim=64):
        super(Generator, self).__init__()
        self.z_dim = z_dim
        # Build the neural network
        self.gen = nn.Sequential(
            self.make_gen_block(z_dim, hidden_dim * 8),
            self.make_gen_block(hidden_dim * 8, hidden_dim * 4),
            self.make_gen_block(hidden_dim * 4, hidden_dim * 2),
            self.make_gen_block(hidden_dim * 2, hidden_dim),
            self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True),
        )

    def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
        '''
        Function to return a sequence of operations corresponding to a generator block of DCGAN;
        a transposed convolution, a batchnorm (except in the final layer), and an activation.
        Parameters:
            input_channels: how many channels the input feature representation has
            output_channels: how many channels the output feature representation should have
            kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
            stride: the stride of the convolution
            final_layer: a boolean, true if it is the final layer and false otherwise
                      (affects activation and batchnorm)
        '''
        if not final_layer:
            return nn.Sequential(
                nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
                nn.BatchNorm2d(output_channels),
                nn.ReLU(inplace=True),
            )
        else:
            return nn.Sequential(
                nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
                nn.Tanh(),
            )

    def forward(self, noise):
        '''
        Function for completing a forward pass of the generator: Given a noise tensor,
        returns generated images.
        Parameters:
            noise: a noise tensor with dimensions (n_samples, z_dim)
        '''
        x = noise.view(len(noise), self.z_dim, 1, 1)
        return self.gen(x)

def get_noise(n_samples, z_dim, device='cpu'):
    '''
    Function for creating noise vectors: Given the dimensions (n_samples, z_dim)
    creates a tensor of that shape filled with random numbers from the normal distribution.
    Parameters:
        n_samples: the number of samples to generate, a scalar
        z_dim: the dimension of the noise vector, a scalar
        device: the device type
    '''
    return torch.randn(n_samples, z_dim, device=device)

Loading the Pre-trained Model

Now, you can set the arguments for the model and load the dataset:

  • z_dim: the dimension of the noise vector

  • image_size: the image size of the input to Inception (more details in the following section)

  • device: the device type

z_dim = 64
image_size = 299
device = 'cuda'

transform = transforms.Compose([
    transforms.Resize(image_size),
    transforms.CenterCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

in_coursera = True # Set this to false if you're running this outside Coursera
if in_coursera:
    import numpy as np
    data = torch.Tensor(np.load('fid_images_tensor.npz', allow_pickle=True)['arr_0'])
    dataset = torch.utils.data.TensorDataset(data, data)
else:
    dataset = CelebA(".", download=True, transform=transform)

Then, you can load and initialize the model with weights from a pre-trained model. This allows you to use the pre-trained model as if you trained it yourself.

gen = Generator(z_dim).to(device)
gen.load_state_dict(torch.load("pretrained_celeba.pth", map_location=torch.device(device))["gen"])
gen = gen.eval()

Inception-v3 Network

Inception-v3 is a neural network trained on ImageNet to classify objects. You may recall from the lectures that ImageNet has over 1 million images to train on. As a result, Inception-v3 does a good job of detecting features and classifying images. Here, you will load Inception-v3 as inception_model.

from torchvision.models import inception_v3
inception_model = inception_v3(pretrained=False)
inception_model.load_state_dict(torch.load("inception_v3_google-1a9a5a14.pth"))
inception_model.to(device)
inception_model = inception_model.eval() # Evaluation mode

Fréchet Inception Distance

Fréchet Inception Distance (FID) was proposed as an improvement over Inception Score and still uses the Inception-v3 network as part of its calculation. However, instead of using the classification labels of the Inception-v3 network, it uses the output from an earlier layer—the layer right before the labels. This is often called the feature layer. Research has shown that deep convolutional neural networks trained on difficult tasks, like classifying many classes, build increasingly sophisticated representations of features going deeper into the network. For example, the first few layers may learn to detect different kinds of edges and curves, while the later layers may have neurons that fire in response to human faces.

To get the feature layer of a convolutional neural network, you can replace the final fully connected layer with an identity layer that simply returns whatever input it received, unchanged. This essentially removes the final classification layer and leaves you with the intermediate outputs from the layer before.

Optional hint for inception_model.fc
  1. You may find torch.nn.Identity() helpful.

# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: inception_model.fc

# You want to replace the final fully-connected (fc) layer
# with an identity function layer to cut off the classification
# layer and get a feature extractor
#### START CODE HERE ####
inception_model.fc = torch.nn.Identity()
#### END CODE HERE ####
# UNIT TEST
test_identity_noise = torch.randn(100, 100)
assert torch.equal(test_identity_noise, inception_model.fc(test_identity_noise))
print("Success!")
Success!
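As a quick illustration of what this change does (not part of the graded code, and assuming the cells above have run), the modified network should now return the pooled feature vector from the layer before the old classification head rather than class scores:

# Illustrative check: with the fc layer replaced by Identity, Inception-v3 in eval mode
# outputs the 2048-dimensional feature vector for each image directly.
with torch.no_grad():
    dummy_images = torch.randn(2, 3, 299, 299, device=device)  # two random 299x299 RGB "images"
    features = inception_model(dummy_images)
print(features.shape)  # expected: torch.Size([2, 2048])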

Fréchet Distance

Fréchet distance uses the values from the feature layer for two sets of images, say reals and fakes, and compares different statistical properties between them to see how different they are. In its original form, Fréchet distance measures the shortest "leash" needed to traverse two curves simultaneously, so the most intuitive explanation is the "minimum leash distance" between two curves. Imagine yourself and your dog, each walking along one of two curves. If you walked on one curve and your dog, attached to a leash, walked on the other at the same pace, what is the least amount of leash you could give your dog so that you never need to give them more slack during your walk? In this way, the Fréchet distance measures the similarity between the two curves.

The basic idea is similar for calculating the Fréchet distance between two probability distributions. You'll start by seeing what this looks like in one-dimensional, also called univariate, space.

Univariate Fréchet Distance

You can calculate the distance between two normal distributions $X$ and $Y$ with means $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$, as:

d(X,Y) = (\mu_X-\mu_Y)^2 + (\sigma_X-\sigma_Y)^2
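As a quick illustration (hypothetical numbers, not part of the assignment), here is that formula evaluated in code for two univariate normals:

# Univariate Fréchet distance between two example 1-D Gaussians (made-up values)
mu_x, sigma_x = 0.0, 1.0   # X ~ N(0, 1)
mu_y, sigma_y = 2.0, 3.0   # Y ~ N(2, 3)
d = (mu_x - mu_y) ** 2 + (sigma_x - sigma_y) ** 2
print(d)  # (0 - 2)^2 + (1 - 3)^2 = 8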

Pretty simple, right? Now you can see how this extends to multi-dimensional, also called multivariate, space.

Multivariate Fréchet Distance

Covariance

To find the Fréchet distance between two multivariate normal distributions, you first need to find the covariance instead of the standard deviation. The covariance, which is the multivariate version of variance (the square of standard deviation), is represented using a square matrix where the side length is equal to the number of dimensions. Since the feature vectors you will be using have 2048 values/weights, the covariance matrix will be 2048 x 2048. But for the sake of an example, this is a covariance matrix in a two-dimensional space:

\Sigma = \left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right)

The value at location $(i, j)$ corresponds to the covariance of dimension $i$ with dimension $j$. Since the covariance of $i$ with $j$ and of $j$ with $i$ are equivalent, the matrix is always symmetric with respect to the diagonal. The diagonal holds the covariance of each dimension with itself, which is its variance. In this example, there are zeros everywhere except the diagonal, meaning the two dimensions are independent of one another: they are completely unrelated.
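If it helps to connect this to code, a covariance matrix can be estimated from samples, for example with np.cov. This is a small sketch with made-up data; later in the notebook you will use a get_covariance helper that does the same thing for the Inception features:

import numpy as np

# Estimate a 2 x 2 covariance matrix from 10,000 samples of two independent
# standard normal dimensions; it should be close to the identity matrix above.
samples = np.random.randn(10000, 2)            # rows are samples, columns are dimensions
estimated_cov = np.cov(samples, rowvar=False)  # rowvar=False: the columns are the variables
print(estimated_cov.round(2))                  # approximately [[1, 0], [0, 1]]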

The following code cell will visualize this matrix.

# import os
# os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
from torch.distributions import MultivariateNormal
import seaborn as sns # This is for visualization

mean = torch.Tensor([0, 0]) # Center the mean at the origin
covariance = torch.Tensor( # This matrix shows independence - there are only non-zero values on the diagonal
    [[1, 0],
     [0, 1]]
)
independent_dist = MultivariateNormal(mean, covariance)
samples = independent_dist.sample((10000,))
res = sns.jointplot(samples[:, 0], samples[:, 1], kind="kde")
plt.show()
[Output: joint KDE plot of the 10,000 samples from the independent two-dimensional normal distribution]

Now, here's an example of a multivariate normal distribution that has covariance:

\Sigma = \left(\begin{array}{cc} 2 & -1\\ -1 & 2 \end{array}\right)

And see how it looks:

mean = torch.Tensor([0, 0])
covariance = torch.Tensor(
    [[2, -1],
     [-1, 2]]
)
covariant_dist = MultivariateNormal(mean, covariance)
samples = covariant_dist.sample((10000,))
res = sns.jointplot(samples[:, 0], samples[:, 1], kind="kde")
plt.show()
[Output: joint KDE plot of the 10,000 samples from the correlated two-dimensional normal distribution]

Formula

Based on the paper "The Fréchet distance between multivariate normal distributions" by Dowson and Landau (1982), the Fréchet distance between two multivariate normal distributions $X$ and $Y$ is:

d(X, Y) = \Vert\mu_X-\mu_Y\Vert^2 + \mathrm{Tr}\left(\Sigma_X+\Sigma_Y - 2 \sqrt{\Sigma_X \Sigma_Y}\right)

Similar to the formula for univariate Fréchet distance, you calculate the distance between the means and the distance between the covariance matrices (the multivariate counterpart of the standard deviations). The covariance term changes slightly here, as it includes a matrix product and a matrix square root. $\mathrm{Tr}$ refers to the trace, the sum of the diagonal elements of a matrix.
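As a worked example, take the two distributions visualized above: equal means at the origin, $\Sigma_X = I$ and $\Sigma_Y = \left(\begin{array}{cc} 2 & -1\\ -1 & 2 \end{array}\right)$. The mean term is zero, and $\Sigma_X \Sigma_Y = \Sigma_Y$, whose eigenvalues are $1$ and $3$, so $\mathrm{Tr}\left(\sqrt{\Sigma_X \Sigma_Y}\right) = 1 + \sqrt{3}$. That gives $d(X, Y) = 2 + 4 - 2(1 + \sqrt{3}) = 4 - 2\sqrt{3} \approx 0.54$, which is exactly the value checked in the unit test further below.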

Now you can implement this!

Optional hints for frechet_distance
  1. You want to implement the above equation in code.

  2. You might find the functions torch.norm and torch.trace helpful here.

  3. A matrix_sqrt function is defined for you above; use it instead of torch.sqrt(), which takes only the elementwise square root rather than the matrix square root.

  4. You can also use the @ symbol for matrix multiplication.

import scipy.linalg

# This is the matrix square root function you will be using
def matrix_sqrt(x):
    '''
    Function that takes in a matrix and returns the square root of that matrix.
    For an input matrix A, the output matrix B would be such that B @ B is the matrix A.
    Parameters:
        x: a matrix
    '''
    y = x.cpu().detach().numpy()
    y = scipy.linalg.sqrtm(y)
    return torch.Tensor(y.real, device=x.device)
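As a quick sanity check (illustrative only, not part of the graded code), you can confirm that matrix_sqrt really is a matrix square root, unlike the elementwise torch.sqrt:

# For A = [[2, -1], [-1, 2]], matrix_sqrt(A) @ matrix_sqrt(A) recovers A,
# whereas torch.sqrt(A) would only take entrywise square roots (and produce
# nan for the negative entries).
A = torch.Tensor([[2., -1.], [-1., 2.]])
B = matrix_sqrt(A)
print(torch.allclose(B @ B, A, atol=1e-4))  # expected: True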
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: frechet_distance
def frechet_distance(mu_x, mu_y, sigma_x, sigma_y):
    '''
    Function for returning the Fréchet distance between multivariate Gaussians,
    parameterized by their means and covariance matrices.
    Parameters:
        mu_x: the mean of the first Gaussian, (n_features)
        mu_y: the mean of the second Gaussian, (n_features)
        sigma_x: the covariance matrix of the first Gaussian, (n_features, n_features)
        sigma_y: the covariance matrix of the second Gaussian, (n_features, n_features)
    '''
    #### START CODE HERE ####
    return (mu_x - mu_y).dot(mu_x - mu_y) + torch.trace(sigma_x) + torch.trace(sigma_y) - 2 * torch.trace(matrix_sqrt(sigma_x @ sigma_y))
    #### END CODE HERE ####
# UNIT TEST

mean1 = torch.Tensor([0, 0]) # Center the mean at the origin
covariance1 = torch.Tensor( # This matrix shows independence - there are only non-zero values on the diagonal
    [[1, 0],
     [0, 1]]
)
dist1 = MultivariateNormal(mean1, covariance1)

mean2 = torch.Tensor([0, 0]) # Center the mean at the origin
covariance2 = torch.Tensor( # This matrix shows dependence
    [[2, -1],
     [-1, 2]]
)
dist2 = MultivariateNormal(mean2, covariance2)

assert torch.isclose(
    frechet_distance(
        dist1.mean, dist2.mean,
        dist1.covariance_matrix, dist2.covariance_matrix
    ),
    4 - 2 * torch.sqrt(torch.tensor(3.))
)

assert (frechet_distance(
        dist1.mean, dist1.mean,
        dist1.covariance_matrix, dist1.covariance_matrix
    ).item() == 0)

print("Success!")
Success!

Putting it all together!

Now, you can apply FID to your generator from earlier.

You will start by defining a bit of helper code to preprocess the image for the Inception-v3 network:

def preprocess(img):
    img = torch.nn.functional.interpolate(img, size=(299, 299), mode='bilinear', align_corners=False)
    return img

Then, you'll define a function that calculates the covariance of the features, returning a covariance matrix given a batch of feature vectors:

import numpy as np

def get_covariance(features):
    return torch.Tensor(np.cov(features.detach().numpy(), rowvar=False))

Finally, you can use the pre-trained Inception-v3 model to compute features of the real and fake images. With these features, you can then get the covariance and means of these features across many samples.

First, you get the features of the real and fake images using the Inception-v3 model:

fake_features_list = []
real_features_list = []

gen.eval()
n_samples = 512 # The total number of samples
batch_size = 4 # Samples per iteration

dataloader = DataLoader(
    dataset,
    batch_size=batch_size,
    shuffle=True)

cur_samples = 0
with torch.no_grad(): # You don't need to calculate gradients here, so you do this to save memory
    try:
        for real_example, _ in tqdm(dataloader, total=n_samples // batch_size): # Go by batch
            real_samples = real_example
            real_features = inception_model(real_samples.to(device)).detach().to('cpu') # Move features to CPU
            real_features_list.append(real_features)

            fake_samples = get_noise(len(real_example), z_dim).to(device)
            fake_samples = preprocess(gen(fake_samples))
            fake_features = inception_model(fake_samples.to(device)).detach().to('cpu')
            fake_features_list.append(fake_features)

            cur_samples += len(real_samples)
            if cur_samples >= n_samples:
                break
    except:
        print("Error in loop")

Then, you can combine all of the values that you collected for the reals and fakes into large tensors:

# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Needed as is for autograding
fake_features_all = torch.cat(fake_features_list)
real_features_all = torch.cat(real_features_list)

And calculate the covariance and means of these real and fake features:

# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL

# Calculate the covariance matrix for the fake and real features
# and also calculate the means of the feature over the batch (for each feature dimension mean)
#### START CODE HERE ####
mu_fake = fake_features_all.mean(0)
mu_real = real_features_all.mean(0)
sigma_fake = get_covariance(fake_features_all)
sigma_real = get_covariance(real_features_all)
#### END CODE HERE ####
assert tuple(sigma_fake.shape) == (fake_features_all.shape[1], fake_features_all.shape[1])
assert torch.abs(sigma_fake[0, 0] - 2.5e-2) < 1e-2 and torch.abs(sigma_fake[-1, -1] - 5e-2) < 1e-2
assert tuple(sigma_real.shape) == (real_features_all.shape[1], real_features_all.shape[1])
assert torch.abs(sigma_real[0, 0] - 3.5768e-2) < 1e-4 and torch.abs(sigma_real[0, 1] + 5.3236e-4) < 1e-4
assert tuple(mu_fake.shape) == (fake_features_all.shape[1],)
assert tuple(mu_real.shape) == (real_features_all.shape[1],)
assert torch.abs(mu_real[0] - 0.3099) < 0.01 and torch.abs(mu_real[1] - 0.2721) < 0.01
assert torch.abs(mu_fake[0] - 0.37) < 0.05 and torch.abs(mu_real[1] - 0.27) < 0.05
print("Success!")
Success!

At this point, you can also visualize what the pairwise multivariate distributions of the inception features look like!

indices = [2, 4, 5]
fake_dist = MultivariateNormal(mu_fake[indices], sigma_fake[indices][:, indices])
fake_samples = fake_dist.sample((5000,))
real_dist = MultivariateNormal(mu_real[indices], sigma_real[indices][:, indices])
real_samples = real_dist.sample((5000,))

import pandas as pd
df_fake = pd.DataFrame(fake_samples.numpy(), columns=indices)
df_real = pd.DataFrame(real_samples.numpy(), columns=indices)
df_fake["is_real"] = "no"
df_real["is_real"] = "yes"
df = pd.concat([df_fake, df_real])
sns.pairplot(df, plot_kws={'alpha': 0.1}, hue='is_real')
[Output: seaborn pairplot of the selected Inception feature dimensions for real vs. fake samples]

Lastly, you can use your earlier frechet_distance function to calculate the FID and evaluate your GAN. You can see how similar/different the features of the generated images are to the features of the real images. The next cell might take five minutes or so to run in Coursera.

with torch.no_grad():
    print(frechet_distance(mu_real, mu_fake, sigma_real, sigma_fake).item())
136.66064453125

You'll notice this model gets a pretty high FID, likely over 30. Since lower is better, and the best models on CelebA get scores in the single-digits, there's clearly a long way to go with this model. You can use FID to compare different models, as well as different stages of training of the same model.
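If you want to reuse this pipeline to compare models, one option is to wrap the final steps in a small helper. The sketch below is hypothetical (compute_fid is not defined in the assignment) and assumes you have already collected feature tensors of shape (n_samples, n_features) as in the loop above:

# Hypothetical helper (not part of the assignment): compute FID from two feature tensors,
# reusing the get_covariance and frechet_distance functions defined earlier.
def compute_fid(real_features, fake_features):
    mu_real, mu_fake = real_features.mean(0), fake_features.mean(0)
    sigma_real, sigma_fake = get_covariance(real_features), get_covariance(fake_features)
    with torch.no_grad():
        return frechet_distance(mu_real, mu_fake, sigma_real, sigma_fake).item()

# Example usage: collect fake features from two different checkpoints (as in the loop above)
# and compare each against the same real features.
# fid_a = compute_fid(real_features_all, fake_features_from_model_a)
# fid_b = compute_fid(real_features_all, fake_features_from_model_b)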