"""1Title: A walk through latent space with Stable Diffusion2Authors: Ian Stenbit, [fchollet](https://twitter.com/fchollet), [lukewood](https://twitter.com/luke_wood_ml)3Date created: 2022/09/284Last modified: 2022/09/285Description: Explore the latent manifold of Stable Diffusion.6Accelerator: GPU7"""89"""10## Overview1112Generative image models learn a "latent manifold" of the visual world:13a low-dimensional vector space where each point maps to an image.14Going from such a point on the manifold back to a displayable image15is called "decoding" -- in the Stable Diffusion model, this is handled by16the "decoder" model.17181920This latent manifold of images is continuous and interpolative, meaning that:21221. Moving a little on the manifold only changes the corresponding image a little (continuity).232. For any two points A and B on the manifold (i.e. any two images), it is possible24to move from A to B via a path where each intermediate point is also on the manifold (i.e.25is also a valid image). Intermediate points would be called "interpolations" between26the two starting images.2728Stable Diffusion isn't just an image model, though, it's also a natural language model.29It has two latent spaces: the image representation space learned by the30encoder used during training, and the prompt latent space31which is learned using a combination of pretraining and training-time32fine-tuning.3334_Latent space walking_, or _latent space exploration_, is the process of35sampling a point in latent space and incrementally changing the latent36representation. Its most common application is generating animations37where each sampled point is fed to the decoder and is stored as a38frame in the final animation.39For high-quality latent representations, this produces coherent-looking40animations. These animations can provide insight into the feature map of the41latent space, and can ultimately lead to improvements in the training42process. 
One such GIF is displayed below:

In this guide, we will show how to take advantage of the Stable Diffusion API
in KerasCV to perform prompt interpolation and circular walks through
Stable Diffusion's visual latent manifold, as well as through
the text encoder's latent manifold.

This guide assumes the reader has a
high-level understanding of Stable Diffusion.
If you haven't already, you should start
by reading the [Stable Diffusion Tutorial](https://keras.io/guides/keras_cv/generate_images_with_stable_diffusion/).

To start, we import KerasCV and load up a Stable Diffusion model using the
optimizations discussed in the tutorial
[Generate images with Stable Diffusion](https://keras.io/guides/keras_cv/generate_images_with_stable_diffusion/).
Note that if you are running with an M1 Mac GPU you should not enable mixed precision.
"""

"""shell
pip install keras-cv --upgrade --quiet
"""

import keras_cv
import keras
import matplotlib.pyplot as plt
from keras import ops
import numpy as np
import math
from PIL import Image

# Enable mixed precision
# (only do this if you have a recent NVIDIA GPU)
keras.mixed_precision.set_global_policy("mixed_float16")

# Instantiate the Stable Diffusion model
model = keras_cv.models.StableDiffusion(jit_compile=True)

"""
## Interpolating between text prompts

In Stable Diffusion, a text prompt is first encoded into a vector,
and that encoding is used to guide the diffusion process.
The latent encoding vector has shape
77x768 (that's huge!), and when we give Stable Diffusion a text prompt, we're
generating images from just one such point on the latent manifold.

To explore more of this manifold, we can interpolate between two text encodings
and generate images at those interpolated points:
"""

prompt_1 = "A watercolor painting of a Golden Retriever at the beach"
prompt_2 = "A still life DSLR photo of a bowl of fruit"
interpolation_steps = 5

encoding_1 = ops.squeeze(model.encode_text(prompt_1))
encoding_2 = ops.squeeze(model.encode_text(prompt_2))

interpolated_encodings = ops.linspace(encoding_1, encoding_2, interpolation_steps)

# Show the size of the latent manifold
print(f"Encoding shape: {encoding_1.shape}")

"""
Once we've interpolated the encodings, we can generate images from each point.
Note that in order to maintain some stability between the resulting images we
keep the diffusion noise constant between images.
"""

seed = 12345
noise = keras.random.normal((512 // 8, 512 // 8, 4), seed=seed)

images = model.generate_image(
    interpolated_encodings,
    batch_size=interpolation_steps,
    diffusion_noise=noise,
)

"""
Now that we've generated some interpolated images, let's take a look at them!

Throughout this tutorial, we're going to export sequences of images as gifs so
that they can be easily viewed with some temporal context. For sequences of
images where the first and last images don't match conceptually, we rubber-band
the gif.
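Rubber-banding appends a reversed copy of the inner frames, so the gif plays
forward and then mostly backward before looping. This toy snippet (using
letters in place of image frames) shows the exact frame order produced by the
rubber-banding logic in `export_as_gif` below:

```
frames = list("ABCDE")
frames += frames[2:-1][::-1]  # append the reversed inner frames
print(frames)  # ['A', 'B', 'C', 'D', 'E', 'D', 'C']
```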
If you're running in Colab, you can view your own GIFs by running:

```
from IPython.display import Image as IImage
IImage("doggo-and-fruit-5.gif")
```
"""


def export_as_gif(filename, images, frames_per_second=10, rubber_band=False):
    if rubber_band:
        # Append the reversed inner frames so the gif loops back on itself.
        images += images[2:-1][::-1]
    images[0].save(
        filename,
        save_all=True,
        append_images=images[1:],
        duration=1000 // frames_per_second,
        loop=0,
    )


export_as_gif(
    "doggo-and-fruit-5.gif",
    [Image.fromarray(img) for img in images],
    frames_per_second=2,
    rubber_band=True,
)

"""
The results may seem surprising. Generally, interpolating between prompts
produces coherent-looking images, and often demonstrates a progressive concept
shift between the contents of the two prompts. This is indicative of a
high-quality representation space that closely mirrors the natural structure
of the visual world.

To best visualize this, we should do a much more fine-grained interpolation,
using hundreds of steps. In order to keep batch size small (so that we don't
OOM our GPU), this requires manually batching our interpolated
encodings.
"""

interpolation_steps = 150
batch_size = 3
batches = interpolation_steps // batch_size

interpolated_encodings = ops.linspace(encoding_1, encoding_2, interpolation_steps)
batched_encodings = ops.split(interpolated_encodings, batches)

images = []
for batch in range(batches):
    images += [
        Image.fromarray(img)
        for img in model.generate_image(
            batched_encodings[batch],
            batch_size=batch_size,
            num_steps=25,
            diffusion_noise=noise,
        )
    ]

export_as_gif("doggo-and-fruit-150.gif", images, rubber_band=True)

"""
The resulting gif shows a much clearer and more coherent shift between the two
prompts. Try out some prompts of your own and experiment!

We can even extend this concept to more than two prompts.
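The natural generalization is bilinear interpolation: interpolate along one
axis between two pairs of encodings, then interpolate between the resulting
rows along the other axis. Here is a minimal NumPy sketch of the scheme (the
arrays `a`, `b`, `c`, and `d` stand in for the four corner encodings):

```
import numpy as np

steps = 6
# Hypothetical corner points, with the same shape as a text encoding.
a, b, c, d = [np.random.normal(size=(77, 768)) for _ in range(4)]

edge_1 = np.linspace(a, b, steps)  # shape: (steps, 77, 768)
edge_2 = np.linspace(c, d, steps)  # shape: (steps, 77, 768)
# Interpolating between the two edges yields a (steps x steps) grid of points.
grid = np.linspace(edge_1, edge_2, steps)  # shape: (steps, steps, 77, 768)
```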
For example, we can interpolate between four prompts:
"""

prompt_1 = "A watercolor painting of a Golden Retriever at the beach"
prompt_2 = "A still life DSLR photo of a bowl of fruit"
prompt_3 = "The eiffel tower in the style of starry night"
prompt_4 = "An architectural sketch of a skyscraper"

interpolation_steps = 6
batch_size = 3
batches = (interpolation_steps**2) // batch_size

encoding_1 = ops.squeeze(model.encode_text(prompt_1))
encoding_2 = ops.squeeze(model.encode_text(prompt_2))
encoding_3 = ops.squeeze(model.encode_text(prompt_3))
encoding_4 = ops.squeeze(model.encode_text(prompt_4))

interpolated_encodings = ops.linspace(
    ops.linspace(encoding_1, encoding_2, interpolation_steps),
    ops.linspace(encoding_3, encoding_4, interpolation_steps),
    interpolation_steps,
)
interpolated_encodings = ops.reshape(
    interpolated_encodings, (interpolation_steps**2, 77, 768)
)
batched_encodings = ops.split(interpolated_encodings, batches)

images = []
for batch in range(batches):
    images.append(
        model.generate_image(
            batched_encodings[batch],
            batch_size=batch_size,
            diffusion_noise=noise,
        )
    )


def plot_grid(images, path, grid_size, scale=2):
    fig, axs = plt.subplots(
        grid_size, grid_size, figsize=(grid_size * scale, grid_size * scale)
    )
    fig.tight_layout()
    plt.subplots_adjust(wspace=0, hspace=0)
    plt.axis("off")
    for ax in axs.flat:
        ax.axis("off")

    images = images.astype(int)
    for i in range(min(grid_size * grid_size, len(images))):
        ax = axs.flat[i]
        ax.imshow(images[i].astype("uint8"))
        ax.axis("off")

    for i in range(len(images), grid_size * grid_size):
        axs.flat[i].axis("off")
        axs.flat[i].remove()

    plt.savefig(
        fname=path,
        pad_inches=0,
        bbox_inches="tight",
        transparent=False,
        dpi=60,
    )


images = np.concatenate(images)
plot_grid(images, "4-way-interpolation.jpg", interpolation_steps)

"""
We can also interpolate while allowing diffusion noise to vary by dropping
the `diffusion_noise` parameter:
"""

images = []
for batch in range(batches):
    images.append(model.generate_image(batched_encodings[batch], batch_size=batch_size))

images = np.concatenate(images)
plot_grid(images, "4-way-interpolation-varying-noise.jpg", interpolation_steps)

"""
Next up -- let's go for some walks!

## A walk around a text prompt

Our next experiment will be to go for a walk around the latent manifold
starting from a point produced by a particular prompt.
"""

walk_steps = 150
batch_size = 3
batches = walk_steps // batch_size
step_size = 0.005

encoding = ops.squeeze(
    model.encode_text("The Eiffel Tower in the style of starry night")
)
# Note that (77, 768) is the shape of the text encoding.
delta = ops.ones_like(encoding) * step_size

walked_encodings = []
for step_index in range(walk_steps):
    walked_encodings.append(encoding)
    encoding += delta
walked_encodings = ops.stack(walked_encodings)
batched_encodings = ops.split(walked_encodings, batches)

images = []
for batch in range(batches):
    images += [
        Image.fromarray(img)
        for img in model.generate_image(
            batched_encodings[batch],
            batch_size=batch_size,
            num_steps=25,
            diffusion_noise=noise,
        )
    ]

export_as_gif("eiffel-tower-starry-night.gif", images, rubber_band=True)

"""
Perhaps unsurprisingly, walking too far from the encoder's latent manifold
produces images that look incoherent. Try it for yourself by setting
your own prompt, and adjusting `step_size` to increase or decrease the magnitude
of the walk. Note that when the magnitude of the walk gets large, the walk often
leads into areas which produce extremely noisy images.
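The walk above moves in a single fixed direction (a constant `delta`). One
variation worth trying is a true random walk, where each step moves in a fresh
random direction instead. Here is a sketch of that variant, reusing
`walk_steps` and `step_size` from above; the batching and generation loop
would then proceed exactly as before:

```
encoding = ops.squeeze(
    model.encode_text("The Eiffel Tower in the style of starry night")
)

walked_encodings = []
for step_index in range(walk_steps):
    walked_encodings.append(encoding)
    # Sample a new random direction at every step.
    delta = keras.random.normal(encoding.shape) * step_size
    encoding += delta
walked_encodings = ops.stack(walked_encodings)
```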
## A circular walk through the diffusion noise space for a single prompt

Our final experiment is to stick to one prompt and explore the variety of images
that the diffusion model can produce from that prompt. We do this by controlling
the noise that is used to seed the diffusion process.

We create two noise components, `x` and `y`, and do a walk from 0 to 2π, summing
the cosine of our `x` component and the sine of our `y` component to produce noise.
Using this approach, the end of our walk arrives at the same noise inputs where
we began our walk, so we get a "loopable" result!
"""

prompt = "An oil paintings of cows in a field next to a windmill in Holland"
encoding = ops.squeeze(model.encode_text(prompt))
walk_steps = 150
batch_size = 3
batches = walk_steps // batch_size

walk_noise_x = keras.random.normal(noise.shape, dtype="float64")
walk_noise_y = keras.random.normal(noise.shape, dtype="float64")

walk_scale_x = ops.cos(ops.linspace(0, 2, walk_steps) * math.pi)
walk_scale_y = ops.sin(ops.linspace(0, 2, walk_steps) * math.pi)
noise_x = ops.tensordot(walk_scale_x, walk_noise_x, axes=0)
noise_y = ops.tensordot(walk_scale_y, walk_noise_y, axes=0)
noise = ops.add(noise_x, noise_y)
batched_noise = ops.split(noise, batches)

images = []
for batch in range(batches):
    images += [
        Image.fromarray(img)
        for img in model.generate_image(
            encoding,
            batch_size=batch_size,
            num_steps=25,
            diffusion_noise=batched_noise[batch],
        )
    ]

export_as_gif("cows.gif", images)

"""
Experiment with your own prompts and with different values of
`unconditional_guidance_scale`!

## Conclusion

Stable Diffusion offers a lot more than just single text-to-image generation.
Exploring the latent manifold of the text encoder and the noise space of the
diffusion model are two fun ways to experience the power of this model, and
KerasCV makes it easy!
"""