A walk through latent space with Stable Diffusion 3
Authors: Hongyu Chiu, Ian Stenbit, fchollet, lukewood
Date created: 2024/11/11
Last modified: 2024/11/11
Description: Explore the latent manifold of Stable Diffusion 3.
Overview
Generative image models learn a "latent manifold" of the visual world: a low-dimensional vector space where each point maps to an image. Going from such a point on the manifold back to a displayable image is called "decoding" -- in the Stable Diffusion model, this is handled by the "decoder" model.
This latent manifold of images is continuous and interpolative, meaning that:
Moving a little on the manifold only changes the corresponding image a little (continuity).
For any two points A and B on the manifold (i.e. any two images), it is possible to move from A to B via a path where each intermediate point is also on the manifold (i.e. is also a valid image). Intermediate points would be called "interpolations" between the two starting images.
Stable Diffusion isn't just an image model, though; it's also a natural language model. It has two latent spaces: the image representation space learned by the encoder used during training, and the prompt latent space, which is learned using a combination of pretraining and training-time fine-tuning.
Latent space walking, or latent space exploration, is the process of sampling a point in latent space and incrementally changing the latent representation. Its most common application is generating animations where each sampled point is fed to the decoder and is stored as a frame in the final animation. For high-quality latent representations, this produces coherent-looking animations. These animations can provide insight into the feature map of the latent space, and can ultimately lead to improvements in the training process. One such GIF is displayed below:
In this guide, we will show how to take advantage of the TextToImage API in KerasHub to perform prompt interpolation and circular walks through Stable Diffusion 3's visual latent manifold, as well as through the text encoder's latent manifold.
This guide assumes the reader has a high-level understanding of Stable Diffusion 3. If you haven't already, you should start by reading the Stable Diffusion 3 in KerasHub guide.
It is also worth noting that the preset "stable_diffusion_3_medium" excludes the T5XXL text encoder, as it requires significantly more GPU memory. The performance degradation is negligible in most cases. The weights, including T5XXL, will be available on KerasHub soon.
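As a reference point, here is a minimal, hedged sketch of how the model might be loaded from that preset, following the pattern of the Stable Diffusion 3 in KerasHub guide; the constructor arguments (such as image_shape) and the dtype handling are assumptions that may differ across KerasHub versions.

```python
import keras
import keras_hub

# Assumed setup, mirroring the Stable Diffusion 3 in KerasHub guide; the exact
# keyword arguments (e.g. `image_shape`) may differ between KerasHub versions.
keras.config.set_dtype_policy("float16")  # reduce memory use on supported GPUs

text_to_image = keras_hub.models.StableDiffusion3TextToImage.from_preset(
    "stable_diffusion_3_medium",
    image_shape=(512, 512, 3),
)

# Quick sanity check: generate a single image from a plain text prompt.
image = text_to_image.generate("a photograph of an astronaut riding a horse")
```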
Let's define some helper functions for this example.
We are going to generate images using custom latents and embeddings, so we need to implement the generate_with_latents_and_embeddings function. Additionally, it is important to compile this function to speed up the generation process.
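The denoising loop itself depends on the backbone's internals, so the body below is left as a placeholder; what this sketch does show is the compilation step, using tf.function or jax.jit depending on the active Keras backend. The function name comes from the text above, but its signature here is an assumption.

```python
import keras

def generate_with_latents_and_embeddings(latents, embeddings):
    # Placeholder: run the diffusion denoising loop with the supplied latents and
    # prompt embeddings, then decode the final latents into images. The real body
    # depends on the Stable Diffusion 3 backbone's internal step functions.
    raise NotImplementedError("Fill in with your model's denoise + decode steps.")

# Compile once so repeated generation calls are fast.
if keras.config.backend() == "tensorflow":
    import tensorflow as tf

    generate_function = tf.function(
        generate_with_latents_and_embeddings, jit_compile=True
    )
elif keras.config.backend() == "jax":
    import jax

    generate_function = jax.jit(generate_with_latents_and_embeddings)
else:
    # The torch backend runs eagerly here; wrap with torch.no_grad() if desired.
    generate_function = generate_with_latents_and_embeddings
```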
Interpolating between text prompts
In Stable Diffusion 3, a text prompt is encoded into multiple vectors, which are then used to guide the diffusion process. These latent encoding vectors have shapes of 154x4096 and 2048 for both the positive and negative prompts - quite large! When we input a text prompt into Stable Diffusion 3, we generate images from a single point on this latent manifold.
To explore more of this manifold, we can interpolate between two text encodings and generate images at those interpolated points:
In this example, we want to use Spherical Linear Interpolation (slerp) instead of simple linear interpolation. Slerp is commonly used in computer graphics to animate rotations smoothly and can also be applied to interpolate between high-dimensional data points, such as latent vectors used in generative models.
The source is from Andrej Karpathy's gist: https://gist.github.com/karpathy/00103b0037c5aaea32fe1da1af553355.
A more detailed explanation of this method can be found at: https://en.wikipedia.org/wiki/Slerp.
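Below is a plain NumPy sketch of slerp in that spirit; the threshold-based fallback to linear interpolation for nearly parallel vectors follows the gist, and the helper name is our own.

```python
import numpy as np

def slerp(v1, v2, num):
    # Spherical linear interpolation between two (possibly multi-dimensional)
    # arrays, returning `num` interpolated points from v1 to v2 inclusive.
    ori_dtype = v1.dtype
    v1, v2 = v1.astype("float32"), v2.astype("float32")

    def interpolate(t, v1, v2, dot_threshold=0.9995):
        norm = np.linalg.norm(v1) * np.linalg.norm(v2)
        dot = np.sum(v1 * v2) / norm
        if np.abs(dot) > dot_threshold:
            # Nearly parallel vectors: fall back to linear interpolation.
            return (1 - t) * v1 + t * v2
        theta_0 = np.arccos(dot)
        sin_theta_0 = np.sin(theta_0)
        theta_t = theta_0 * t
        s0 = np.sin(theta_0 - theta_t) / sin_theta_0
        s1 = np.sin(theta_t) / sin_theta_0
        return s0 * v1 + s1 * v2

    return np.array(
        [interpolate(t, v1, v2) for t in np.linspace(0.0, 1.0, num)], dtype=ori_dtype
    )
```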
Once we've interpolated the encodings, we can generate images from each point. Note that in order to maintain some stability between the resulting images we keep the diffusion latents constant between images.
Now that we've generated some interpolated images, let's take a look at them!
Throughout this tutorial, we're going to export sequences of images as gifs so that they can be easily viewed with some temporal context. For sequences of images where the first and last images don't match conceptually, we rubber-band the gif.
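One simple way to do this is with Pillow; the rubber_band flag appends the reversed frames so the sequence plays forward and then backward. The helper name and defaults are our own.

```python
from PIL import Image

def export_as_gif(filename, images, frames_per_second=10, rubber_band=False):
    # `images` is a list of PIL.Image frames. With `rubber_band=True`, the frames
    # are replayed in reverse (minus the endpoints) so the loop has no visual jump.
    if rubber_band:
        images += images[2:-1][::-1]
    images[0].save(
        filename,
        save_all=True,
        append_images=images[1:],
        duration=1000 // frames_per_second,
        loop=0,
    )
```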
If you're running in Colab, you can view your own GIFs by running:
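For example, with IPython's display utilities (the filename below is a placeholder for whichever GIF you exported):

```python
from IPython.display import Image as IImage

IImage("dog_to_cat.gif")  # placeholder filename; use the GIF you exported above
```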
The results may seem surprising. Generally, interpolating between prompts produces coherent-looking images, and often demonstrates a progressive concept shift between the contents of the two prompts. This is indicative of a high-quality representation space that closely mirrors the natural structure of the visual world.
To best visualize this, we should do a much more fine-grained interpolation, using more steps.
The resulting gif shows a much clearer and more coherent shift between the two prompts. Try out some prompts of your own and experiment!
We can even extend this concept for more than one image. For example, we can interpolate between four prompts:
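A hedged NumPy sketch of the idea: treat the four prompt encodings as the corners of a square and blend across a grid. A simple bilinear blend is used here for brevity; you could equally apply the slerp helper along each edge. The function name and corner ordering are our own.

```python
import numpy as np

def interpolate_grid(corners, grid_size):
    # `corners`: four encodings ordered top-left, top-right, bottom-left,
    # bottom-right. Returns an array of shape (grid_size, grid_size, ...).
    tl, tr, bl, br = [np.asarray(c, dtype="float32") for c in corners]
    rows = []
    for u in np.linspace(0.0, 1.0, grid_size):
        left = (1 - u) * tl + u * bl
        right = (1 - u) * tr + u * br
        rows.append(
            [(1 - v) * left + v * right for v in np.linspace(0.0, 1.0, grid_size)]
        )
    return np.array(rows)
```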
Let's display the resulting images in a grid to make them easier to interpret.
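A small Matplotlib helper (ours, not part of the API) that lays out a batch of generated images in a square grid and saves the figure:

```python
import matplotlib.pyplot as plt

def plot_grid(images, path, grid_size, scale=2):
    # `images`: array of shape (grid_size**2, height, width, 3) with uint8 pixels.
    # Assumes grid_size >= 2 so that `axs` is a 2D array of Axes.
    fig, axs = plt.subplots(
        grid_size, grid_size, figsize=(grid_size * scale, grid_size * scale)
    )
    for ax, image in zip(axs.flatten(), images):
        ax.imshow(image)
        ax.axis("off")
    fig.tight_layout()
    fig.savefig(path)
```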
We can also interpolate while allowing the diffusion latents to vary by dropping the seed parameter:
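In terms of the plumbing, that just means sampling a fresh latent tensor for every interpolation step instead of reusing one fixed tensor. A hypothetical NumPy sketch follows; the latent shape is an assumption, so adjust it for your model.

```python
import numpy as np

interpolation_steps = 64
latent_shape = (1, 64, 64, 16)  # assumed latent shape; adjust for your model
rng = np.random.default_rng()

# One independent latent per interpolation step, so the diffusion noise varies too.
varying_latents = [
    rng.standard_normal(latent_shape).astype("float32")
    for _ in range(interpolation_steps)
]
```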
Next up -- let's go for some walks!
A walk around a text prompt
Our next experiment will be to go for a walk around the latent manifold starting from a point produced by a particular prompt.
Perhaps unsurprisingly, walking too far from the encoder's latent manifold produces images that look incoherent. Try it for yourself by setting your own prompt and adjusting step_size to increase or decrease the magnitude of the walk. Note that when the magnitude of the walk gets large, the walk often leads into areas which produce extremely noisy images.
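A hypothetical NumPy sketch of such a walk: prompt_embedding stands in for your prompt's text encoding (here a random array with the 154x4096 shape mentioned earlier, so the snippet runs on its own), and a single random direction scaled by step_size is applied repeatedly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the encoded prompt; replace with your model's actual text encoding.
prompt_embedding = rng.standard_normal((154, 4096)).astype("float32")

walk_steps = 64
step_size = 0.005

# A fixed random direction, applied cumulatively so the walk drifts steadily
# away from the starting point rather than jittering around it.
delta = rng.standard_normal(prompt_embedding.shape).astype("float32") * step_size
walked_embeddings = [prompt_embedding + i * delta for i in range(walk_steps)]
```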
A circular walk through the diffusion latent space for a single prompt
Our final experiment is to stick to one prompt and explore the variety of images that the diffusion model can produce from that prompt. We do this by controlling the noise that is used to seed the diffusion process.
We create two noise components, x and y, and do a walk from 0 to 2π, summing the cosine of our x component and the sine of our y component to produce noise. Using this approach, the end of our walk arrives at the same noise inputs where we began, so we get a "loopable" result!
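A hedged NumPy sketch of the loop described above; the latent shape is an assumption, and the cosine/sine weighting guarantees that the last latent equals the first, so the resulting GIF loops cleanly.

```python
import numpy as np

latent_shape = (1, 64, 64, 16)  # assumed latent shape; adjust for your model
walk_steps = 64
rng = np.random.default_rng(0)

# Two fixed noise components that span the circular walk.
x = rng.standard_normal(latent_shape).astype("float32")
y = rng.standard_normal(latent_shape).astype("float32")

circular_latents = [
    np.cos(t) * x + np.sin(t) * y
    for t in np.linspace(0.0, 2.0 * np.pi, walk_steps)
]
```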
Experiment with your own prompts and with different values of the parameters!
Conclusion
Stable Diffusion 3 offers a lot more than just single text-to-image generation. Exploring the latent manifold of the text encoder and the latent space of the diffusion model are two fun ways to experience the power of this model, and KerasHub makes it easy!