Path: blob/main/diffusers_doc/en/pytorch/depth2img.ipynb
Text-guided depth-to-image generation
The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. You can also pass a depth_map to preserve the structure of the initial image. If no depth_map is provided, the pipeline automatically predicts the depth with an integrated depth-estimation model.
Start by creating an instance of the StableDiffusionDepth2ImgPipeline:
In [ ]:
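A minimal sketch of this step, assuming the depth-conditioned Stable Diffusion 2 checkpoint (`stabilityai/stable-diffusion-2-depth`) on the Hub and a CUDA-capable GPU (the code downloads several GB of weights on first run):

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline

# Load the depth-conditioned checkpoint; float16 roughly halves GPU memory use.
pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipeline = pipeline.to("cuda")
```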
Now pass your prompt to the pipeline. You can also pass a negative_prompt
to prevent certain words from guiding how an image is generated:
In [ ]:
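A sketch of the call, assuming `pipeline` is the StableDiffusionDepth2ImgPipeline instance created above; the image URL, prompt, and negative prompt are illustrative placeholders, so substitute your own:

```python
from diffusers.utils import load_image, make_image_grid

# Illustrative input image (an assumption); any RGB image works.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/coffee.png"
init_image = load_image(url)

prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"

# strength controls how much the initial image is altered
# (0.0 = keep it unchanged, 1.0 = ignore it entirely).
image = pipeline(
    prompt=prompt,
    image=init_image,
    negative_prompt=negative_prompt,
    strength=0.7,
).images[0]

# Display input and output side by side.
make_image_grid([init_image, image], rows=1, cols=2)
```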
| Input | Output |
|---|---|
| ![]() | ![]() |