Zero-shot object detection
Traditionally, models used for object detection require labeled image datasets for training, and are limited to detecting the set of classes from the training data.
Zero-shot object detection is a computer vision task that detects objects and their classes in images without any prior training on those specific classes. Zero-shot object detection models receive an image as input, as well as a list of candidate classes, and output bounding boxes and labels for the objects they detect.
[!NOTE] The Hugging Face Hub hosts many such open-vocabulary zero-shot object detectors.
In this guide, you will learn how to use such models:
to detect objects based on text prompts
for batch object detection
for image-guided object detection
Before you begin, make sure you have all the necessary libraries installed:
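A typical setup installs 🤗 Transformers together with a few helpers used in this guide (Pillow for drawing boxes, scikit-image for a bundled sample photo); the exact package list here is a suggestion:

```shell
pip install -q transformers torch pillow scikit-image
```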
Zero-shot object detection pipeline
The simplest way to try out inference with a model is to use it in a pipeline(). Instantiate a pipeline for zero-shot object detection from a checkpoint on the Hugging Face Hub:
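For example, this could look as follows; `google/owlvit-base-patch32` is one checkpoint trained for this task, and any zero-shot object detection checkpoint on the Hub can be used instead:

```python
from transformers import pipeline

# google/owlvit-base-patch32 is one example checkpoint; any
# zero-shot object detection checkpoint on the Hub works here
checkpoint = "google/owlvit-base-patch32"
detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```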
Next, choose an image you'd like to detect objects in. Here we'll use the image of astronaut Eileen Collins that is a part of the NASA Great Images dataset.
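One way to load that photo is through scikit-image, which bundles it as a sample image (an assumption about your setup; any PIL image works just as well):

```python
import numpy as np
from PIL import Image
from skimage import data

# scikit-image ships the NASA photo of astronaut Eileen Collins
image = Image.fromarray(np.uint8(data.astronaut())).convert("RGB")
image
```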

Pass the image and the candidate object labels to look for to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image URL. We also pass text descriptions of all the items we want to query the image for.
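Putting the pieces together, a self-contained sketch (the checkpoint and candidate labels are illustrative choices):

```python
import numpy as np
from PIL import Image
from skimage import data
from transformers import pipeline

detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
image = Image.fromarray(np.uint8(data.astronaut())).convert("RGB")

# free-form text descriptions of the objects to look for
predictions = detector(
    image,
    candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
)
# each prediction is a dict with "score", "label", and "box" keys
predictions
```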
Let's visualize the predictions:
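A sketch of the drawing step using PIL's ImageDraw (self-contained, so it repeats the setup from above):

```python
import numpy as np
from PIL import Image, ImageDraw
from skimage import data
from transformers import pipeline

detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
image = Image.fromarray(np.uint8(data.astronaut())).convert("RGB")
predictions = detector(
    image,
    candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
)

# draw each predicted box and its label/score onto the image
draw = ImageDraw.Draw(image)
for prediction in predictions:
    box = prediction["box"]
    label = prediction["label"]
    score = prediction["score"]

    xmin, ymin, xmax, ymax = box.values()
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text((xmin, ymin), f"{label}: {round(score, 2)}", fill="white")

image
```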

Text-prompted zero-shot object detection by hand
Now that you've seen how to use the zero-shot object detection pipeline, let's replicate the same result manually.
Start by loading the model and associated processor from a checkpoint on the Hugging Face Hub. Here we'll use the same checkpoint as before:
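Assuming the same example checkpoint as in the pipeline section, this could look like:

```python
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

# same example checkpoint as in the pipeline section
checkpoint = "google/owlvit-base-patch32"
model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)
```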
Let's take a different image to switch things up.
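Any image works here; as a stand-in for the original page's photo, the sketch below fetches a sample image from the COCO dataset (the URL choice is an assumption, swap in any image you like):

```python
import requests
from PIL import Image

# stand-in image; replace the URL with any photo you want to query
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
im = Image.open(requests.get(url, stream=True).raw).convert("RGB")
im
```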

Use the processor to prepare the inputs for the model.
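A sketch of that step (checkpoint, image URL, and text queries as assumed above); the processor tokenizes the queries and resizes/normalizes the image in a single call:

```python
import requests
from PIL import Image
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

checkpoint = "google/owlvit-base-patch32"
model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

im = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
).convert("RGB")

# illustrative text queries; tokenized text and pixel values come back together
text_queries = ["hat", "book", "sunglasses", "camera"]
inputs = processor(text=text_queries, images=im, return_tensors="pt")
```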
Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before feeding them to the model, you need to use the post_process_object_detection method to make sure the predicted bounding boxes have the correct coordinates relative to the original image:
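Continuing the manual example (same assumed checkpoint, image, and queries), one way this step could look:

```python
import requests
import torch
from PIL import Image, ImageDraw
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

checkpoint = "google/owlvit-base-patch32"
model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

im = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
).convert("RGB")
text_queries = ["hat", "book", "sunglasses", "camera"]
inputs = processor(text=text_queries, images=im, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# PIL size is (width, height); target_sizes expects (height, width)
target_sizes = torch.tensor([im.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)[0]

draw = ImageDraw.Draw(im)
for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
    xmin, ymin, xmax, ymax = box.tolist()
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text((xmin, ymin), f"{text_queries[label]}: {round(score.item(), 2)}", fill="white")

im
```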

Batch processing
You can pass multiple sets of images and text queries to search for different (or the same) objects in several images. Let's use the astronaut image and the beach image together. For batch processing, pass the text queries as a nested list to the processor, and the images as lists of PIL images, PyTorch tensors, or NumPy arrays.
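A sketch of the batched preprocessing call (images and queries carried over from the earlier steps; the second image is the stand-in COCO photo used above):

```python
import numpy as np
import requests
from PIL import Image
from skimage import data
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")

# the two images from the earlier steps
image = Image.fromarray(np.uint8(data.astronaut())).convert("RGB")
im = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
).convert("RGB")

images = [image, im]
# one list of queries per image, hence the nested list
text_queries = [
    ["human face", "rocket", "nasa badge", "star-spangled banner"],
    ["hat", "book", "sunglasses", "camera"],
]
inputs = processor(text=text_queries, images=images, return_tensors="pt")
```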
Previously for post-processing you passed the single image's size as a tensor, but you can also pass a tuple, or, in the case of several images, a list of tuples. Let's create predictions for the two examples, and visualize the second one (image_idx = 1).
Let's visualize the results:
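Here's how the batched prediction and visualization could look end to end (self-contained, with the same assumed checkpoint, images, and queries):

```python
import numpy as np
import requests
import torch
from PIL import Image, ImageDraw
from skimage import data
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

checkpoint = "google/owlvit-base-patch32"
model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

image = Image.fromarray(np.uint8(data.astronaut())).convert("RGB")
im = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
).convert("RGB")

images = [image, im]
text_queries = [
    ["human face", "rocket", "nasa badge", "star-spangled banner"],
    ["hat", "book", "sunglasses", "camera"],
]
inputs = processor(text=text_queries, images=images, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# a list of (height, width) tuples, one per image
target_sizes = [img.size[::-1] for img in images]
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)

image_idx = 1  # visualize the second image
draw = ImageDraw.Draw(images[image_idx])
for box, score, label in zip(
    results[image_idx]["boxes"], results[image_idx]["scores"], results[image_idx]["labels"]
):
    xmin, ymin, xmax, ymax = box.tolist()
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text(
        (xmin, ymin),
        f"{text_queries[image_idx][label]}: {round(score.item(), 2)}",
        fill="white",
    )

images[image_idx]
```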

Image-guided object detection
With image-guided object detection, you use an example image as the query instead of text to find similar objects in the target image. Unlike text queries, only a single example image is allowed.
Let's take an image with two cats on a couch as a target image, and an image of a single cat as a query:
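One self-contained way to set this up fetches the well-known COCO image of two cats on a couch and, as an assumption, crops a region around one cat to serve as the single-cat query image (the crop coordinates are approximate; any separate single-cat photo works just as well):

```python
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_target = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# approximate crop around the right-hand cat, used as the query image
# (coordinates are an assumption; any single-cat image works)
query_image = image_target.crop((340, 20, 640, 380))
```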
Let's take a quick look at the images:

In the preprocessing step, instead of text queries, you now need to use query_images:
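A sketch of that call, with the target and query images set up as above (the query crop is an assumption):

```python
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_target = Image.open(requests.get(url, stream=True).raw).convert("RGB")
query_image = image_target.crop((340, 20, 640, 380))  # assumed single-cat crop

# query_images takes the place of text in the preprocessing call
inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```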
For predictions, instead of passing the inputs to the model, pass them to image_guided_detection(). Draw the predictions as before, except this time there are no labels.
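Putting it together, this could look as follows (same assumed checkpoint and query crop; note the matching post_process_image_guided_detection call for rescaling boxes):

```python
import requests
import torch
from PIL import Image, ImageDraw
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

checkpoint = "google/owlvit-base-patch32"
model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_target = Image.open(requests.get(url, stream=True).raw).convert("RGB")
query_image = image_target.crop((340, 20, 640, 380))  # assumed single-cat crop

inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")

with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)

target_sizes = torch.tensor([image_target.size[::-1]])
results = processor.post_process_image_guided_detection(
    outputs=outputs, target_sizes=target_sizes
)[0]

draw = ImageDraw.Draw(image_target)
for box, score in zip(results["boxes"], results["scores"]):
    xmin, ymin, xmax, ymax = box.tolist()
    # no labels here: image-guided detection returns only boxes and scores
    draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)

image_target
```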
