"""
Title: Learning to Resize in Computer Vision
Author: [Sayak Paul](https://twitter.com/RisingSayak)
Date created: 2021/04/30
Last modified: 2023/12/18
Description: How to optimally learn representations of images for a given resolution.
Accelerator: GPU
"""

"""
It is a common belief that if we constrain vision models to perceive things as humans do,
their performance can be improved. For example, in [this work](https://arxiv.org/abs/1811.12231),
Geirhos et al. showed that vision models pre-trained on the ImageNet-1k dataset are
biased towards texture, whereas human beings mostly rely on shape to recognize what they
see. But does this belief always apply, especially when it comes to improving
the performance of vision models?

It turns out it may not always be the case. When training vision models, it is common to
resize images to a lower dimension ((224 x 224), (299 x 299), etc.) to enable mini-batch
learning and also to keep within compute limitations. We generally use image
resizing methods like **bilinear interpolation** for this step, and the resized images do
not lose much of their perceptual character to the human eye. In
[Learning to Resize Images for Computer Vision Tasks](https://arxiv.org/abs/2103.09950v1), Talebi et al. show
that if we instead optimize the perceptual quality of the images for the vision models
rather than the human eye, their performance can be further improved. They investigate
the following question:

**For a given image resolution and a model, how best to resize the given images?**

As shown in the paper, this idea helps to consistently improve the performance of common
vision models (pre-trained on ImageNet-1k) like DenseNet-121, ResNet-50,
MobileNetV2, and EfficientNets. In this example, we will implement the learnable image
resizing module as proposed in the paper and demonstrate it on the
[Cats and Dogs dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765)
using the [DenseNet-121](https://arxiv.org/abs/1608.06993) architecture.
"""

"""
## Setup
"""

import os

os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import ops
from keras import layers
import tensorflow as tf

import tensorflow_datasets as tfds

tfds.disable_progress_bar()

import matplotlib.pyplot as plt
import numpy as np

"""
## Define hyperparameters
"""

"""
In order to facilitate mini-batch learning, we need to have a fixed shape for the images
inside a given batch. This is why an initial resizing is required. We first resize all
the images to a (300 x 300) shape and then learn their optimal representation for the
(150 x 150) resolution.
"""

INP_SIZE = (300, 300)
TARGET_SIZE = (150, 150)
INTERPOLATION = "bilinear"

AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 64
EPOCHS = 5

"""
In this example, we will use bilinear interpolation, but the learnable image resizer
module is not dependent on any specific interpolation method. We can also use others,
such as bicubic; a short sketch right after the resizer is instantiated below shows how
to build such a variant.
"""

"""
## Load and prepare the dataset

For this example, we will only use 40% of the total training dataset.
"""

train_ds, validation_ds = tfds.load(
    "cats_vs_dogs",
    # Reserve 10% for validation
    split=["train[:40%]", "train[40%:50%]"],
    as_supervised=True,
)


def preprocess_dataset(image, label):
    image = ops.image.resize(image, (INP_SIZE[0], INP_SIZE[1]))
    label = ops.one_hot(label, num_classes=2)
    return (image, label)


train_ds = (
    train_ds.shuffle(BATCH_SIZE * 100)
    .map(preprocess_dataset, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)
validation_ds = (
    validation_ds.map(preprocess_dataset, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

"""
115
## Define the learnable resizer utilities
116
117
The figure below (courtesy: [Learning to Resize Images for Computer Vision Tasks](https://arxiv.org/abs/2103.09950v1))
118
presents the structure of the learnable resizing module:
119
120
![](https://i.ibb.co/gJYtSs0/image.png)
121
"""
122
123
def conv_block(x, filters, kernel_size, strides, activation=layers.LeakyReLU(0.2)):
    x = layers.Conv2D(filters, kernel_size, strides, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    if activation:
        x = activation(x)
    return x


def res_block(x):
    inputs = x
    x = conv_block(x, 16, 3, 1)
    x = conv_block(x, 16, 3, 1, activation=None)
    return layers.Add()([inputs, x])


# Note: you can increase num_res_blocks to more than 1 if needed.


def get_learnable_resizer(filters=16, num_res_blocks=1, interpolation=INTERPOLATION):
    inputs = layers.Input(shape=[None, None, 3])

    # First, perform naive resizing.
    naive_resize = layers.Resizing(*TARGET_SIZE, interpolation=interpolation)(inputs)

    # First convolution block without batch normalization.
    x = layers.Conv2D(filters=filters, kernel_size=7, strides=1, padding="same")(inputs)
    x = layers.LeakyReLU(0.2)(x)

    # Second convolution block with batch normalization.
    x = layers.Conv2D(filters=filters, kernel_size=1, strides=1, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.BatchNormalization()(x)

    # Intermediate resizing as a bottleneck.
    bottleneck = layers.Resizing(*TARGET_SIZE, interpolation=interpolation)(x)

    # Residual passes.
    # First res_block will get bottleneck output as input.
    x = res_block(bottleneck)
    # Remaining res_blocks will get previous res_block output as input.
    for _ in range(num_res_blocks - 1):
        x = res_block(x)

    # Projection.
    x = layers.Conv2D(
        filters=filters, kernel_size=3, strides=1, padding="same", use_bias=False
    )(x)
    x = layers.BatchNormalization()(x)

    # Skip connection.
    x = layers.Add()([bottleneck, x])

    # Final resized image.
    x = layers.Conv2D(filters=3, kernel_size=7, strides=1, padding="same")(x)
    final_resize = layers.Add()([naive_resize, x])

    return keras.Model(inputs, final_resize, name="learnable_resizer")


learnable_resizer = get_learnable_resizer()

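"""
As noted in the hyperparameters section, the resizer module is not tied to bilinear
interpolation. As a minimal sketch (the `bicubic_resizer` below is illustrative only
and is not used in the rest of this example), we could build a bicubic variant with a
deeper residual stack:
"""

# Hypothetical variant: bicubic interpolation and two residual blocks.
bicubic_resizer = get_learnable_resizer(num_res_blocks=2, interpolation="bicubic")
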
"""
## Visualize the outputs of the learnable resizing module

Here, we visualize how the resized images look after being passed through the
random (untrained) weights of the resizer.
"""

sample_images, _ = next(iter(train_ds))


plt.figure(figsize=(16, 10))
for i, image in enumerate(sample_images[:6]):
    image = image / 255

    ax = plt.subplot(3, 4, 2 * i + 1)
    plt.title("Input Image")
    plt.imshow(image.numpy().squeeze())
    plt.axis("off")

    ax = plt.subplot(3, 4, 2 * i + 2)
    resized_image = learnable_resizer(image[None, ...])
    plt.title("Resized Image")
    plt.imshow(resized_image.numpy().squeeze())
    plt.axis("off")

"""
210
## Model building utility
211
"""
212
213
def get_model():
    backbone = keras.applications.DenseNet121(
        weights=None,
        include_top=True,
        classes=2,
        input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 3),
    )
    backbone.trainable = True

    inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
    x = layers.Rescaling(scale=1.0 / 255)(inputs)
    x = learnable_resizer(x)
    outputs = backbone(x)

    return keras.Model(inputs, outputs)


"""
The structure of the learnable image resizer module allows for flexible integration with
different vision models.
"""

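"""
For instance, the same resizer instance can be reused in front of a different backbone.
Below is a minimal sketch pairing it with MobileNetV2 (illustrative only; the
`mobilenet_model` built here is not trained in this example):
"""

# Hypothetical pairing: reuse the learnable resizer with a MobileNetV2 backbone.
mobilenet_backbone = keras.applications.MobileNetV2(
    weights=None,
    include_top=True,
    classes=2,
    input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 3),
)
mobilenet_inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
mn = layers.Rescaling(scale=1.0 / 255)(mobilenet_inputs)
mn = learnable_resizer(mn)
mobilenet_model = keras.Model(mobilenet_inputs, mobilenet_backbone(mn))
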
"""
## Compile and train our model with learnable resizer
"""

model = get_model()
model.compile(
    loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
    optimizer="sgd",
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=validation_ds, epochs=EPOCHS)

"""
## Visualize the outputs of the trained resizer
"""

plt.figure(figsize=(16, 10))
for i, image in enumerate(sample_images[:6]):
    image = image / 255

    ax = plt.subplot(3, 4, 2 * i + 1)
    plt.title("Input Image")
    plt.imshow(image.numpy().squeeze())
    plt.axis("off")

    ax = plt.subplot(3, 4, 2 * i + 2)
    resized_image = learnable_resizer(image[None, ...])
    plt.title("Resized Image")
    # Scale down the trained resizer's outputs so they fall in a displayable range.
    plt.imshow(resized_image.numpy().squeeze() / 10)
    plt.axis("off")

"""
The plot shows that the visuals of the images have improved with training. The following
table shows the benefits of using the resizing module in comparison to using bilinear
interpolation:

|             Model             | Number of parameters (Million) | Top-1 accuracy |
|:-----------------------------:|:------------------------------:|:--------------:|
|  With the learnable resizer   |            7.051717            |     67.67%     |
| Without the learnable resizer |            7.039554            |     60.19%     |

For more details, you can check out [this repository](https://github.com/sayakpaul/Learnable-Image-Resizing).
Note that the above-reported models were trained for 10 epochs on 90% of the training set of
Cats and Dogs, unlike this example. Also, note that the increase in the number of
parameters due to the resizing module is negligible. To ensure that the improvement
in the performance is not due to stochasticity, the models were trained using the same
initial random weights.

Now, a question worth asking here is: _isn't the improved accuracy simply a consequence
of adding more layers (the resizer is a mini network after all) to the model, compared to
the baseline?_

To show that it is not the case, the authors conduct the following experiment:

* Take a pre-trained model trained at some size, say (224 x 224).

* Now, first, use it to infer predictions on images resized to a lower resolution. Record
the performance.

* For the second experiment, plug in the resizer module at the top of the pre-trained
model and warm-start the training. Record the performance. (A minimal sketch of this
setup follows below.)

The authors argue that using the second option is better because it helps the model
learn how to adjust the representations better with respect to the given resolution.
Since the results are purely empirical, a few more experiments, such as analyzing the
cross-channel interaction, would have been even better. It is worth noting that elements
like [Squeeze and Excitation (SE) blocks](https://arxiv.org/abs/1709.01507) and [Global Context (GC) blocks](https://arxiv.org/abs/1904.11492) also add a few
parameters to an existing network, but they are known to help a network process
information in systematic ways to improve the overall performance.
"""

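"""
Below is a minimal sketch of that second, warm-start setting (illustrative only: the
`warm_start_model` is built but not trained here, and the pre-trained backbone is used
without its ImageNet-specific preprocessing):
"""

# Hypothetical warm-start setup: a freshly initialized resizer in front of an
# ImageNet-pre-trained backbone; the whole stack would then be fine-tuned.
pretrained_backbone = keras.applications.DenseNet121(
    weights="imagenet",
    include_top=False,
    pooling="avg",
    input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 3),
)
warm_inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
w = layers.Rescaling(scale=1.0 / 255)(warm_inputs)
w = get_learnable_resizer()(w)
w = pretrained_backbone(w)
warm_outputs = layers.Dense(2, activation="softmax")(w)
warm_start_model = keras.Model(warm_inputs, warm_outputs)
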
"""
## Notes

* To impose shape bias inside the vision models, Geirhos et al. trained them with a
combination of natural and stylized images. It might be interesting to investigate if
this learnable resizing module could achieve something similar, as the outputs seem to
discard the texture information.

* The resizer module can handle arbitrary resolutions and aspect ratios, which is very
important for tasks like object detection and segmentation (see the quick shape check
after these notes).

* There is another closely related topic, ***adaptive image resizing***, that attempts
to resize images/feature maps adaptively during training. [EfficientNetV2](https://arxiv.org/abs/2104.00298)
uses this idea.
"""

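"""
As a quick sanity check for the second note above, we can verify that the resizer maps
an arbitrary input resolution (the (400 x 500) shape below is an arbitrary choice) to
the target one:
"""

# The resizer was built with `shape=[None, None, 3]`, so it accepts arbitrary
# spatial dimensions and always produces the target resolution.
random_image = np.random.rand(1, 400, 500, 3).astype("float32")
print(learnable_resizer(random_image).shape)  # (1, 150, 150, 3)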