"""
2
Title: Image Segmentation using Composable Fully-Convolutional Networks
3
Author: [Suvaditya Mukherjee](https://twitter.com/halcyonrayes)
4
Date created: 2023/06/16
5
Last modified: 2023/12/25
6
Description: Using the Fully-Convolutional Network for Image Segmentation.
7
Accelerator: GPU
8
"""
9
10
"""
11
## Introduction
12
13
The following example walks through the steps to implement Fully-Convolutional Networks
14
for Image Segmentation on the Oxford-IIIT Pets dataset.
15
The model was proposed in the paper,
16
[Fully Convolutional Networks for Semantic Segmentation by Long et. al.(2014)](https://arxiv.org/abs/1411.4038).
17
Image segmentation is one of the most common and introductory tasks when it comes to
18
Computer Vision, where we extend the problem of Image Classification from
19
one-label-per-image to a pixel-wise classification problem.
20
In this example, we will assemble the aforementioned Fully-Convolutional Segmentation architecture,
21
capable of performing Image Segmentation.
22
The network extends the pooling layer outputs from the VGG in order to perform upsampling
23
and get a final result. The intermediate outputs coming from the 3rd, 4th and 5th Max-Pooling layers from VGG19 are
24
extracted out and upsampled at different levels and factors to get a final output with the same shape as that
25
of the output, but with the class of each pixel present at each location, instead of pixel intensity values.
26
Different intermediate pool layers are extracted and processed upon for different versions of the network.
27
The FCN architecture has 3 versions of differing quality.
28
29
- FCN-32S
30
- FCN-16S
31
- FCN-8S
32
33
All versions of the model derive their outputs through an iterative processing of
34
successive intermediate pool layers of the main backbone used.
35
A better idea can be gained from the figure below.
36
37
| ![FCN Architecture](https://i.imgur.com/Ttros06.png) |
38
| :--: |
39
| **Diagram 1**: Combined Architecture Versions (Source: Paper) |
40
41
To get a better idea on Image Segmentation or find more pre-trained models, feel free to
42
navigate to the [Hugging Face Image Segmentation Models](https://huggingface.co/models?pipeline_tag=image-segmentation) page,
43
or a [PyImageSearch Blog on Semantic Segmentation](https://pyimagesearch.com/2018/09/03/semantic-segmentation-with-opencv-and-deep-learning/)
44
45
"""
46
47
"""
48
## Setup Imports
49
"""
50
51
import os
52
53
os.environ["KERAS_BACKEND"] = "tensorflow"
54
import keras
55
from keras import ops
56
import tensorflow as tf
57
import matplotlib.pyplot as plt
58
import tensorflow_datasets as tfds
59
import numpy as np
60
61
AUTOTUNE = tf.data.AUTOTUNE
62
63
"""
64
## Set configurations for notebook variables
65
66
We set the required parameters for the experiment.
67
The chosen dataset has a total of 4 classes per image, with regards to the segmentation mask.
68
We also set our hyperparameters in this cell.
69
70
Mixed Precision as an option is also available in systems which support it, to reduce
71
load.
72
This would make most tensors use `16-bit float` values instead of `32-bit float`
73
values, in places where it will not adversely affect computation.
74
This means, during computation, TensorFlow will use `16-bit float` Tensors to increase speed at the cost of precision,
75
while storing the values in their original default `32-bit float` form.
76
"""
77
78
NUM_CLASSES = 4
79
INPUT_HEIGHT = 224
80
INPUT_WIDTH = 224
81
LEARNING_RATE = 1e-3
82
WEIGHT_DECAY = 1e-4
83
EPOCHS = 20
84
BATCH_SIZE = 32
85
MIXED_PRECISION = True
86
SHUFFLE = True
87
88
# Mixed-precision setting
89
if MIXED_PRECISION:
90
policy = keras.mixed_precision.Policy("mixed_float16")
91
keras.mixed_precision.set_global_policy(policy)
92
93
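
"""
As a quick sanity check (a sketch based on the attributes of the policy set above): under
`mixed_float16`, layer computations are expected to run in `float16` while variables are
kept in `float32`.
"""

if MIXED_PRECISION:
    print("Compute dtype:", policy.compute_dtype)  # float16
    print("Variable dtype:", policy.variable_dtype)  # float32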
"""
94
## Load dataset
95
96
We make use of the [Oxford-IIIT Pets dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/)
97
which contains a total of 7,349 samples and their segmentation masks.
98
We have 37 classes, with roughly 200 samples per class.
99
Our training and validation dataset has 3,128 and 552 samples respectively.
100
Aside from this, our test split has a total of 3,669 samples.
101
102
We set a `batch_size` parameter that will batch our samples together, use a `shuffle`
103
parameter to mix our samples together.
104
"""
105
106
(train_ds, valid_ds, test_ds) = tfds.load(
107
"oxford_iiit_pet",
108
split=["train[:85%]", "train[85%:]", "test"],
109
batch_size=BATCH_SIZE,
110
shuffle_files=SHUFFLE,
111
)
112
113
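
"""
Optionally, we can confirm the split sizes from the dataset metadata. As a rough sketch of
what to expect: the `train` split holds 3,680 examples, of which 85% (about 3,128) go to
training and 15% (about 552) to validation, while the `test` split holds 3,669 examples.
"""

ds_info = tfds.builder("oxford_iiit_pet").info
print("Train split size:", ds_info.splits["train"].num_examples)
print("Test split size:", ds_info.splits["test"].num_examples)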
"""
114
## Unpack and preprocess dataset
115
116
We define a simple function that includes performs Resizing over our
117
training, validation and test datasets.
118
We do the same process on the masks as well, to make sure both are aligned in terms of shape and size.
119
"""
120
121
122
# Image and Mask Pre-processing
123
def unpack_resize_data(section):
124
image = section["image"]
125
segmentation_mask = section["segmentation_mask"]
126
127
resize_layer = keras.layers.Resizing(INPUT_HEIGHT, INPUT_WIDTH)
128
129
image = resize_layer(image)
130
segmentation_mask = resize_layer(segmentation_mask)
131
132
return image, segmentation_mask
133
134
135
train_ds = train_ds.map(unpack_resize_data, num_parallel_calls=AUTOTUNE)
136
valid_ds = valid_ds.map(unpack_resize_data, num_parallel_calls=AUTOTUNE)
137
test_ds = test_ds.map(unpack_resize_data, num_parallel_calls=AUTOTUNE)
138
"""
139
## Visualize one random sample from the pre-processed dataset
140
141
We visualize what a random sample in our test split of the dataset looks like, and plot
142
the segmentation mask on top to see the effective mask areas.
143
Note that we have performed pre-processing on this dataset too,
144
which makes the image and mask size same.
145
"""
146
147
# Select random image and mask. Cast to NumPy array
148
# for Matplotlib visualization.
149
150
images, masks = next(iter(test_ds))
151
random_idx = keras.random.uniform([], minval=0, maxval=BATCH_SIZE, seed=10)
152
153
test_image = images[int(random_idx)].numpy().astype("float")
154
test_mask = masks[int(random_idx)].numpy().astype("float")
155
156
# Overlay segmentation mask on top of image.
157
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
158
159
ax[0].set_title("Image")
160
ax[0].imshow(test_image / 255.0)
161
162
ax[1].set_title("Image with segmentation mask overlay")
163
ax[1].imshow(test_image / 255.0)
164
ax[1].imshow(
165
test_mask,
166
cmap="inferno",
167
alpha=0.6,
168
)
169
plt.show()
170
171
"""
172
## Perform VGG-specific pre-processing
173
174
`keras.applications.VGG19` requires the use of a `preprocess_input` function that will
175
pro-actively perform Image-net style Standard Deviation Normalization scheme.
176
"""
177
178
179
def preprocess_data(image, segmentation_mask):
180
image = keras.applications.vgg19.preprocess_input(image)
181
182
return image, segmentation_mask
183
184
185
train_ds = (
186
train_ds.map(preprocess_data, num_parallel_calls=AUTOTUNE)
187
.shuffle(buffer_size=1024)
188
.prefetch(buffer_size=1024)
189
)
190
valid_ds = (
191
valid_ds.map(preprocess_data, num_parallel_calls=AUTOTUNE)
192
.shuffle(buffer_size=1024)
193
.prefetch(buffer_size=1024)
194
)
195
test_ds = (
196
test_ds.map(preprocess_data, num_parallel_calls=AUTOTUNE)
197
.shuffle(buffer_size=1024)
198
.prefetch(buffer_size=1024)
199
)
200
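
"""
For reference, the sketch below approximates what `keras.applications.vgg19.preprocess_input`
does under the hood. The RGB-to-BGR flip and the per-channel mean values are the commonly
cited "caffe"-style ImageNet constants, stated here as an assumption rather than taken from
this example; the helper is illustrative only and is not wired into the pipeline above.
"""


def manual_vgg_preprocess(image):
    # RGB -> BGR, then subtract the per-channel ImageNet means (no scaling).
    image = image[..., ::-1]
    return image - ops.convert_to_tensor([103.939, 116.779, 123.68])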
"""
201
## Model Definition
202
203
The Fully-Convolutional Network boasts a simple architecture composed of only
204
`keras.layers.Conv2D` Layers, `keras.layers.Dense` layers and `keras.layers.Dropout`
205
layers.
206
207
| ![FCN Architecture](https://i.imgur.com/PerTKjf.png) |
208
| :--: |
209
| **Diagram 2**: Generic FCN Forward Pass (Source: Paper)|
210
211
Pixel-wise prediction is performed by having a Softmax Convolutional layer with the same
212
size of the image, such that we can perform direct comparison
213
We can find several important metrics such as Accuracy and Mean-Intersection-over-Union on the network.
214
"""
215
216
"""
217
### Backbone (VGG-19)
218
219
We use the [VGG-19 network](https://keras.io/api/applications/vgg/) as the backbone, as
220
the paper suggests it to be one of the most effective backbones for this network.
221
We extract different outputs from the network by making use of `keras.models.Model`.
222
Following this, we add layers on top to make a network perfectly simulating that of
223
Diagram 1.
224
The backbone's `keras.layers.Dense` layers will be converted to `keras.layers.Conv2D`
225
layers based on the [original Caffe code present here.](https://github.com/linxi159/FCN-caffe/blob/master/pascalcontext-fcn16s/net.py)
226
All 3 networks will share the same backbone weights, but will have differing results
227
based on their extensions.
228
We make the backbone non-trainable to improve training time requirements.
229
It is also noted in the paper that making the network trainable does not yield major benefits.
230
"""
231
232
input_layer = keras.Input(shape=(INPUT_HEIGHT, INPUT_WIDTH, 3))
233
234
# VGG Model backbone with pre-trained ImageNet weights.
235
vgg_model = keras.applications.vgg19.VGG19(include_top=True, weights="imagenet")
236
237
# Extracting different outputs from same model
238
fcn_backbone = keras.models.Model(
239
inputs=vgg_model.layers[1].input,
240
outputs=[
241
vgg_model.get_layer(block_name).output
242
for block_name in ["block3_pool", "block4_pool", "block5_pool"]
243
],
244
)
245
246
# Setting backbone to be non-trainable
247
fcn_backbone.trainable = False
248
249
x = fcn_backbone(input_layer)
250
251
# Converting Dense layers to Conv2D layers
252
units = [4096, 4096]
253
dense_convs = []
254
255
for filter_idx in range(len(units)):
256
dense_conv = keras.layers.Conv2D(
257
filters=units[filter_idx],
258
kernel_size=(7, 7) if filter_idx == 0 else (1, 1),
259
strides=(1, 1),
260
activation="relu",
261
padding="same",
262
use_bias=False,
263
kernel_initializer=keras.initializers.Constant(1.0),
264
)
265
dense_convs.append(dense_conv)
266
dropout_layer = keras.layers.Dropout(0.5)
267
dense_convs.append(dropout_layer)
268
269
dense_convs = keras.Sequential(dense_convs)
270
dense_convs.trainable = False
271
272
x[-1] = dense_convs(x[-1])
273
274
pool3_output, pool4_output, pool5_output = x
275
276
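
"""
As a sanity check (a sketch of the expected shapes), with a `224x224` input the extracted
feature maps should be `28x28` for `block3_pool`, `14x14` for `block4_pool`, and `7x7` for
`block5_pool` (after passing through the converted dense blocks).
"""

print("pool3 output:", pool3_output.shape)
print("pool4 output:", pool4_output.shape)
print("pool5 output:", pool5_output.shape)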
"""
277
### FCN-32S
278
279
We extend the last output, perform a `1x1 Convolution` and perform 2D Bilinear Upsampling
280
by a factor of 32 to get an image of the same size as that of our input.
281
We use a simple `keras.layers.UpSampling2D` layer over a `keras.layers.Conv2DTranspose`
282
since it yields performance benefits from being a deterministic mathematical operation
283
over a Convolutional operation
284
It is also noted in the paper that making the Up-sampling parameters trainable does not yield benefits.
285
Original experiments of the paper used Upsampling as well.
286
"""
287
288
# 1x1 convolution to set channels = number of classes
289
pool5 = keras.layers.Conv2D(
290
filters=NUM_CLASSES,
291
kernel_size=(1, 1),
292
padding="same",
293
strides=(1, 1),
294
activation="relu",
295
)
296
297
# Get Softmax outputs for all classes
298
fcn32s_conv_layer = keras.layers.Conv2D(
299
filters=NUM_CLASSES,
300
kernel_size=(1, 1),
301
activation="softmax",
302
padding="same",
303
strides=(1, 1),
304
)
305
306
# Up-sample to original image size
307
fcn32s_upsampling = keras.layers.UpSampling2D(
308
size=(32, 32),
309
data_format=keras.backend.image_data_format(),
310
interpolation="bilinear",
311
)
312
313
final_fcn32s_pool = pool5(pool5_output)
314
final_fcn32s_output = fcn32s_conv_layer(final_fcn32s_pool)
315
final_fcn32s_output = fcn32s_upsampling(final_fcn32s_output)
316
317
fcn32s_model = keras.Model(inputs=input_layer, outputs=final_fcn32s_output)
318
319
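
"""
Since the `block5_pool` feature map is `7x7`, upsampling by a factor of 32 restores the
original `224x224` spatial resolution with one channel per class. A quick check (a sketch):
"""

# Expected: (None, 224, 224, NUM_CLASSES)
print("FCN-32S output shape:", final_fcn32s_output.shape)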
"""
320
### FCN-16S
321
322
The pooling output from the FCN-32S is extended and added to the 4th-level Pooling output
323
of our backbone.
324
Following this, we upsample by a factor of 16 to get image of the same
325
size as that of our input.
326
"""
327
328
# 1x1 convolution to set channels = number of classes
329
# Followed from the original Caffe implementation
330
pool4 = keras.layers.Conv2D(
331
filters=NUM_CLASSES,
332
kernel_size=(1, 1),
333
padding="same",
334
strides=(1, 1),
335
activation="linear",
336
kernel_initializer=keras.initializers.Zeros(),
337
)(pool4_output)
338
339
# Intermediate up-sample
340
pool5 = keras.layers.UpSampling2D(
341
size=(2, 2),
342
data_format=keras.backend.image_data_format(),
343
interpolation="bilinear",
344
)(final_fcn32s_pool)
345
346
# Get Softmax outputs for all classes
347
fcn16s_conv_layer = keras.layers.Conv2D(
348
filters=NUM_CLASSES,
349
kernel_size=(1, 1),
350
activation="softmax",
351
padding="same",
352
strides=(1, 1),
353
)
354
355
# Up-sample to original image size
356
fcn16s_upsample_layer = keras.layers.UpSampling2D(
357
size=(16, 16),
358
data_format=keras.backend.image_data_format(),
359
interpolation="bilinear",
360
)
361
362
# Add intermediate outputs
363
final_fcn16s_pool = keras.layers.Add()([pool4, pool5])
364
final_fcn16s_output = fcn16s_conv_layer(final_fcn16s_pool)
365
final_fcn16s_output = fcn16s_upsample_layer(final_fcn16s_output)
366
367
fcn16s_model = keras.models.Model(inputs=input_layer, outputs=final_fcn16s_output)
368
369
"""
370
### FCN-8S
371
372
The pooling output from the FCN-16S is extended once more, and added from the 3rd-level
373
Pooling output of our backbone.
374
This result is upsampled by a factor of 8 to get an image of the same size as that of our input.
375
"""
376
377
# 1x1 convolution to set channels = number of classes
378
# Followed from the original Caffe implementation
379
pool3 = keras.layers.Conv2D(
380
filters=NUM_CLASSES,
381
kernel_size=(1, 1),
382
padding="same",
383
strides=(1, 1),
384
activation="linear",
385
kernel_initializer=keras.initializers.Zeros(),
386
)(pool3_output)
387
388
# Intermediate up-sample
389
intermediate_pool_output = keras.layers.UpSampling2D(
390
size=(2, 2),
391
data_format=keras.backend.image_data_format(),
392
interpolation="bilinear",
393
)(final_fcn16s_pool)
394
395
# Get Softmax outputs for all classes
396
fcn8s_conv_layer = keras.layers.Conv2D(
397
filters=NUM_CLASSES,
398
kernel_size=(1, 1),
399
activation="softmax",
400
padding="same",
401
strides=(1, 1),
402
)
403
404
# Up-sample to original image size
405
fcn8s_upsample_layer = keras.layers.UpSampling2D(
406
size=(8, 8),
407
data_format=keras.backend.image_data_format(),
408
interpolation="bilinear",
409
)
410
411
# Add intermediate outputs
412
final_fcn8s_pool = keras.layers.Add()([pool3, intermediate_pool_output])
413
final_fcn8s_output = fcn8s_conv_layer(final_fcn8s_pool)
414
final_fcn8s_output = fcn8s_upsample_layer(final_fcn8s_output)
415
416
fcn8s_model = keras.models.Model(inputs=input_layer, outputs=final_fcn8s_output)
417
418
"""
419
### Load weights into backbone
420
421
It was noted in the paper, as well as through experimentation that extracting the weights
422
of the last 2 Fully-connected Dense layers from the backbone, reshaping the weights to
423
fit that of the `keras.layers.Dense` layers we had previously converted into
424
`keras.layers.Conv2D`, and setting them to it yields far better results and a significant
425
increase in mIOU performance.
426
"""
427
428
# VGG's last 2 layers
429
weights1 = vgg_model.get_layer("fc1").get_weights()[0]
430
weights2 = vgg_model.get_layer("fc2").get_weights()[0]
431
432
weights1 = weights1.reshape(7, 7, 512, 4096)
433
weights2 = weights2.reshape(1, 1, 4096, 4096)
434
435
dense_convs.layers[0].set_weights([weights1])
436
dense_convs.layers[2].set_weights([weights2])
437
438
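
"""
A quick sanity check (a sketch): the reshapes above only change the weight layout, not the
parameter count, since `fc1` maps the flattened `7x7x512` feature map to 4096 units and
`fc2` maps 4096 units to 4096 units.
"""

assert weights1.size == 7 * 7 * 512 * 4096
assert weights2.size == 4096 * 4096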
"""
439
## Training
440
441
The original paper talks about making use of [SGD with Momentum](https://keras.io/api/optimizers/sgd/) as the optimizer of choice.
442
But it was noticed during experimentation that
443
[AdamW](https://keras.io/api/optimizers/adamw/)
444
yielded better results in terms of mIOU and Pixel-wise Accuracy.
445
"""
446
447
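
# For reference only, a sketch of the paper's optimizer choice (the momentum value is an
# assumption here, not taken from this example). This optimizer is not used below.
paper_style_optimizer = keras.optimizers.SGD(
    learning_rate=LEARNING_RATE, momentum=0.9
)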
"""
448
### FCN-32S
449
"""
450
451
fcn32s_optimizer = keras.optimizers.AdamW(
452
learning_rate=LEARNING_RATE, weight_decay=WEIGHT_DECAY
453
)
454
455
fcn32s_loss = keras.losses.SparseCategoricalCrossentropy()
456
457
# Maintain mIOU and Pixel-wise Accuracy as metrics
458
fcn32s_model.compile(
459
optimizer=fcn32s_optimizer,
460
loss=fcn32s_loss,
461
metrics=[
462
keras.metrics.MeanIoU(num_classes=NUM_CLASSES, sparse_y_pred=False),
463
keras.metrics.SparseCategoricalAccuracy(),
464
],
465
)
466
467
fcn32s_history = fcn32s_model.fit(train_ds, epochs=EPOCHS, validation_data=valid_ds)
468
469
"""
470
### FCN-16S
471
"""
472
473
fcn16s_optimizer = keras.optimizers.AdamW(
474
learning_rate=LEARNING_RATE, weight_decay=WEIGHT_DECAY
475
)
476
477
fcn16s_loss = keras.losses.SparseCategoricalCrossentropy()
478
479
# Maintain mIOU and Pixel-wise Accuracy as metrics
480
fcn16s_model.compile(
481
optimizer=fcn16s_optimizer,
482
loss=fcn16s_loss,
483
metrics=[
484
keras.metrics.MeanIoU(num_classes=NUM_CLASSES, sparse_y_pred=False),
485
keras.metrics.SparseCategoricalAccuracy(),
486
],
487
)
488
489
fcn16s_history = fcn16s_model.fit(train_ds, epochs=EPOCHS, validation_data=valid_ds)
490
491
"""
492
### FCN-8S
493
"""
494
495
fcn8s_optimizer = keras.optimizers.AdamW(
496
learning_rate=LEARNING_RATE, weight_decay=WEIGHT_DECAY
497
)
498
499
fcn8s_loss = keras.losses.SparseCategoricalCrossentropy()
500
501
# Maintain mIOU and Pixel-wise Accuracy as metrics
502
fcn8s_model.compile(
503
optimizer=fcn8s_optimizer,
504
loss=fcn8s_loss,
505
metrics=[
506
keras.metrics.MeanIoU(num_classes=NUM_CLASSES, sparse_y_pred=False),
507
keras.metrics.SparseCategoricalAccuracy(),
508
],
509
)
510
511
fcn8s_history = fcn8s_model.fit(train_ds, epochs=EPOCHS, validation_data=valid_ds)
512
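
"""
Optionally, we can also score all three trained models on the held-out test split (a
sketch; the test pipeline prepared above is not otherwise used for evaluation).
"""

for name, model in [
    ("FCN-32S", fcn32s_model),
    ("FCN-16S", fcn16s_model),
    ("FCN-8S", fcn8s_model),
]:
    print(name, model.evaluate(test_ds, verbose=0))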
"""
513
## Visualizations
514
"""
515
516
"""
517
### Plotting metrics for training run
518
519
We perform a comparative study between all 3 versions of the model by tracking training
520
and validation metrics of Accuracy, Loss and Mean IoU.
521
"""
522
523
total_plots = len(fcn32s_history.history)
524
cols = total_plots // 2
525
526
rows = total_plots // cols
527
528
if total_plots % cols != 0:
529
rows += 1
530
531
# Set all history dictionary objects
532
fcn32s_dict = fcn32s_history.history
533
fcn16s_dict = fcn16s_history.history
534
fcn8s_dict = fcn8s_history.history
535
536
pos = range(1, total_plots + 1)
537
plt.figure(figsize=(15, 10))
538
539
for i, ((key_32s, value_32s), (key_16s, value_16s), (key_8s, value_8s)) in enumerate(
540
zip(fcn32s_dict.items(), fcn16s_dict.items(), fcn8s_dict.items())
541
):
542
plt.subplot(rows, cols, pos[i])
543
plt.plot(range(len(value_32s)), value_32s)
544
plt.plot(range(len(value_16s)), value_16s)
545
plt.plot(range(len(value_8s)), value_8s)
546
plt.title(str(key_32s) + " (combined)")
547
plt.legend(["FCN-32S", "FCN-16S", "FCN-8S"])
548
549
plt.show()
550
551
"""
552
### Visualizing predicted segmentation masks
553
554
To understand the results and see them better, we pick a random image from the test
555
dataset and perform inference on it to see the masks generated by each model.
556
Note: For better results, the model must be trained for a higher number of epochs.
557
"""
558
559
images, masks = next(iter(test_ds))
560
random_idx = keras.random.uniform([], minval=0, maxval=BATCH_SIZE, seed=10)
561
562
# Get random test image and mask
563
test_image = images[int(random_idx)].numpy().astype("float")
564
test_mask = masks[int(random_idx)].numpy().astype("float")
565
566
pred_image = ops.expand_dims(test_image, axis=0)
567
pred_image = keras.applications.vgg19.preprocess_input(pred_image)
568
569
# Perform inference on FCN-32S
570
pred_mask_32s = fcn32s_model.predict(pred_image, verbose=0).astype("float")
571
pred_mask_32s = np.argmax(pred_mask_32s, axis=-1)
572
pred_mask_32s = pred_mask_32s[0, ...]
573
574
# Perform inference on FCN-16S
575
pred_mask_16s = fcn16s_model.predict(pred_image, verbose=0).astype("float")
576
pred_mask_16s = np.argmax(pred_mask_16s, axis=-1)
577
pred_mask_16s = pred_mask_16s[0, ...]
578
579
# Perform inference on FCN-8S
580
pred_mask_8s = fcn8s_model.predict(pred_image, verbose=0).astype("float")
581
pred_mask_8s = np.argmax(pred_mask_8s, axis=-1)
582
pred_mask_8s = pred_mask_8s[0, ...]
583
584
# Plot all results
585
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(15, 8))
586
587
fig.delaxes(ax[0, 2])
588
589
ax[0, 0].set_title("Image")
590
ax[0, 0].imshow(test_image / 255.0)
591
592
ax[0, 1].set_title("Image with ground truth overlay")
593
ax[0, 1].imshow(test_image / 255.0)
594
ax[0, 1].imshow(
595
test_mask,
596
cmap="inferno",
597
alpha=0.6,
598
)
599
600
ax[1, 0].set_title("Image with FCN-32S mask overlay")
601
ax[1, 0].imshow(test_image / 255.0)
602
ax[1, 0].imshow(pred_mask_32s, cmap="inferno", alpha=0.6)
603
604
ax[1, 1].set_title("Image with FCN-16S mask overlay")
605
ax[1, 1].imshow(test_image / 255.0)
606
ax[1, 1].imshow(pred_mask_16s, cmap="inferno", alpha=0.6)
607
608
ax[1, 2].set_title("Image with FCN-8S mask overlay")
609
ax[1, 2].imshow(test_image / 255.0)
610
ax[1, 2].imshow(pred_mask_8s, cmap="inferno", alpha=0.6)
611
612
plt.show()
613
614
"""
615
## Conclusion
616
617
The Fully-Convolutional Network is an exceptionally simple network that has yielded
618
strong results in Image Segmentation tasks across different benchmarks.
619
With the advent of better mechanisms like [Attention](https://arxiv.org/abs/1706.03762) as used in
620
[SegFormer](https://arxiv.org/abs/2105.15203) and
621
[DeTR](https://arxiv.org/abs/2005.12872), this model serves as a quick way to iterate and
622
find baselines for this task on unknown data.
623
"""
624
625
"""
626
## Acknowledgements
627
628
I thank [Aritra Roy Gosthipaty](https://twitter.com/ariG23498), [Ayush
629
Thakur](https://twitter.com/ayushthakur0) and [Ritwik
630
Raha](https://twitter.com/ritwik_raha) for giving a preliminary review of the example.
631
I also thank the [Google Developer
632
Experts](https://developers.google.com/community/experts) program.
633
634
"""
635
636