#!/usr/bin/env python
# coding: utf-8

# # Convolutional Neural Networks: Application
#
# Welcome to Course 4's second assignment! In this notebook, you will:
#
# - Create a mood classifier using the TF Keras Sequential API
# - Build a ConvNet to identify sign language digits using the TF Keras Functional API
#
# **After this assignment you will be able to:**
#
# - Build and train a ConvNet in TensorFlow for a __binary__ classification problem
# - Build and train a ConvNet in TensorFlow for a __multiclass__ classification problem
# - Explain different use cases for the Sequential and Functional APIs
#
# To complete this assignment, you should already be familiar with TensorFlow. If you are not, please refer back to the **TensorFlow Tutorial** of the third week of Course 2 ("**Improving Deep Neural Networks**").

# ## Table of Contents
#
# - [1 - Packages](#1)
#     - [1.1 - Load the Data and Split the Data into Train/Test Sets](#1-1)
# - [2 - Layers in TF Keras](#2)
# - [3 - The Sequential API](#3)
#     - [3.1 - Create the Sequential Model](#3-1)
#         - [Exercise 1 - happyModel](#ex-1)
#     - [3.2 - Train and Evaluate the Model](#3-2)
# - [4 - The Functional API](#4)
#     - [4.1 - Load the SIGNS Dataset](#4-1)
#     - [4.2 - Split the Data into Train/Test Sets](#4-2)
#     - [4.3 - Forward Propagation](#4-3)
#         - [Exercise 2 - convolutional_model](#ex-2)
#     - [4.4 - Train the Model](#4-4)
# - [5 - History Object](#5)
# - [6 - Bibliography](#6)

# <a name='1'></a>
# ## 1 - Packages
#
# As usual, begin by loading in the packages.

# In[4]:


import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
from matplotlib.pyplot import imread
import scipy
from PIL import Image
import pandas as pd
import tensorflow as tf
import tensorflow.keras.layers as tfl
from tensorflow.python.framework import ops
from cnn_utils import *
from test_utils import summary, comparator

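# The '%matplotlib inline' magic (rendered as run_line_magic by the notebook-to-script
# export) makes plots display inside the notebook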
get_ipython().run_line_magic('matplotlib', 'inline')
np.random.seed(1)


# <a name='1-1'></a>
# ### 1.1 - Load the Data and Split the Data into Train/Test Sets
#
# You'll be using the Happy House dataset for this part of the assignment, which contains images of people's faces. Your task will be to build a ConvNet that determines whether the people in the images are smiling or not -- because they only get to enter the house if they're smiling!

# In[5]:


X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_happy_dataset()

# Normalize image vectors
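# (pixel intensities are integers in [0, 255]; dividing by 255 scales them to [0, 1],
# which keeps the inputs small and well-conditioned for training)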
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))


# You can display the images contained in the dataset. Images are **64x64** pixels in RGB format (3 channels).

# In[6]:


index = 124
plt.imshow(X_train_orig[index])   # display a sample training image
plt.show()


# <a name='2'></a>
# ## 2 - Layers in TF Keras
#
# In the previous assignment, you created layers manually in numpy. In TF Keras, you don't have to implement layers from scratch; rather, TF Keras has pre-defined layers you can use.
#
# When you create a layer in TF Keras, you are creating a function that takes some input and transforms it into an output you can reuse later. Nice and easy!
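#
# As a quick, ungraded illustration of this idea, the sketch below (using the `tfl`
# shorthand imported above) creates a `Dense` layer and calls it on a tensor; the names
# `dense_example` and `example_out` are just for this demo.

# In[ ]:


dense_example = tfl.Dense(units=3)            # a Dense layer with 3 output units
example_out = dense_example(tf.ones((1, 4)))  # weights are created on this first call
print(example_out.shape)                      # (1, 3)
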
# <a name='3'></a>
# ## 3 - The Sequential API
#
# In the previous assignment, you built helper functions using `numpy` to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. Keras is a high-level abstraction built on top of TensorFlow, which allows for even more simplified and optimized model creation and training.
#
# For the first part of this assignment, you'll create a model using TF Keras' Sequential API, which allows you to build layer by layer, and is ideal for building models where each layer has **exactly one** input tensor and **one** output tensor.
#
# As you'll see, using the Sequential API is simple and straightforward, but is only appropriate for simpler tasks. Later in this notebook you'll spend some time building with a more flexible, powerful alternative: the Functional API.

# <a name='3-1'></a>
# ### 3.1 - Create the Sequential Model
#
# As mentioned earlier, the TensorFlow Keras Sequential API can be used to build simple models with layer operations that proceed in a sequential order.
#
# You can also add layers incrementally to a Sequential model with the `.add()` method, or remove them using the `.pop()` method, much like you would with a regular Python list -- as in the short sketch below.
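#
# A minimal, ungraded sketch (the layer sizes here are arbitrary):

# In[ ]:


sketch_model = tf.keras.Sequential()
sketch_model.add(tfl.Dense(4, input_shape=(8,)))  # append layers one at a time
sketch_model.add(tfl.Dense(1))
sketch_model.pop()                                # remove the last layer, list-style
print(len(sketch_model.layers))                   # 1
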
#
# In fact, you can think of a Sequential model as behaving like a list of layers. Like Python lists, Sequential layers are ordered, and the order in which they are specified matters. If your model is non-linear or contains layers with multiple inputs or outputs, a Sequential model wouldn't be the right choice!
#
# For any layer construction in Keras, you'll need to specify the input shape in advance. This is because in Keras, the shape of the weights is based on the shape of the inputs; the weights themselves are only created when the model first sees some input data. Sequential models can be created by passing a list of layers to the Sequential constructor, as you'll do in the next exercise.
#
# <a name='ex-1'></a>
# ### Exercise 1 - happyModel
#
# Implement the `happyModel` function below to build the following model: `ZEROPAD2D -> CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> FLATTEN -> DENSE`. Refer to [tf.keras.layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) for help.
#
# Also, plug in the following parameters for all the steps:
#
# - [ZeroPadding2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding2D): padding 3, input shape 64 x 64 x 3
# - [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D): 32 filters of size 7x7, stride 1
# - [BatchNormalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization): for axis 3
# - [ReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU)
# - [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D): using default parameters
# - [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the previous output.
# - Fully-connected ([Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer: apply a fully connected layer with 1 neuron and a sigmoid activation.
#
# **Hint:**
#
# Use **tfl** as shorthand for **tensorflow.keras.layers**

# In[102]:


# GRADED FUNCTION: happyModel

def happyModel():
    """
    Implements the forward propagation for the binary classification model:
    ZEROPAD2D -> CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> FLATTEN -> DENSE

    Note that for simplicity and grading purposes, you'll hard-code all the values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    None

    Returns:
    model -- TF Keras model (object containing the information for the entire training process)
    """
    model = tf.keras.Sequential([
        # YOUR CODE STARTS HERE

        ## ZeroPadding2D with padding 3, input shape of 64 x 64 x 3
        tfl.ZeroPadding2D(padding=(3, 3), input_shape=(64, 64, 3), data_format="channels_last"),
        ## Conv2D with 32 7x7 filters and stride of 1
        tfl.Conv2D(32, (7, 7), strides=(1, 1), name='conv0'),
        ## BatchNormalization for axis 3
        tfl.BatchNormalization(axis=3, name='bn0'),
        ## ReLU (all arguments left at their defaults)
        tfl.ReLU(),
        ## Max Pooling 2D with default parameters
        tfl.MaxPooling2D((2, 2), name='max_pool0'),
        ## Flatten layer
        tfl.Flatten(),
        ## Dense layer with 1 unit for output & 'sigmoid' activation
        tfl.Dense(1, activation='sigmoid', name='fc'),

        # YOUR CODE ENDS HERE
    ])

    return model

# In[103]:

happy_model = happyModel()
# Print a summary for each layer
for layer in summary(happy_model):
    print(layer)

output = [['ZeroPadding2D', (None, 70, 70, 3), 0, ((3, 3), (3, 3))],
          ['Conv2D', (None, 64, 64, 32), 4736, 'valid', 'linear', 'GlorotUniform'],
          ['BatchNormalization', (None, 64, 64, 32), 128],
          ['ReLU', (None, 64, 64, 32), 0],
          ['MaxPooling2D', (None, 32, 32, 32), 0, (2, 2), (2, 2), 'valid'],
          ['Flatten', (None, 32768), 0],
          ['Dense', (None, 1), 32769, 'sigmoid']]

comparator(summary(happy_model), output)


# Now that your model is created, you can compile it for training with an optimizer and loss of your choice. When the string `accuracy` is specified as a metric, the type of accuracy used will be automatically converted based on the loss function used. This is one of the many optimizations built into TensorFlow that make your life easier! If you'd like to read more on how `compile` operates, check the docs [here](https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile).

# In[77]:


happy_model.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy'])


# It's time to check your model's parameters with the `.summary()` method. This will display the types of layers you have, the shape of the outputs, and how many parameters are in each layer.

# In[78]:


happy_model.summary()

# <a name='3-2'></a>
# ### 3.2 - Train and Evaluate the Model
#
# After creating the model, compiling it with your choice of optimizer and loss function, and doing a sanity check on its contents, you are now ready to build!
#
# Simply call `.fit()` to train. That's it! No need for mini-batching, saving, or complex backpropagation computations. That's all been done for you. You do have the option to specify the number of epochs or the minibatch size if you like, as done below.

# In[ ]:


happy_model.fit(X_train, Y_train, epochs=10, batch_size=16)


# After that completes, just use `.evaluate()` to evaluate against your test set. This function will print the value of the loss function and the performance metrics specified during the compilation of the model. In this case, the `binary_crossentropy` and the `accuracy`, respectively.

# In[ ]:


happy_model.evaluate(X_test, Y_test)

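# As an ungraded sketch, you can also run the trained model on individual images with
# `.predict()`; since the final layer is a single sigmoid unit, the output is a
# probability in [0, 1] that the person is smiling (the 0.5 threshold is just a convention):

# In[ ]:


pred = happy_model.predict(X_test[:1])
print("smiling" if pred[0, 0] > 0.5 else "not smiling")
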

# Easy, right? But what if you need to build a model with shared layers, branches, or multiple inputs and outputs? This is where Sequential, with its beautifully simple yet limited functionality, won't be able to help you.
#
# Next up: Enter the Functional API, your slightly more complex, highly flexible friend.

# <a name='4'></a>
# ## 4 - The Functional API

# Welcome to the second half of the assignment, where you'll use Keras' flexible [Functional API](https://www.tensorflow.org/guide/keras/functional) to build a ConvNet that can differentiate between 6 sign language digits.
#
# The Functional API can handle models with non-linear topology, shared layers, as well as layers with multiple inputs or outputs. Where the Sequential API requires the model to move in a linear fashion through its layers, the Functional API allows much more flexibility: where Sequential is a straight line, a Functional model is a graph, whose layer nodes can connect in many more ways than one.
#
# In the visual example below, the single possible direction of movement of a Sequential model is shown in contrast to a skip connection, which is just one of the many ways a Functional model can be constructed. A skip connection, as you might have guessed, skips some layer in the network and feeds the output to a later layer in the network. Don't worry, you'll be spending more time with skip connections very soon!

# <img src="images/seq_vs_func.png" style="width:350px;height:200px;">
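
# To make "a Functional model is a graph" concrete, here is a minimal, ungraded sketch of
# a topology Sequential cannot express -- two inputs merged into one output (the layer
# sizes and names are arbitrary, chosen only for this demo):

# In[ ]:


inp_a = tf.keras.Input(shape=(4,))
inp_b = tf.keras.Input(shape=(4,))
merged = tfl.Concatenate()([inp_a, inp_b])   # a node with two incoming edges
out = tfl.Dense(1)(merged)
tiny_graph = tf.keras.Model(inputs=[inp_a, inp_b], outputs=out)
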
# <a name='4-1'></a>
# ### 4.1 - Load the SIGNS Dataset
#
# As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.

# In[79]:


# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_signs_dataset()


# <img src="images/SIGNS.png" style="width:800px;height:300px;">
#
# The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples.

# In[80]:


# Example of an image from the dataset
index = 9
plt.imshow(X_train_orig[index])
print("y = " + str(np.squeeze(Y_train_orig[:, index])))

# <a name='4-2'></a>
# ### 4.2 - Split the Data into Train/Test Sets
#
# In Course 2, you built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
#
# To get started, let's examine the shapes of your data.

# In[81]:


X_train = X_train_orig / 255.
X_test = X_test_orig / 255.
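# convert_to_one_hot (provided by cnn_utils) maps each integer label 0-5 to a one-hot
# column, e.g. label 2 -> [0, 0, 1, 0, 0, 0]; the transpose gives shape (m, 6)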
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))

# <a name='4-3'></a>
# ### 4.3 - Forward Propagation
#
# In TensorFlow, there are built-in functions that implement the convolution steps for you. By now, you should be familiar with how TensorFlow builds computational graphs. In the [Functional API](https://www.tensorflow.org/guide/keras/functional), you create a graph of layers. This is what allows such great flexibility.
#
# However, the following model could also be defined using the Sequential API, since the information flow is on a single line. But don't deviate: what we want you to learn here is to use the Functional API.
#
# Begin building your graph of layers by creating an input node:
#
# - **input_img = tf.keras.Input(shape=input_shape):**
#
# Then, create a new node in the graph of layers by calling a layer on the `input_img` object:
#
# - **tf.keras.layers.Conv2D(filters= ... , kernel_size= ... , padding='same')(input_img):** Read the full documentation on [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D).
#
# - **tf.keras.layers.MaxPool2D(pool_size=(f, f), strides=(s, s), padding='same'):** `MaxPool2D()` downsamples your input using a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, you usually operate on a single example at a time and a single channel at a time. Read the full documentation on [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D).
#
# - **tf.keras.layers.ReLU():** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [ReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU).
#
# - **tf.keras.layers.Flatten()**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector.
#
#     * If a tensor P has the shape (batch_size, h, w, c), it returns a flattened tensor with shape (batch_size, k), where $k = h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension.
#
#     * For example, given a tensor with dimensions [100, 2, 3, 4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4 (see the shape check after this section). You can read the full documentation on [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten).
#
# - **tf.keras.layers.Dense(units= ... , activation='softmax')(F):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense).
#
# In the last function above (`tf.keras.layers.Dense()`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
#
# Lastly, before creating the model, you'll need to define the output using the last of the function's compositions (in this example, a Dense layer):
#
# - **outputs = tf.keras.layers.Dense(units=6, activation='softmax')(F)**
#
#
# #### Window, kernel, filter, pool
#
# The words "kernel" and "filter" are used to refer to the same thing; the `filters` parameter specifies the number of kernels used in a single convolution layer. "Pool" is the name of the operation that takes the max or average value over each window.
#
# This is why the parameter `pool_size` plays the role of `kernel_size`, and you use `(f, f)` to refer to the window size.
#
# Pool size and kernel size refer to the same thing in different objects: the shape of the window where the operation takes place.

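# A quick, ungraded shape check of the two claims above -- `pool_size` sets the pooling
# window, and `Flatten` collapses everything but the batch dimension:

# In[ ]:


P_demo = tfl.MaxPool2D(pool_size=(2, 2))(tf.zeros((100, 4, 6, 4)))
print(P_demo.shape)                  # (100, 2, 3, 4): each 2x2 window -> one value
print(tfl.Flatten()(P_demo).shape)   # (100, 24), where 24 = 2 * 3 * 4
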
# <a name='ex-2'></a>
# ### Exercise 2 - convolutional_model
#
# Implement the `convolutional_model` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> DENSE`. Use the functions above!
#
# Also, plug in the following parameters for all the steps:
#
# - [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D): use 8 filters of size 4 by 4, stride 1, padding "SAME"
# - [ReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU)
# - [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D): use an 8 by 8 filter size and an 8 by 8 stride, padding "SAME"
# - **Conv2D**: use 16 filters of size 2 by 2, stride 1, padding "SAME"
# - **ReLU**
# - **MaxPool2D**: use a 4 by 4 filter size and a 4 by 4 stride, padding "SAME"
# - [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the previous output.
# - Fully-connected ([Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer: apply a fully connected layer with 6 neurons and a softmax activation.

# In[143]:


# GRADED FUNCTION: convolutional_model

def convolutional_model(input_shape):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> DENSE

    Note that for simplicity and grading purposes, you'll hard-code some values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    input_shape -- shape of the images of the dataset, e.g. (64, 64, 3)

    Returns:
    model -- TF Keras model (object containing the information for the entire training process)
    """

    input_img = tf.keras.Input(shape=input_shape)
    ## CONV2D: 8 filters 4x4, stride of 1, padding 'SAME'
    ## RELU
    ## MAXPOOL: window 8x8, stride 8, padding 'SAME'
    ## CONV2D: 16 filters 2x2, stride 1, padding 'SAME'
    ## RELU
    ## MAXPOOL: window 4x4, stride 4, padding 'SAME'
    ## FLATTEN
    ## Dense layer
    ## 6 neurons in output layer. Hint: one of the arguments should be "activation='softmax'"
    # YOUR CODE STARTS HERE

    Z1 = tf.keras.layers.Conv2D(filters=8, kernel_size=(4, 4), strides=(1, 1), padding='same')(input_img)
    A1 = tf.keras.layers.ReLU()(Z1)
    P1 = tf.keras.layers.MaxPool2D(pool_size=(8, 8), strides=(8, 8), padding='same')(A1)
    Z2 = tf.keras.layers.Conv2D(filters=16, kernel_size=(2, 2), strides=(1, 1), padding='same')(P1)
    A2 = tf.keras.layers.ReLU()(Z2)
    P2 = tf.keras.layers.MaxPool2D(pool_size=(4, 4), strides=(4, 4), padding='same')(A2)
    F = tf.keras.layers.Flatten()(P2)
    outputs = tf.keras.layers.Dense(units=6, activation='softmax')(F)

    # YOUR CODE ENDS HERE
    model = tf.keras.Model(inputs=input_img, outputs=outputs)
    return model


# In[144]:


conv_model = convolutional_model((64, 64, 3))
conv_model.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
conv_model.summary()

output = [['InputLayer', [(None, 64, 64, 3)], 0],
          ['Conv2D', (None, 64, 64, 8), 392, 'same', 'linear', 'GlorotUniform'],
          ['ReLU', (None, 64, 64, 8), 0],
          ['MaxPooling2D', (None, 8, 8, 8), 0, (8, 8), (8, 8), 'same'],
          ['Conv2D', (None, 8, 8, 16), 528, 'same', 'linear', 'GlorotUniform'],
          ['ReLU', (None, 8, 8, 16), 0],
          ['MaxPooling2D', (None, 2, 2, 16), 0, (4, 4), (4, 4), 'same'],
          ['Flatten', (None, 64), 0],
          ['Dense', (None, 6), 390, 'softmax']]

comparator(summary(conv_model), output)


# Both the Sequential and Functional APIs return a TF Keras model object. The only difference is how inputs are handled inside the object model!

# <a name='4-4'></a>
# ### 4.4 - Train the Model

# In[ ]:


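# Wrap the numpy arrays in a tf.data.Dataset and batch them into mini-batches of 64;
# .fit() then iterates over these pre-batched datasets once per epoch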
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train)).batch(64)
test_dataset = tf.data.Dataset.from_tensor_slices((X_test, Y_test)).batch(64)
history = conv_model.fit(train_dataset, epochs=100, validation_data=test_dataset)


# <a name='5'></a>
# ## 5 - History Object
#
# The history object is an output of the `.fit()` operation, and provides a record of all the loss and metric values in memory. It's stored as a dictionary that you can retrieve at `history.history`:

# In[ ]:


history.history
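
# For example (ungraded), each entry of the dictionary is a list with one value per epoch:

# In[ ]:


print(history.history['loss'][-1])   # final training loss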

# Now visualize the loss over time using `history.history`:

# In[ ]:


# Each entry of history.history (e.g. "loss") is a list with as many values as epochs
# that the model was trained on.
df_loss_acc = pd.DataFrame(history.history)
df_loss = df_loss_acc[['loss', 'val_loss']].rename(columns={'loss': 'train', 'val_loss': 'validation'})
df_acc = df_loss_acc[['accuracy', 'val_accuracy']].rename(columns={'accuracy': 'train', 'val_accuracy': 'validation'})
df_loss.plot(title='Model loss', figsize=(12, 8)).set(xlabel='Epoch', ylabel='Loss')
df_acc.plot(title='Model Accuracy', figsize=(12, 8)).set(xlabel='Epoch', ylabel='Accuracy')


# **Congratulations**! You've finished the assignment and built two models: one that recognizes smiles, and another that recognizes sign language digits with almost 80% accuracy on the test set. In addition to that, you now also understand the applications of two Keras APIs: Sequential and Functional. Nicely done!
#
# By now, you know a bit about how the Functional API works and may have glimpsed the possibilities. In your next assignment, you'll really get a feel for its power when you get the opportunity to build a very deep ConvNet using ResNets!

# <a name='6'></a>
# ## 6 - Bibliography
#
# You're always encouraged to read the official documentation. To that end, you can find the docs for the Sequential and Functional APIs here:
#
# https://www.tensorflow.org/guide/keras/sequential_model
#
# https://www.tensorflow.org/guide/keras/functional