Audio Classification with the STFTSpectrogram layer
Author: Mostafa M. Amin
Date created: 2024/10/04
Last modified: 2024/10/04
Description: Introducing the STFTSpectrogram layer to extract spectrograms for audio classification.
Introduction
Preprocessing audio as spectrograms is an essential step in the vast majority of audio-based applications. Spectrograms, which represent the frequency content of a signal over time, are widely used for this purpose. In this tutorial, we'll demonstrate how to use the STFTSpectrogram layer in Keras to convert raw audio waveforms into spectrograms within the model. We'll then feed these spectrograms into convolutional networks followed by Dense layers to perform audio classification on the ESC-10 dataset.
We will:

- Load the ESC-10 dataset.
- Preprocess the raw audio waveforms and generate spectrograms using STFTSpectrogram.
- Build two models, one using the spectrograms as 1D signals and the other using them as images (2D signals) with a pretrained image model.
- Train and evaluate the models.
Setup
Importing the necessary libraries
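A plausible set of imports for this tutorial (the exact list in the original may differ slightly):

```python
import os

import keras
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.io.wavfile
from keras import layers
from scipy.signal import resample

keras.utils.set_random_seed(41)
```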
Define some variables
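The values below are illustrative assumptions; adjust them to your setup:

```python
BASE_DATA_DIR = "./datasets/esc-50_extracted/ESC-50-master/"  # assumed extraction path
BATCH_SIZE = 16
NUM_CLASSES = 10
EPOCHS = 200
SAMPLE_RATE = 16000
```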
Download and Preprocess the ESC-10 Dataset
We'll use the ESC-10 dataset for Environmental Sound Classification. It consists of five-second .wav files of environmental sounds.
Download and Extract the dataset
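One way to fetch the data is with keras.utils.get_file. ESC-10 ships as part of the ESC-50 repository; the URL and cache paths below are assumptions:

```python
keras.utils.get_file(
    "esc-50.zip",
    "https://github.com/karoldvl/ESC-50/archive/master.zip",  # assumed location of the archive
    cache_dir="./",
    cache_subdir="datasets",
    extract=True,
)
```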
Read the CSV file
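A sketch of reading the metadata with pandas; the column names follow the ESC-50 repository, where the esc10 column marks the ESC-10 subset:

```python
pd_data = pd.read_csv(os.path.join(BASE_DATA_DIR, "meta", "esc50.csv"))
# Keep only the ESC-10 subset of ESC-50.
pd_data = pd_data[pd_data["esc10"]]
# Remap the sparse ESC-50 target ids to contiguous class indices 0..9.
class_map = {t: i for i, t in enumerate(sorted(pd_data["target"].unique()))}
pd_data["target"] = pd_data["target"].map(class_map)
```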
Define functions to read and preprocess the WAV files
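A minimal sketch of such a helper, assuming 16-bit PCM input that we resample to SAMPLE_RATE and normalize to [-1, 1]:

```python
def read_wav_file(path, target_sr=SAMPLE_RATE):
    sr, wav = scipy.io.wavfile.read(os.path.join(BASE_DATA_DIR, "audio", path))
    wav = wav.astype(np.float32) / 32768.0  # normalize 16-bit PCM to [-1, 1]
    num_samples = int(len(wav) * target_sr / sr)  # resample to the target rate
    wav = resample(wav, num_samples)
    return wav[:, None]  # add a trailing channel axis
```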
Create a function that uses the STFTSpectrogram to compute a spectrogram, then plots it.
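A sketch of such a plotting helper; the 20 ms frame length and 5 ms hop are illustrative choices:

```python
def plot_single_spectrogram(sample_wav_data):
    spectrogram = layers.STFTSpectrogram(
        mode="log",
        frame_length=SAMPLE_RATE * 20 // 1000,
        frame_step=SAMPLE_RATE * 5 // 1000,
        fft_length=1024,
        trainable=False,
    )(sample_wav_data[None, ...])[0, ...]

    # Plot with time on the x-axis and frequency on the y-axis.
    plt.imshow(np.asarray(spectrogram).T, origin="lower")
    plt.title("Single-bandwidth spectrogram")
    plt.xlabel("Time")
    plt.ylabel("Frequency")
    plt.show()
```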
Create a function that uses the STFTSpectrogram to compute three spectrograms with multiple bandwidths, then aligns them as the channels of a single image to get a multi-bandwidth spectrogram, and plots it.
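A sketch along the same lines; the 30/40/50 ms frame lengths are illustrative, and the shared 5 ms hop keeps the time axes of the three spectrograms aligned:

```python
def plot_multi_bandwidth_spectrogram(sample_wav_data):
    # Compute three log-spectrograms with different frame lengths (bandwidths)
    # and stack them along the channel axis.
    spectrograms = np.concatenate(
        [
            layers.STFTSpectrogram(
                mode="log",
                frame_length=SAMPLE_RATE * frame_ms // 1000,
                frame_step=SAMPLE_RATE * 5 // 1000,
                fft_length=1024,
                padding="same",
                expand_dims=True,
                trainable=False,
            )(sample_wav_data[None, ...])[0, ...]
            for frame_ms in [30, 40, 50]
        ],
        axis=-1,
    )

    # Normalize each channel to [0, 1] so the stack can be rendered as RGB.
    mn = spectrograms.min(axis=(0, 1), keepdims=True)
    mx = spectrograms.max(axis=(0, 1), keepdims=True)
    spectrograms = (spectrograms - mn) / (mx - mn)

    plt.imshow(np.transpose(spectrograms, (1, 0, 2)), origin="lower")
    plt.title("Multi-bandwidth spectrogram")
    plt.xlabel("Time")
    plt.ylabel("Frequency")
    plt.show()
```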
Demonstrate a sample WAV file.
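For example, picking an arbitrary file from the metadata and plotting its waveform:

```python
sample_index = 53  # arbitrary choice
sample_wav_data = read_wav_file(pd_data["filename"].tolist()[sample_index])
plt.plot(sample_wav_data[:, 0])
plt.show()
```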
Plot a Spectrogram
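Using the helper defined above:

```python
plot_single_spectrogram(sample_wav_data)
```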
Plot a multi-bandwidth spectrogram
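And the three-channel variant:

```python
plot_multi_bandwidth_spectrogram(sample_wav_data)
```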
Define functions to construct a TF Dataset
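A minimal sketch that materializes waveforms and labels as NumPy arrays per cross-validation fold (the ESC-50 CSV provides a fold column); a tf.data pipeline would work equally well:

```python
def read_dataset(df, folds):
    # Gather the waveforms and integer labels belonging to the given folds.
    msk = df["fold"].isin(folds)
    filenames = df["filename"][msk]
    targets = df["target"][msk].values
    waves = np.array([read_wav_file(fn) for fn in filenames], dtype=np.float32)
    return waves, targets
```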
Create the datasets
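The 3/1/1 fold split below is an assumption; ESC-50 ships with five folds:

```python
train_x, train_y = read_dataset(pd_data, [1, 2, 3])
valid_x, valid_y = read_dataset(pd_data, [4])
test_x, test_y = read_dataset(pd_data, [5])
```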
Training the Models
In this tutorial, we demonstrate two different use cases of the STFTSpectrogram layer.

The first model uses a non-trainable STFTSpectrogram layer, so it is intended purely for preprocessing. Additionally, the model operates on 1D signals, hence it makes use of Conv1D layers.

The second model uses a trainable STFTSpectrogram layer with the expand_dims option, which expands the shapes to be compatible with image models.
Create the 1D model
1. Create non-trainable spectrograms, extracting a 1D time signal.
2. Apply Conv1D layers with LayerNormalization, similar to the classic VGG design.
3. Apply global maximum pooling to have a fixed set of features.
4. Add Dense layers to make the final predictions based on the features.
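A sketch of this architecture; the frame sizes, filter counts, and kernel sizes are illustrative assumptions:

```python
model1d = keras.Sequential(
    [
        layers.InputLayer((None, 1)),
        # Non-trainable STFT front-end: 40 ms frames with a 15 ms hop,
        # producing a 1D sequence of log-magnitude spectra.
        layers.STFTSpectrogram(
            mode="log",
            frame_length=SAMPLE_RATE * 40 // 1000,
            frame_step=SAMPLE_RATE * 15 // 1000,
            trainable=False,
        ),
        # VGG-like stacks of Conv1D layers with LayerNormalization.
        layers.Conv1D(64, 64, activation="relu"),
        layers.Conv1D(128, 16, activation="relu"),
        layers.LayerNormalization(),
        layers.MaxPooling1D(4),
        layers.Conv1D(128, 8, activation="relu"),
        layers.Conv1D(256, 8, activation="relu"),
        layers.LayerNormalization(),
        layers.MaxPooling1D(4),
        # Global max pooling yields a fixed-size feature vector.
        layers.GlobalMaxPooling1D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ],
    name="model_1d_non_trainable_stft",
)
model1d.compile(
    optimizer=keras.optimizers.Adam(1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```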
Train the model and restore the best weights.
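One way to "restore the best weights" is an EarlyStopping callback with patience equal to the number of epochs, so it never stops early but rolls back to the best checkpoint (an assumption about the intended setup):

```python
history_model1d = model1d.fit(
    train_x,
    train_y,
    batch_size=BATCH_SIZE,
    validation_data=(valid_x, valid_y),
    epochs=EPOCHS,
    callbacks=[
        keras.callbacks.EarlyStopping(
            monitor="val_loss",
            patience=EPOCHS,
            restore_best_weights=True,
        )
    ],
)
```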
Create the 2D model
1. Create three spectrograms with multiple bandwidths from the raw input.
2. Concatenate the three spectrograms to have three channels.
3. Load MobileNet and set the weights from the weights trained on ImageNet.
4. Apply global maximum pooling to have a fixed set of features.
5. Add Dense layers to make the final predictions based on the features.
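A sketch of the 2D model; the frame lengths, FFT size, and head sizes are illustrative assumptions:

```python
inputs = layers.Input((None, 1))
# Three trainable STFT front-ends with different frame lengths (bandwidths).
# `expand_dims=True` adds a channel axis, and the shared hop size keeps the
# time axes aligned so the outputs can be concatenated into 3 channels.
spectrograms = [
    layers.STFTSpectrogram(
        mode="log",
        frame_length=SAMPLE_RATE * frame_ms // 1000,
        frame_step=SAMPLE_RATE * 15 // 1000,
        fft_length=1024,
        padding="same",
        expand_dims=True,
    )(inputs)
    for frame_ms in [30, 40, 50]
]
x = layers.Concatenate(axis=-1)(spectrograms)

# MobileNet backbone with ImageNet weights; global max pooling produces a
# fixed-size feature vector regardless of the spectrogram dimensions.
x = keras.applications.MobileNet(include_top=False, pooling="max")(x)

x = layers.Dropout(0.5)(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model2d = keras.Model(inputs, outputs, name="model_2d_trainable_stft")
model2d.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```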
Train the model and restore the best weights.
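As with the 1D model, train and keep the best checkpoint:

```python
history_model2d = model2d.fit(
    train_x,
    train_y,
    batch_size=BATCH_SIZE,
    validation_data=(valid_x, valid_y),
    epochs=EPOCHS,
    callbacks=[
        keras.callbacks.EarlyStopping(
            monitor="val_loss",
            patience=EPOCHS,
            restore_best_weights=True,
        )
    ],
)
```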
Plot Training History
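A sketch of comparing the two runs side by side:

```python
plt.figure(figsize=(14, 5))

plt.subplot(1, 2, 1)
plt.plot(history_model1d.history["val_accuracy"], label="1D model")
plt.plot(history_model2d.history["val_accuracy"], label="2D model")
plt.legend()
plt.title("Validation accuracy")

plt.subplot(1, 2, 2)
plt.plot(history_model1d.history["val_loss"], label="1D model")
plt.plot(history_model2d.history["val_loss"], label="2D model")
plt.legend()
plt.title("Validation loss")
plt.show()
```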
Evaluate on Test Data
Running the models on the test set.
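Finally, a sketch of the evaluation:

```python
_, test_acc_1d = model1d.evaluate(test_x, test_y, verbose=0)
_, test_acc_2d = model2d.evaluate(test_x, test_y, verbose=0)
print(f"1D model test accuracy: {test_acc_1d:.2%}")
print(f"2D model test accuracy: {test_acc_2d:.2%}")
```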