# Classification using Attention-based Deep Multiple Instance Learning (MIL).

**Author:** Mohamad Jaber<br>
**Date created:** 2021/08/16<br>
**Last modified:** 2021/11/25<br>
**Description:** MIL approach to classify bags of instances and get their individual instance score.
## Introduction

### What is Multiple Instance Learning (MIL)?
Usually, with supervised learning algorithms, the learner receives labels for a set of instances. In the case of MIL, the learner receives labels for a set of bags, each of which contains a set of instances. The bag is labeled positive if it contains at least one positive instance, and negative if it does not contain any.
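As a minimal illustration of this labeling rule, with hypothetical per-instance labels:

```python
# A bag is positive if at least one of its instances is positive.
instance_labels = [0, 0, 1]            # hypothetical instance labels in one bag
bag_label = int(any(instance_labels))  # -> 1: the bag is positive
```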
### Motivation

It is often assumed in image classification tasks that each image clearly represents a class label. In medical imaging (e.g. computational pathology), an entire image may be represented by a single class label (cancerous/non-cancerous), or a region of interest may be given. However, one is often interested in knowing which patterns in the image are actually causing it to belong to that class. In this context, the image(s) will be divided, and the subimages will form the bag of instances.
Therefore, the goals are to:

- Learn a model to predict a class label for a bag of instances.
- Find out which instances within the bag caused a positive class label prediction.
## Implementation

The following steps describe how the model works:

1. The feature extractor layers extract feature embeddings.
2. The embeddings are fed into the MIL attention layer to get the attention scores. The layer is designed as permutation-invariant.
3. Input features and their corresponding attention scores are multiplied together.
4. The resulting output is passed to a softmax function for classification.
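The attention operator code is not reproduced in this extract. As a sketch, an attention-based MIL pooling layer in the spirit of the paper this example follows (Ilse et al., 2018) computes one score per instance, $a_k = \operatorname{softmax}_k\big(\mathbf{w}^\top \tanh(\mathbf{V}\mathbf{h}_k)\big)$, optionally gated by a sigmoid branch. The layer name (`MILAttentionLayer`) and the hyperparameters (`weight_params_dim=256`, gating enabled) are chosen here to reproduce the 33,024 parameters of the `alpha` layer in the model summary below; initializers and other details are assumptions:

```python
import keras
from keras import layers, ops


class MILAttentionLayer(layers.Layer):
    """Sketch of an attention-based MIL pooling layer (Ilse et al., 2018)."""

    def __init__(self, weight_params_dim, use_gated=False, **kwargs):
        super().__init__(**kwargs)
        self.weight_params_dim = weight_params_dim
        self.use_gated = use_gated

    def build(self, input_shape):
        # Called on a list of instance embeddings, each of shape (batch, dim).
        input_dim = input_shape[0][1]
        self.v = self.add_weight(
            shape=(input_dim, self.weight_params_dim), initializer="glorot_uniform", name="v"
        )
        self.w = self.add_weight(
            shape=(self.weight_params_dim, 1), initializer="glorot_uniform", name="w"
        )
        if self.use_gated:
            self.u = self.add_weight(
                shape=(input_dim, self.weight_params_dim), initializer="glorot_uniform", name="u"
            )

    def call(self, inputs):
        # Unnormalized score per instance: w^T tanh(V h_k), optionally gated.
        scores = []
        for h in inputs:
            attention = ops.tanh(ops.matmul(h, self.v))
            if self.use_gated:
                attention = attention * ops.sigmoid(ops.matmul(h, self.u))
            scores.append(ops.matmul(attention, self.w))  # shape (batch, 1)
        # Softmax across the bag makes the operator permutation-invariant
        # and makes the scores sum to 1 within each bag.
        alpha = ops.softmax(ops.concatenate(scores, axis=1), axis=1)
        return [alpha[:, k : k + 1] for k in range(len(inputs))]
```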
## References

- [Attention-based Deep Multiple Instance Learning](https://arxiv.org/abs/1802.04712) (Ilse et al., 2018).
- Some of the attention operator code implementation was inspired by https://github.com/utayao/Atten_Deep_MIL.
- Imbalanced data classification tutorial by TensorFlow.
## Setup
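The setup code is not included in this extract; a minimal import block, consistent with the libraries used throughout the example (Keras with NumPy, Matplotlib for the visualizer, and tqdm for the training progress bar shown at the end), might be:

```python
import numpy as np
import keras
from keras import layers, ops
from matplotlib import pyplot as plt
from tqdm import tqdm
```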
## Create dataset

We will create a set of bags and assign their labels according to their contents. If at least one positive instance is present in a bag, the bag is considered a positive bag. If it does not contain any positive instance, the bag is considered negative.
### Configuration parameters
- `POSITIVE_CLASS`: The desired class to be kept in the positive bag.
- `BAG_COUNT`: The number of training bags.
- `VAL_BAG_COUNT`: The number of validation bags.
- `BAG_SIZE`: The number of instances in a bag.
- `PLOT_SIZE`: The number of bags to plot.
- `ENSEMBLE_AVG_COUNT`: The number of models to create and average together. (Optional: often results in better performance; set to 1 for a single model.)
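The constant definitions themselves are not part of this extract. One plausible configuration, chosen here to be consistent with the model summary below (three 28×28 inputs per bag) and the single `1/1` run in the training progress bar, is:

```python
POSITIVE_CLASS = 1       # hypothetical choice of positive class (an MNIST digit)
BAG_COUNT = 1000         # assumed number of training bags
VAL_BAG_COUNT = 300      # assumed number of validation bags
BAG_SIZE = 3             # matches the three (None, 28, 28) inputs in the summary
PLOT_SIZE = 3            # assumed number of bags to plot
ENSEMBLE_AVG_COUNT = 1   # single model, matching the 1/1 progress bar below
```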
### Prepare bags
Since the attention operator is a permutation-invariant operator, an instance with a positive class label is randomly placed among the instances in the positive bag.
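The bag-creation code is omitted here. A sketch, assuming MNIST digits as instances (consistent with the 28×28 input shapes in the model summary) and the configuration constants above:

```python
import numpy as np
import keras


def create_bags(input_data, input_labels, positive_class, bag_count, instance_count):
    # Normalize input images.
    input_data = np.divide(input_data, 255.0)
    bags, bag_labels = [], []
    count = 0
    for _ in range(bag_count):
        # Draw a random subset of instances; because the attention operator is
        # permutation-invariant, the position of a positive instance is irrelevant.
        index = np.random.choice(input_data.shape[0], instance_count, replace=False)
        instances_data = input_data[index]
        instances_labels = input_labels[index]
        # A bag is positive iff the positive class occurs at least once.
        bag_label = 0
        if positive_class in instances_labels:
            bag_label = 1
            count += 1
        bags.append(instances_data)
        bag_labels.append(np.array([bag_label]))
    print(f"Positive bags: {count}")
    print(f"Negative bags: {bag_count - count}")
    # Reshape to a list of BAG_SIZE arrays, one per model input.
    return list(np.swapaxes(bags, 0, 1)), np.array(bag_labels)


# Hypothetical usage with the MNIST dataset:
(x_train, y_train), (x_val, y_val) = keras.datasets.mnist.load_data()
train_data, train_labels = create_bags(x_train, y_train, POSITIVE_CLASS, BAG_COUNT, BAG_SIZE)
val_data, val_labels = create_bags(x_val, y_val, POSITIVE_CLASS, VAL_BAG_COUNT, BAG_SIZE)
```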
### Visualizer tool

Plot a number of bags (given by `PLOT_SIZE`) with respect to the class. Moreover, once the model has been trained, the class label prediction and the associated instance attention scores for each bag can be displayed.
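The plotting utility itself is not shown; a simplified sketch (the function name and signature are assumptions) could be:

```python
import numpy as np
from matplotlib import pyplot as plt


def plot(data, labels, bag_class, attention_weights=None):
    """Plot up to PLOT_SIZE bags of the requested class ('positive' or 'negative').

    If attention weights are passed (after training), each instance is titled
    with its attention score.
    """
    labels = np.array(labels).reshape(-1)
    target = 1 if bag_class == "positive" else 0
    bag_indices = np.where(labels == target)[0][:PLOT_SIZE]

    for i in bag_indices:
        figure = plt.figure(figsize=(8, 2.5))
        print(f"Bag number: {i}")
        for j in range(BAG_SIZE):
            figure.add_subplot(1, BAG_SIZE, j + 1)
            plt.grid(False)
            if attention_weights is not None:
                plt.title(np.around(attention_weights[i][j], 2))
            plt.imshow(data[j][i])
        plt.show()


# Hypothetical usage before training:
plot(val_data, val_labels, "positive")
plot(val_data, val_labels, "negative")
```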
## Create model

First, we will create some embeddings per instance, invoke the attention operator, and then use the softmax function to output the class probabilities.
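A sketch of such a model, written to be consistent with the summary printed below: shared `Dense(128)`/`Dense(64)` embedding layers across the three instance inputs, a gated attention layer named `alpha`, and a 2-way softmax head. The exact hyperparameters are inferred from the parameter counts in the summary, not taken from the original code:

```python
import keras
from keras import layers


def create_model(instance_shape):
    inputs, embeddings = [], []
    # Shared weights: the same Dense layers embed every instance of the bag.
    shared_dense_1 = layers.Dense(128, activation="relu")
    shared_dense_2 = layers.Dense(64, activation="relu")
    for _ in range(BAG_SIZE):
        inp = layers.Input(instance_shape)
        flattened = layers.Flatten()(inp)
        embedding = shared_dense_2(shared_dense_1(flattened))
        inputs.append(inp)
        embeddings.append(embedding)

    # One attention score per instance (see the MILAttentionLayer sketch above).
    alpha = MILAttentionLayer(weight_params_dim=256, use_gated=True, name="alpha")(embeddings)

    # Multiply each embedding by its attention score, then aggregate.
    multiplied = [layers.multiply([alpha[k], embeddings[k]]) for k in range(BAG_SIZE)]
    concat = layers.concatenate(multiplied, axis=1)

    # Softmax over the two bag classes (negative / positive).
    output = layers.Dense(2, activation="softmax")(concat)
    return keras.Model(inputs, output)
```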
## Class weights

Since this kind of problem can easily turn into an imbalanced data classification problem, class weighting should be considered.

Let's say there are 1000 bags. It can often be the case that ~90% of the bags do not contain any positive label and ~10% do. Such data is referred to as imbalanced data.

Using class weights, the model will tend to give a higher weight to the rare class.
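The weighting helper is not included in this extract; a sketch of standard inverse-frequency weighting (the function name is an assumption) could be:

```python
import numpy as np


def compute_class_weights(labels):
    # Count the number of negative and positive bags.
    negative_count = len(np.where(labels == 0)[0])
    positive_count = len(np.where(labels == 1)[0])
    total_count = negative_count + positive_count

    # Inverse-frequency weighting: the rarer class gets the larger weight.
    # For the 90%/10% split above (900 negative, 100 positive bags),
    # this yields {0: ~0.56, 1: 5.0}.
    return {
        0: (1 / negative_count) * (total_count / 2),
        1: (1 / positive_count) * (total_count / 2),
    }
```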
## Build and train model

The model is built and trained in this section.
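The training code is not part of this extract. Below is a sketch consistent with the outputs that follow: a `model.summary()` call producing the table, and a tqdm-wrapped loop over `ENSEMBLE_AVG_COUNT` models producing the `1/1` progress bar. The checkpoint path, patience, epoch count, and optimizer settings are assumptions:

```python
import keras
from tqdm import tqdm


def train(train_data, train_labels, val_data, val_labels, model):
    # Checkpoint the best weights on validation loss and stop early if it stalls.
    file_path = "/tmp/best_model.weights.h5"  # hypothetical path
    callbacks = [
        keras.callbacks.ModelCheckpoint(
            file_path, monitor="val_loss", save_best_only=True, save_weights_only=True
        ),
        keras.callbacks.EarlyStopping(monitor="val_loss", patience=10),
    ]
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(
        train_data,
        train_labels,
        validation_data=(val_data, val_labels),
        epochs=20,
        class_weight=compute_class_weights(train_labels),
        callbacks=callbacks,
        verbose=0,
    )
    model.load_weights(file_path)
    return model


# Build ENSEMBLE_AVG_COUNT models and train each one.
models = [create_model((28, 28)) for _ in range(ENSEMBLE_AVG_COUNT)]
models[0].summary()  # prints the table below
trained_models = [
    train(train_data, train_labels, val_data, val_labels, model) for model in tqdm(models)
]
```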
Model: "functional_1"
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ Connected to ┃ ┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩ │ input_layer │ (None, 28, 28) │ 0 │ - │ │ (InputLayer) │ │ │ │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ input_layer_1 │ (None, 28, 28) │ 0 │ - │ │ (InputLayer) │ │ │ │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ input_layer_2 │ (None, 28, 28) │ 0 │ - │ │ (InputLayer) │ │ │ │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ flatten (Flatten) │ (None, 784) │ 0 │ input_layer[0][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ flatten_1 (Flatten) │ (None, 784) │ 0 │ input_layer_1[0][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ flatten_2 (Flatten) │ (None, 784) │ 0 │ input_layer_2[0][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ dense (Dense) │ (None, 128) │ 100,480 │ flatten[0][0], │ │ │ │ │ flatten_1[0][0], │ │ │ │ │ flatten_2[0][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ dense_1 (Dense) │ (None, 64) │ 8,256 │ dense[0][0], │ │ │ │ │ dense[1][0], │ │ │ │ │ dense[2][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ alpha │ [(None, 1), │ 33,024 │ dense_1[0][0], │ │ (MILAttentionLayer) │ (None, 1), (None, │ │ dense_1[1][0], │ │ │ 1)] │ │ dense_1[2][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ multiply (Multiply) │ (None, 64) │ 0 │ alpha[0][0], │ │ │ │ │ dense_1[0][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ multiply_1 │ (None, 64) │ 0 │ alpha[0][1], │ │ (Multiply) │ │ │ dense_1[1][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ multiply_2 │ (None, 64) │ 0 │ alpha[0][2], │ │ (Multiply) │ │ │ dense_1[2][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ concatenate │ (None, 192) │ 0 │ multiply[0][0], │ │ (Concatenate) │ │ │ multiply_1[0][0], │ │ │ │ │ multiply_2[0][0] │ ├─────────────────────┼───────────────────┼─────────┼──────────────────────┤ │ dense_2 (Dense) │ (None, 2) │ 386 │ concatenate[0][0] │ └─────────────────────┴───────────────────┴─────────┴──────────────────────┘
Total params: 142,146 (555.26 KB)
Trainable params: 142,146 (555.26 KB)
Non-trainable params: 0 (0.00 B)
```
100%|██████████████████████████████████████████████████████████████████████████████████| 1/1 [00:36<00:00, 36.67s/it]
```