Path: blob/master/guides/md/int8_quantization_in_keras.md
8-bit Integer Quantization in Keras
Author: Jyotinder Singh
Date created: 2025/10/14
Last modified: 2025/10/14
Description: Complete guide to using INT8 quantization in Keras and KerasHub.
What is INT8 quantization?
Quantization lowers the numerical precision of weights and activations to reduce memory use and often speed up inference, at the cost of a small accuracy drop. Moving from float32 to float16 halves the memory requirements; float32 to INT8 is ~4x smaller (and ~2x vs float16). On hardware with low-precision kernels (e.g., NVIDIA Tensor Cores), this can also improve throughput and latency. Actual gains depend on your backend and device.
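As a back-of-the-envelope illustration of those ratios (illustrative arithmetic, not a measured benchmark):

```python
params = 1_000_000_000            # a hypothetical 1B-parameter model
fp32_gb = params * 4 / 1e9        # ~4.0 GB of weights in float32
fp16_gb = params * 2 / 1e9        # ~2.0 GB in float16
int8_gb = params * 1 / 1e9        # ~1.0 GB in INT8
print(fp32_gb, fp16_gb, int8_gb)
```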
How it works
Quantization maps real values to 8-bit integers with a scale:
Integer domain: [-128, 127] (256 levels).
For a tensor (often per-output-channel for weights) with values w:
Compute a_max = max(abs(w)).
Set scale s = (2 * a_max) / 256.
Quantize: q = clip(round(w / s), -128, 127) (stored as INT8) and keep s.
Inference uses q and s to reconstruct effective weights on the fly (w ≈ s · q) or folds s into the matmul/conv for efficiency.
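As a concrete sketch of this scheme, here is the same recipe in plain NumPy (the weight values are made up for illustration):

```python
import numpy as np

# Example float32 weights (illustrative values).
w = np.array([0.5, -1.2, 3.4, -0.7], dtype=np.float32)

# Scale from the absolute maximum: 256 levels across [-a_max, a_max].
a_max = np.max(np.abs(w))
s = (2.0 * a_max) / 256.0

# Quantize to INT8 and keep the scale for dequantization.
q = np.clip(np.round(w / s), -128, 127).astype(np.int8)

# Reconstruct effective weights at inference time: w ≈ s · q.
w_hat = s * q.astype(np.float32)
print(q)      # [  19  -45  127  -26]
print(w_hat)  # close to the original w
```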
Benefits
Memory / bandwidth bound models: When an implementation spends most of its time on memory I/O, reducing compute time alone does little for overall runtime. INT8 reduces bytes moved by ~4x vs float32, improving cache behavior and reducing memory stalls; this often helps more than adding raw FLOPs.
Compute bound layers on supported hardware: On NVIDIA GPUs, INT8 Tensor Cores speed up matmul/conv, boosting throughput on compute-limited layers.
Accuracy: Many models retain near-FP accuracy with float16; INT8 may introduce a modest drop (often ~1-5% depending on task/model/data). Always validate on your own dataset.
What Keras does in INT8 mode
Mapping: Symmetric, linear quantization with INT8 plus a floating-point scale.
Weights: per-output-channel scales to preserve accuracy.
Activations: dynamic AbsMax scaling computed at runtime.
Graph rewrite: Quantization is applied after weights are trained and built; the graph is rewritten so you can run or save immediately.
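The NumPy sketch below illustrates the difference between per-output-channel weight scales and a dynamic per-tensor activation scale; it is a simplified model of the idea, not the actual Keras kernels (shapes and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32)).astype(np.float32)  # (in_features, out_features)
x = rng.standard_normal((8, 64)).astype(np.float32)   # a batch of activations

# Weights: one scale per output channel (column), computed offline.
w_scales = 2.0 * np.max(np.abs(w), axis=0) / 256.0     # shape (32,)
w_q = np.clip(np.round(w / w_scales), -128, 127).astype(np.int8)

# Activations: a single dynamic AbsMax scale, computed at runtime per batch.
x_scale = 2.0 * np.max(np.abs(x)) / 256.0
x_q = np.clip(np.round(x / x_scale), -128, 127).astype(np.int8)

# Integer matmul, then fold both scales back in to recover float outputs.
y = (x_q.astype(np.int32) @ w_q.astype(np.int32)) * (x_scale * w_scales)
print(np.abs(y - x @ w).max())  # small quantization error
```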
Overview
This guide shows how to use 8-bit integer post-training quantization (PTQ) in Keras:
Quantizing a minimal functional model
We build a small functional model, capture a baseline output, quantize to INT8 in-place, and then compare outputs with an MSE metric.
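A minimal sketch of that workflow (the layer sizes and random data below are illustrative choices, not prescribed by the API):

```python
import numpy as np
import keras
from keras import layers

# Build a small functional model.
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation="relu")(inputs)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs, outputs)

# Capture a baseline output in float32.
x_batch = np.random.rand(16, 32).astype("float32")
y_fp32 = keras.ops.convert_to_numpy(model(x_batch))

# Quantize the model to INT8 in place.
model.quantize("int8")

# Compare outputs with a simple MSE metric.
y_int8 = keras.ops.convert_to_numpy(model(x_batch))
mse = np.mean((y_fp32 - y_int8) ** 2)
print(f"MSE between FP32 and INT8 outputs: {mse:.6f}")
```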
It is evident that the INT8 quantized model produces outputs close to the original FP32 model, as indicated by the low MSE value.
Saving and reloading a quantized model
You can use the standard Keras saving and loading APIs with quantized models. Quantization is preserved when saving to .keras and loading back.
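Continuing the sketch above, saving and reloading goes through the standard APIs:

```python
# Save the quantized model and load it back; INT8 quantization is preserved.
model.save("int8_model.keras")
reloaded = keras.saving.load_model("int8_model.keras")

# Outputs from the reloaded model match the quantized model.
y_reloaded = keras.ops.convert_to_numpy(reloaded(x_batch))
mse = np.mean((y_int8 - y_reloaded) ** 2)
print(f"MSE between INT8 and reloaded outputs: {mse:.6f}")
```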
Quantizing a KerasHub model
All KerasHub models support the .quantize(...) API for post-training quantization, and follow the same workflow as above.
In this example, we will (sketched in code after this list):
Load the gemma3_1b preset from KerasHub
Generate text using both the full-precision and quantized models, and compare outputs.
Save both models to disk and compute storage savings.
Reload the INT8 model and verify output consistency with the original quantized model.
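A hedged sketch of those steps is shown below; the task class name (Gemma3CausalLM), the prompt, the generation length, and the file names are assumptions for illustration, not the guide's exact code:

```python
import os
import keras
import keras_hub

# Load the full-precision model from the gemma3_1b preset.
# (Task class name is an assumption; use the class matching the preset.)
gemma = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_1b")
prompt = "Keras is a"
print(gemma.generate(prompt, max_length=30))

# Save the full-precision model, then quantize to INT8 in place and save again.
gemma.save("gemma3_fp32.keras")
gemma.quantize("int8")
print(gemma.generate(prompt, max_length=30))
gemma.save("gemma3_int8.keras")

# Compare storage savings on disk.
fp32_mb = os.path.getsize("gemma3_fp32.keras") / (1024 ** 2)
int8_mb = os.path.getsize("gemma3_int8.keras") / (1024 ** 2)
print(f"FP32: {fp32_mb:.1f} MB, INT8: {int8_mb:.1f} MB")

# Reload the INT8 model and verify output consistency.
reloaded = keras.saving.load_model("gemma3_int8.keras")
print(reloaded.generate(prompt, max_length=30))
```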
Practical tips
Post-training quantization (PTQ) is a one-time operation; you cannot train a model after quantizing it to INT8.
Always materialize weights before quantization (e.g., build() or a forward pass); see the sketch after this list.
Expect small numerical deltas; quantify with a metric like MSE on a validation batch.
Storage savings are immediate; speedups depend on backend/device kernels.
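For example, at the layer level (a minimal sketch; the layer size and input shape are arbitrary), the weights must exist before they can be rewritten:

```python
import keras
from keras import layers

layer = layers.Dense(16)

# Quantizing before the layer has weights would fail; build it first
# (running a forward pass on real data works just as well).
layer.build((None, 8))
layer.quantize("int8")
```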