Serving TensorFlow models with TFServing
Author: Dimitre Oliveira
Date created: 2023/01/02
Last modified: 2023/01/02
Description: How to serve TensorFlow models with TensorFlow Serving.
Introduction
Once you build a machine learning model, the next step is to serve it. You may want to do that by exposing your model as an endpoint service. There are many frameworks that you can use to do that, but the TensorFlow ecosystem has its own solution called TensorFlow Serving.
From the TensorFlow Serving GitHub page:
TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It deals with the inference aspect of machine learning, taking models after training and managing their lifetimes, providing clients with versioned access via a high-performance, reference-counted lookup table. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data.
To note a few features:
It can serve multiple models, or multiple versions of the same model simultaneously
It exposes both gRPC as well as HTTP inference endpoints
It allows deployment of new model versions without changing any client code
It supports canarying new versions and A/B testing experimental models
It adds minimal latency to inference time due to efficient, low-overhead implementation
It features a scheduler that groups individual inference requests into batches for joint execution on GPU, with configurable latency controls
It supports many servables: TensorFlow models, embeddings, vocabularies, feature transformations, and even non-TensorFlow-based machine learning models
This guide creates a simple MobileNet model using the Keras applications API, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving, rather than the modeling and training in TensorFlow.
Note: you can find a Colab notebook with the full working code at this link.
Dependencies
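The dependency cell is not reproduced here; a minimal set of imports covering the rest of this guide might look like the following (exact package versions are not pinned, and `requests` is assumed to be available for the REST examples):

```python
import json

import numpy as np
import requests
import tensorflow as tf
from tensorflow import keras
```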
Model
Here we load a pre-trained MobileNet from the Keras applications API; this is the model that we are going to serve.
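Loading the pre-trained model is a one-liner with the Keras applications API:

```python
# MobileNet pre-trained on ImageNet; this is the model we are going to serve.
model = keras.applications.MobileNet()
```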
Preprocessing
Most models don't work out of the box on raw data; they usually require some kind of preprocessing step to adjust the data to the model's requirements. In the case of this MobileNet, we can see from its API page that its input images require three basic steps:
Pixel values normalized to the [0, 1] range
Pixel values scaled to the [-1, 1] range
Images with the shape (224, 224, 3), meaning (height, width, channels)
We can do all of that with the following function:
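The function itself is not reproduced here; a minimal sketch that implements the three steps above (the function and argument names are assumptions) could look like this:

```python
def preprocess(image, shape=(224, 224)):
    """Scale, normalize and resize an image to match MobileNet's expected input."""
    image = tf.cast(image, tf.float32) / 255.0  # Pixel values to the [0, 1] range
    image = (image - 0.5) / 0.5                 # Pixel values to the [-1, 1] range
    image = tf.image.resize(image, shape)       # Resize to (224, 224)
    return image
```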
A note regarding preprocessing and postprocessing using the "keras.applications" API
All models that are available in the Keras applications API also provide preprocess_input and decode_predictions functions. Those functions are respectively responsible for the preprocessing and postprocessing of each model, and already contain all the logic necessary for those steps. That is the recommended way to process inputs and outputs when using Keras applications models. For this guide, we are not using them, in order to present the advantages of custom signatures more clearly.
Postprocessing
In the same vein, most models output values that need extra processing to meet the user's requirements. For instance, the user does not want to know the logit value for each class given an image; what the user wants is to know which class the image belongs to. For our model, this translates to the following transformations on top of the model outputs (sketched in code after the list):
Get the index of the class with the highest prediction
Get the name of the class from that index
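A sketch of such a postprocessing step, assuming `class_names` is a list holding the 1,000 ImageNet label strings (the same labels that `decode_predictions` relies on under the hood):

```python
def postprocess(logits, class_names):
    """Turn the model's logits into a single human-readable class name per image."""
    indices = tf.argmax(logits, axis=-1)     # Index of the highest-scoring class
    return tf.gather(class_names, indices)   # Look up the class name for that index
```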
Now let's download a banana picture and see how everything comes together.
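The download cell is not shown above; the sketch below shows how the pieces fit together. The image URL is a placeholder (substitute any banana picture), and `class_names` is the hypothetical label list assumed in the previous section:

```python
image_path = keras.utils.get_file(
    "banana.jpeg", "https://example.com/banana.jpeg"  # hypothetical URL
)
image = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)

batched_image = tf.expand_dims(preprocess(image), axis=0)  # (1, 224, 224, 3) in [-1, 1]
logits = model(batched_image)                              # (1, 1000) logits
print(postprocess(logits, class_names))                    # Human-readable class name
```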
Save the model
To load our trained model into TensorFlow Serving, we first need to save it in SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable", we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
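A sketch of the export step (the base directory and version number are assumptions):

```python
model_dir = "./model"  # Base directory shared by every version of this servable
model_version = 1      # Each version is exported to its own numbered sub-directory
model_export_path = f"{model_dir}/{model_version}"

tf.saved_model.save(model, export_dir=model_export_path)
```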
Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
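In a notebook, the CLI can be invoked with a shell escape (the path comes from the export step above):

```python
!saved_model_cli show --dir {model_export_path} --all
```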
That tells us a lot about our model! For instance, we can see that its inputs have a 4D shape of (-1, 224, 224, 3), which means (batch_size, height, width, channels). Also note that this model requires a specific image shape of (224, 224, 3), which means that we may need to reshape our images before sending them to the model. We can also see that the model's outputs have a (-1, 1000) shape, which are the logits for the 1000 classes of the ImageNet dataset.
This information doesn't tell us everything, like the fact that the pixel values need to be in the [-1, 1] range, but it's a great start.
Serve your model with TensorFlow Serving
Install TFServing
We're preparing to install TensorFlow Serving using Aptitude since this Colab runs in a Debian environment. We'll add the tensorflow-model-server
package to the list of packages that Aptitude knows about. Note that we're running as root.
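The installation cell follows the TensorFlow Serving setup instructions; the commands below are a sketch of those steps, so double-check them against the current install docs before running:

```python
# Add the TensorFlow Serving distribution URI as a package source (running as root).
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
 curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -

# Install the tensorflow-model-server package.
!apt-get update && apt-get install tensorflow-model-server
```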
Note: This example is running TensorFlow Serving natively, but you can also run it in a Docker container, which is one of the easiest ways to get started using TensorFlow Serving.
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads, we can start making inference requests using REST. There are some important parameters:
port: The port that you'll use for gRPC requests.
rest_api_port: The port that you'll use for REST requests.
model_name: You'll use this in the URL of REST requests. It can be anything.
model_base_path: This is the path to the directory where you've saved your model.
Check the TFServing API reference to get all the parameters available.
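The launch cell is not reproduced here; a sketch using the notebook's shell escape is below. The model name `model` and the base path are assumptions — point `--model_base_path` at the absolute path of the directory that holds the numbered version folders:

```python
# Start TensorFlow Serving in the background and send its logs to server.log.
!nohup tensorflow_model_server \
  --port=8500 \
  --rest_api_port=8501 \
  --model_name=model \
  --model_base_path="/absolute/path/to/model" > server.log 2>&1 &
```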
outputs:
Make a request to your model in TensorFlow Serving
Now let's create the JSON object for an inference request, and see how well our model classifies it:
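A sketch of the request payload, following the `instances` format of the TFServing REST API (`batched_image` is the preprocessed banana batch from earlier):

```python
data = json.dumps(
    {
        "signature_name": "serving_default",
        "instances": batched_image.numpy().tolist(),  # (1, 224, 224, 3) preprocessed batch
    }
)
```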
REST API
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it as an example. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
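A sketch of that request, assuming the server was started with `--rest_api_port=8501` and `--model_name=model`:

```python
response = requests.post(
    "http://localhost:8501/v1/models/model:predict",  # No version given -> latest servable
    data=data,
    headers={"content-type": "application/json"},
)
predictions = json.loads(response.text)["predictions"]  # (1, 1000) logits
print(f"Predicted class index: {np.argmax(predictions[0])}")
```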
outputs:
gRPC API
gRPC is based on the Remote Procedure Call (RPC) model and is a technology for implementing RPC APIs that uses HTTP 2.0 as its underlying transport protocol. gRPC is usually preferred for low-latency, highly scalable, and distributed systems. If you want to know more about the REST vs gRPC tradeoffs, check out this article.
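A sketch of the same prediction over gRPC. It relies on the `tensorflow-serving-api` package for the generated stubs; the input key below (`input_1`) is an assumption — use the input name reported by `saved_model_cli` for your model:

```python
import grpc
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")  # gRPC port passed via --port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "model"
request.model_spec.signature_name = "serving_default"
request.inputs["input_1"].CopyFrom(tf.make_tensor_proto(batched_image))  # input_1 is an assumption

result = stub.Predict(request, timeout=10.0)
```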
outputs:
Custom signature
Note that for this model we always need to preprocess and postprocess all samples to get the desired output. This can get quite tricky if you are maintaining and serving several models developed by a large team, and each one of them might require different processing logic.
TensorFlow allows us to customize the model graph to embed all of that processing logic, which makes model serving much easier. There are different ways to achieve this, but since we are going to serve the models using TFServing, we can customize the model graph straight into the serving signature.
We can just use the following code to export the same model with the preprocessing and postprocessing logic already included as the default signature, which allows this model to make predictions on raw data.
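A sketch of that export, reusing the `preprocess`/`postprocess` helpers and the hypothetical `class_names` list from the earlier sections:

```python
@tf.function(input_signature=[tf.TensorSpec(shape=[None, None, None, 3], dtype=tf.float32)])
def serving_fn(image):
    processed_image = preprocess(image)       # Resize and normalize the raw batch
    logits = model(processed_image)           # (batch, 1000) logits
    label = postprocess(logits, class_names)  # Human-readable class names
    return {"label": label}


model_version = 2  # Saved as a new version next to version 1
tf.saved_model.save(
    model,
    export_dir=f"{model_dir}/{model_version}",
    signatures={"serving_default": serving_fn},
)
```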
Note that this model has a different signature: its input is still 4D, but now with a (-1, -1, -1, 3) shape, which means that it supports images of any height and width. Its output also has a different shape; it no longer outputs the 1000-long logits.
We can test the model's predictions using this specific signature with the code below:
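One way to try it locally, before serving it, is to call the signature function directly on the raw (unprocessed) banana image:

```python
batched_raw_image = tf.expand_dims(tf.cast(image, tf.float32), axis=0)  # Raw pixels, any size
print(serving_fn(batched_raw_image))  # {"label": ...}
```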
Prediction using a particular version of the servable
Now let's specify a particular version of our servable. Note that when we saved the model with a custom signature we used a different folder: the first model was saved in folder /1 (version 1), and the one with a custom signature in folder /2 (version 2). By default, TFServing will serve all models that share the same base parent folder.
REST API
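A sketch of the version-specific request: adding /versions/<N> to the URL targets version 2, which expects raw images since the preprocessing now lives inside the signature:

```python
data = json.dumps(
    {
        "signature_name": "serving_default",
        "instances": batched_raw_image.numpy().tolist(),  # Raw image batch
    }
)
response = requests.post(
    "http://localhost:8501/v1/models/model/versions/2:predict",
    data=data,
    headers={"content-type": "application/json"},
)
print(json.loads(response.text)["predictions"])  # Already-postprocessed label
```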
outputs: