5 Best Ways to Export Your TensorFlow Model Using Python

πŸ’‘ Problem Formulation: After training a model with TensorFlow, a common requirement is to make the model available for use in other environments, such as production systems or other development platforms. The goal is to export a TensorFlow model so that it can be easily loaded and used elsewhere. An example of input would be a trained TensorFlow model in Python, and the desired output would be a saved model format suitable for deployment or sharing.

Method 1: Save and Load with TensorFlow’s SavedModel Format

The SavedModel format in TensorFlow is a universal format for saving trained models and can be used across various platforms. It contains a complete TensorFlow program, including weights and computation. The SavedModel format is ideal for serving models with TensorFlow Serving or using them in a different environment from where they were trained.

Here’s an example:

import tensorflow as tf

# Assume 'model' is our trained TensorFlow model.
model = ...

# Save the model to the SavedModel format.
model.save('/tmp/saved_model/')

# Load the model from the SavedModel format.
loaded_model = tf.keras.models.load_model('/tmp/saved_model/')

Output: Model saved to ‘/tmp/saved_model/’ and loaded successfully.

This code demonstrates saving a trained TensorFlow model with model.save() and loading it back with tf.keras.models.load_model(). The path points to a directory in which the model’s graph, weights, and metadata are stored. Once saved, the model can be loaded into a completely fresh Python process.
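The same SavedModel mechanism also works for plain tf.Module objects through the lower-level tf.saved_model API, which is handy when the program is not a Keras model. A minimal sketch (the Doubler module and the /tmp/doubler_savedmodel path are hypothetical, chosen just for illustration):

```python
import numpy as np
import tensorflow as tf

# A trivial hypothetical module: doubles its float input.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return 2.0 * x

module = Doubler()

# Save the module, including the traced computation, as a SavedModel.
tf.saved_model.save(module, '/tmp/doubler_savedmodel')

# Restore it; the restored object is callable via the saved signature.
restored = tf.saved_model.load('/tmp/doubler_savedmodel')
out = restored(tf.constant([1.0, 2.0, 3.0]))
assert np.allclose(out.numpy(), [2.0, 4.0, 6.0])
```

Because the computation itself is serialized, the restored object can run without the original Python class definition.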

Method 2: Export as HDF5 File

The Hierarchical Data Format version 5 (HDF5) is another way to save TensorFlow models. It stores the architecture, weights, and training configuration of the model in a single file. Note that HDF5 stores only the configuration of custom layers, not their code, so any custom classes must be supplied again via custom_objects when the model is loaded.

Here’s an example:

import tensorflow as tf

model = ...

# Save the entire model as an HDF5 file.
model.save('my_model.h5')

# Recreate the exact same model, including weights and optimizer.
loaded_model = tf.keras.models.load_model('my_model.h5')

Output: Model saved to ‘my_model.h5’ and loaded successfully.

The example shows a TensorFlow model being saved as an HDF5 file using model.save('my_model.h5'). The method is straightforward and useful for models that require the encapsulation of the full state, including the optimizer state, in a single file, which can then be loaded with tf.keras.models.load_model().
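When the model contains custom layers, the HDF5 file records only their configuration, so the classes have to be mapped back by name at load time through the custom_objects argument. A small sketch (the Scale layer is a made-up example):

```python
import numpy as np
import tensorflow as tf

# Hypothetical custom layer used to illustrate custom_objects.
class Scale(tf.keras.layers.Layer):
    def call(self, inputs):
        return inputs * 2.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    Scale(),
])

# Save the full model to HDF5; the Scale code itself is not stored.
model.save('custom_model.h5')

# At load time, the custom class must be resolved by name.
loaded = tf.keras.models.load_model(
    'custom_model.h5', custom_objects={'Scale': Scale})

x = np.ones((1, 3), dtype='float32')
assert np.allclose(loaded(x).numpy(), 2.0)
```

Omitting custom_objects here would make load_model fail, since it cannot find the Scale class on its own.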

Method 3: Saving Weights Only

Sometimes models are exported by saving only their weights, using the model’s save_weights() method. This approach is useful when you only need to preserve the learned parameters and not the entire model architecture.

Here’s an example:

import tensorflow as tf

model = ...

# Save only the weights of the model.
model.save_weights('/tmp/my_model_weights')

# Assuming the model architecture is recreated
model = ...

# Load the previously saved weights.
model.load_weights('/tmp/my_model_weights')

Output: Weights saved to ‘/tmp/my_model_weights’ and loaded successfully.

The code snippet illustrates saving only a model’s weights and loading them back with model.load_weights(). The same architecture must already be defined in the environment where the weights are loaded; otherwise the parameters have no layers to attach to.
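That requirement can be made concrete by building the same architecture twice: save the weights from one instance and load them into the other. A minimal sketch (the build_model() helper and the demo.weights.h5 filename are illustrative; an .h5 suffix stores the weights in HDF5 format):

```python
import numpy as np
import tensorflow as tf

def build_model():
    # Hypothetical architecture; it must match between saving and loading.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(3,)),
        tf.keras.layers.Dense(4, activation='relu'),
        tf.keras.layers.Dense(1),
    ])

model = build_model()
model.save_weights('demo.weights.h5')

fresh = build_model()            # same architecture, new random parameters
fresh.load_weights('demo.weights.h5')

x = np.random.rand(2, 3).astype('float32')
# After loading, both models compute identical outputs.
assert np.allclose(model(x).numpy(), fresh(x).numpy(), atol=1e-6)
```

If the two architectures differed in layer shapes, load_weights would raise an error instead of silently producing a broken model.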

Method 4: Using TensorFlow Lite Converter

TensorFlow Lite is a set of tools that helps developers convert their TensorFlow models into a format suitable for deployment on mobile and embedded devices. The TensorFlow Lite Converter transforms a TensorFlow model into TensorFlow Lite’s FlatBuffer format, conventionally stored as a .tflite file.

Here’s an example:

import tensorflow as tf

model = ...

# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model to disk
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

Output: Model saved as ‘model.tflite’.

The provided snippet takes a Keras model, converts it to the TensorFlow Lite format using TFLiteConverter.from_keras_model(), and then saves the converted model to a file. This method is crucial for deploying models on mobile devices or IoT gadgets.
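Once converted, the FlatBuffer can be executed with tf.lite.Interpreter, which mirrors how the model runs on-device. A minimal sketch (the tiny untrained model here is a stand-in for a real trained network):

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny model, converted and then run with the interpreter.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the converted bytes directly; model_path= would read a .tflite file.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 3).astype('float32')
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
result = interpreter.get_tensor(out['index'])
assert result.shape == (1, 2)
```

The interpreter requires explicitly allocating tensors and addressing inputs and outputs by index, which is the price of its small on-device footprint.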

Bonus One-Liner Method 5: Quick Save of Model Weights

In a hurry? Here’s how you can save your TensorFlow model’s weights with a quick one-liner.

Here’s an example:

model.save_weights('my_model_weights.h5')
Output: Weights saved as ‘my_model_weights.h5’.

For a quick save when you just need the model weights and nothing else, use model.save_weights('my_model_weights.h5'). This method requires you to define the model architecture before you can load these weights.


Summary

  • Method 1: SavedModel Format. Best for cross-platform compatibility. Can be larger in size since it saves the entire model.
  • Method 2: HDF5 File. Stores the entire model state, including the optimizer, in a single portable file. Custom layers must be re-registered via custom_objects when loading.
  • Method 3: Weights Only. Useful for saving storage space. Requires redefining the model when loading again.
  • Method 4: TensorFlow Lite Converter. Essential for mobile and embedded devices due to the optimized model size and performance.
  • Method 5: Quick Save Weights. Convenient one-liner to save weights only. Simple but requires model architecture to load the weights later.