5 Best Ways to Use Keras for Ensembling in Python

πŸ’‘ Problem Formulation: Ensembling is a machine learning technique that combines predictions from multiple models to produce a final, more accurate model output. This article explores how to implement ensembling in Python using the powerful Keras library. For instance, you might want to blend outputs from several neural networks to predict stock prices more accurately than any single model.

Method 1: Averaging Models

The averaging method involves building several models and taking the average of their predictions. Typically, these models are trained separately on the data. The idea is that by combining diverse models, the ensemble can reduce variance, potentially leading to more reliable predictions.

Here’s an example:

from keras.models import load_model
import numpy as np

# Assuming models are already trained and saved as `.h5` files
models = [load_model('model1.h5'), load_model('model2.h5'), load_model('model3.h5')]

def ensemble_predictions(models, X):
    predictions = [model.predict(X) for model in models]
    return np.mean(predictions, axis=0)

# Example of input data
X_test = np.array([[0.1, 0.2, 0.3]])

# Averaging ensemble predictions
ensemble_pred = ensemble_predictions(models, X_test)

Output:

array([[0.45, 0.55]])

This snippet defines an ensemble_predictions function that collects a prediction from each model and averages them with NumPy. The same approach works for regression outputs and for classification, provided each model outputs class probabilities.
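For classification, the averaged probabilities can then be turned into class labels with argmax. A minimal sketch, assuming ensemble_pred holds per-class probabilities as in the example output above:

# Pick the most probable class from the averaged probabilities
predicted_classes = np.argmax(ensemble_pred, axis=1)
print(predicted_classes)  # e.g. array([1]) for the example output above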

Method 2: Weighted Averaging

Weighted averaging is a more nuanced version of Method 1, where each model’s prediction is multiplied by a weight that signifies the importance or confidence in that specific model’s prediction. This can lead to more accurate ensembles when some models are known to perform better than others.

Here’s an example:

def weighted_ensemble_predictions(models, weights, X):
    weighted_preds = [model.predict(X) * weight for model, weight in zip(models, weights)]
    return np.sum(weighted_preds, axis=0)

weights = [0.3, 0.4, 0.3]

# Weighted averaging ensemble predictions
weighted_ensemble_pred = weighted_ensemble_predictions(models, weights, X_test)

Output:

array([[0.47, 0.53]])

This snippet multiplies each model’s predictions by its weight and sums the results. The weights should sum to 1 so the ensemble output stays on the same scale as the individual predictions. Adjusting the weights lets you favor models that are expected to perform better.
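If you start from raw scores such as validation accuracies, you can normalize them so they sum to 1. A small sketch, where the validation accuracies are assumed values:

# Hypothetical validation accuracies for the three models
raw_scores = np.array([0.85, 0.90, 0.86])

# Normalize so the weights sum to 1
weights = raw_scores / raw_scores.sum()
print(weights)  # approximately [0.326, 0.345, 0.330]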

Method 3: Stacking

Stacking, also known as stacked generalization, involves training a new model to combine the predictions of several base models. The base models are first trained, and their predictions are then used as inputs into a second-level model, which aims to learn the best combination of the base model predictions.

Here’s an example:

from sklearn.linear_model import LinearRegression

# X_val and y_val are a held-out validation set (assumed to exist);
# fitting the meta-model on the test set would leak information.
# Stack each base model's predictions side by side as features
base_predictions = np.hstack([model.predict(X_val) for model in models])

# Train a meta-model on the base models' predictions
meta_model = LinearRegression().fit(base_predictions, y_val)

# Get the ensemble prediction for new data using the meta-model
test_predictions = np.hstack([model.predict(X_test) for model in models])
stacked_prediction = meta_model.predict(test_predictions)

Output:

array([0.46])

In this example, predictions from the base models on a held-out validation set are stacked column-wise and used as input features for a linear regression meta-model. By learning how best to combine the base model predictions, stacking often improves predictive performance; training the meta-model on data the base models have not seen avoids an optimistic bias.
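To avoid relying on a single validation split, the meta-features can also be built from out-of-fold predictions. A rough sketch for one base architecture, assuming a hypothetical build_model() factory that returns a fresh compiled single-output Keras model, plus X_train and y_train arrays:

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
oof_preds = np.zeros((len(X_train), 1))

for train_idx, val_idx in kf.split(X_train):
    fold_model = build_model()  # hypothetical factory returning a fresh compiled model
    fold_model.fit(X_train[train_idx], y_train[train_idx], epochs=10, verbose=0)
    # Predict on the samples this fold never saw during training
    oof_preds[val_idx] = fold_model.predict(X_train[val_idx], verbose=0)

# Repeating this per base architecture yields leakage-free meta-features for the meta-model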

Method 4: Bootstrap Aggregating (Bagging)

Bootstrap Aggregating, or Bagging, is an ensemble technique where models are trained on different subsets of the training data, typically by sampling with replacement. This leads to lower variance and prevents overfitting. Models usually have the same architecture and hyperparameters.

Here’s an example:

from sklearn.utils import resample

def bagging_ensemble(models, X_train, y_train, X_test):
    preds = []
    for model in models:
        # Draw a bootstrap sample (with replacement) of features and targets together
        X_boot, y_boot = resample(X_train, y_train)

        # Train this model on its own bootstrap sample
        model.fit(X_boot, y_boot, epochs=10, verbose=0)
        preds.append(model.predict(X_test))
    # Average the predictions of all bootstrap-trained models
    return np.mean(preds, axis=0)

# Bagging ensemble prediction (X_train and y_train are assumed to exist)
bagging_pred = bagging_ensemble(models, X_train, y_train, X_test)

Output:

array([[0.43]])

The bagging_ensemble function trains each model on its own bootstrap sample of the training data, resampling features and targets together, and then averages the models’ predictions on the test data to form the ensemble prediction.
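To keep the bags independent, each model can start from a fresh, untrained copy of the same architecture rather than continuing from already-trained weights. A brief sketch, assuming base_model is an existing Keras model and a mean-squared-error regression setup:

from keras.models import clone_model

# Create an untrained copy with the same architecture as base_model
fresh_model = clone_model(base_model)
fresh_model.compile(optimizer='adam', loss='mse')

# fresh_model can now be fit on a bootstrap sample inside the bagging loop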

Bonus One-Liner Method 5: Blending with Functional API

Keras’ Functional API can be used to blend the outputs of several trained models. Here, a simple blend model is built that wires the base models’ inputs to an averaging layer over their outputs.

Here’s an example:

from keras.layers import average
from keras.models import Model

# Collect the output tensors of the trained models
outputs = [model.output for model in models]

# Blend the model outputs by averaging
blend = average(outputs)

# Create the blend model
blend_model = Model(inputs=[model.input for model in models], outputs=blend)

Output:

This construct doesn’t produce output by itself but sets up a blend model that can make predictions based on the averaged outputs of the individual models.

This snippet creates a blending ensemble in essentially one line by using Keras’ average layer, which averages the outputs of all the provided models within the Functional API.
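To use the blend model, the same batch is passed once per base-model input (three inputs in this example). A usage sketch under the assumptions above; note that combining separately loaded models this way may require their layer names to be unique:

# Each base model receives the same input batch
blend_pred = blend_model.predict([X_test, X_test, X_test])
print(blend_pred)  # averaged output of the three base models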

Summary/Discussion

  • Method 1: Averaging Models. This method is straightforward and effective when models make uncorrelated errors. However, it doesn’t differentiate between model performances.
  • Method 2: Weighted Averaging. Weighted averaging gives more control over the ensemble by considering individual model performance. It requires fine-tuning of weights, adding complexity to the model selection process.
  • Method 3: Stacking. Stacking is powerful and can provide significant performance improvements but is more complex and computationally demanding due to training a meta-model.
  • Method 4: Bagging. Bagging can improve model robustness to overfitting and is useful for unstable models. Each model must be trained from scratch, increasing computational cost.
  • One-Liner Method 5: Blending with Functional API. This is an elegant solution using Keras’ Functional API to blend models, providing ease of implementation but less flexibility in manipulating individual model outputs.