Machine Learning Model Deployment: From Jupyter Notebook to the Cloud
Machine learning has become an essential tool for businesses and organizations to make data-driven decisions. However, building a machine learning model is only half the battle. Deploying the model into a production environment where it can be used to make predictions is equally important. In this article, we will explore the steps involved in deploying a machine learning model from a Jupyter Notebook to the cloud.
Developing the Model
The first step in deploying a machine learning model is to develop the model itself. This is typically done in a Jupyter Notebook, where data scientists can test various modeling strategies and fine-tune the model's parameters. Once the model is developed and tested, it's time to move on to the next step.
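As a minimal sketch of this stage, the snippet below assumes a tabular dataset with a target column; the file name, feature columns, and model choice are illustrative placeholders, not a prescribed workflow.
# Notebook-style experiment (dataset file and model choice are hypothetical)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a tabular dataset with a 'target' label column (placeholder file name)
data = pd.read_csv('training_data.csv')
X = data.drop(columns=['target'])
y = data['target']

# Hold out a test set so candidate models can be compared fairly
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a candidate model and inspect its held-out accuracy
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))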
Migrating the Code
The second step in deploying a machine learning model is to migrate the code from the Jupyter Notebook into executable modules. The goal is to automate the entire model-building and prediction process, which is difficult to do from a notebook. In practice, this means creating separate preprocess, train, and inference Python scripts.
# Example of a preprocess script using scikit-learn pipelines
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Chain imputation and scaling so the same steps run at train and inference time
preprocess_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])

# 'data' holds the raw feature table (e.g., a pandas DataFrame)
preprocessed_data = preprocess_pipeline.fit_transform(data)
# Example of a train script using scikit-learn pipelines
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

# Bundle preprocessing and the model so they are fitted and saved together
train_pipeline = Pipeline([
    ('preprocess', preprocess_pipeline),
    ('classifier', RandomForestClassifier())
])

# X_train and y_train are the training features and labels
trained_model = train_pipeline.fit(X_train, y_train)
# Example of an inference script
def predict(model, data):
    # 'model' is the fitted pipeline, so preprocessing is applied automatically;
    # transforming 'data' separately here would preprocess it twice
    prediction = model.predict(data)
    return prediction
Saving the Model Pipelines
It is important to save the model pipelines after they have been trained. This is because the pipelines contain the preprocessing steps and the trained model, which are required for making predictions on new data. Saving the pipelines also allows for easy reusability of the model in other applications.
# Example of saving the trained model pipeline
import joblib
joblib.dump(train_pipeline, 'trained_model_pipeline.joblib')

# Example of loading the trained model pipeline back for inference
trained_model_pipeline = joblib.load('trained_model_pipeline.joblib')
Building an API
The third step in deploying a machine learning model is to build an API that takes real-time inputs, usually in the form of JSON. The API could serve a user-facing client, such as a mobile or web app, or act as an internal service that another application calls to fetch predictions.
# Example of a Flask API endpoint
from flask import Flask, request, jsonify

app = Flask(__name__)
# trained_model is the fitted pipeline, e.g. loaded with joblib at startup

@app.route('/predict', methods=['POST'])
def predict_endpoint():  # renamed to avoid shadowing the predict() helper
    # Incoming JSON may need converting to a DataFrame matching the training columns
    data = request.get_json()
    prediction = predict(trained_model, data)
    return jsonify(prediction.tolist())
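For completeness, here is a sketch of how a client might call this endpoint; the host, port, and feature names are illustrative assumptions.
# Hypothetical client call to the /predict endpoint (host, port, and features are made up)
import requests

payload = [{'feature_1': 3.2, 'feature_2': 0.7}]  # one input record
response = requests.post('http://localhost:5000/predict', json=payload)
print(response.json())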
Building a Model Prediction Endpoint
The fourth step in deploying a machine learning model is to build a model prediction endpoint on the API that invokes the model's prediction method, such as the common .predict() or .generate() calls. When an input hits the /predict endpoint, it is passed to the model, and the API returns the model's prediction, usually also as JSON, for example: {"id": "0001", "prediction_score": 55}.
# Example of a model prediction endpoint
class ModelPredictionEndpoint:
def __init__(self, model):
self.model = model
def predict(self, data):
preprocessed_data = preprocess_pipeline.transform(data)
prediction = self.model.predict(preprocessed_data)
return prediction.tolist()
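A brief usage sketch, tying this class to the pipeline saved earlier with joblib:
# Load the saved pipeline and wrap it in the endpoint class
import joblib

trained_model_pipeline = joblib.load('trained_model_pipeline.joblib')
endpoint = ModelPredictionEndpoint(trained_model_pipeline)
# endpoint.predict(new_data) now returns a JSON-serializable list of predictions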
Wrapping the API
The fifth step in deploying a machine learning model is to wrap the API in a container using a tool like Docker, so that the code runs independently of the host environment. This ensures the API can be deployed on any platform that runs containers.
# Example of a Dockerfile
FROM python:3.8-slim-buster
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code and start the API
COPY . .
CMD ["python", "app.py"]
Docker and Kubernetes
Docker and Kubernetes are two popular tools used for deploying machine learning models to the cloud. Docker is a containerization platform that allows developers to package their applications and dependencies into a single container that can be run on any platform. Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Using Docker and Kubernetes can simplify the deployment process and make it more scalable. Instead of deploying the application on a single server, Docker and Kubernetes allow for the deployment of multiple instances of the application, which can be scaled up or down based on demand.
# Example of a Kubernetes deployment file for a machine learning model
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-registry/my-app:latest
ports:
- containerPort: 5000
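The replicas field above pins the deployment at three instances. To scale up or down with demand, as described earlier, Kubernetes can manage the replica count automatically through a HorizontalPodAutoscaler. The sketch below assumes a metrics server is running in the cluster; the replica bounds and CPU target are illustrative.
# Example of a HorizontalPodAutoscaler for the deployment above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10          # illustrative upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # illustrative CPU target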
Deploying to the Cloud
With the model API container, you can now push the same API to any cloud provider's service offerings: AWS EC2, SageMaker, or Elastic Beanstalk; GCP Vertex AI or App Engine. Here's an example of deploying the Docker container on AWS Elastic Beanstalk:
- Create an Elastic Beanstalk environment with a Docker platform.
- Upload the Docker image to a container registry like Docker Hub or Amazon ECR.
- Configure the environment to pull your image, e.g. via a Dockerrun.aws.json file (sketched after the commands below).
- Deploy the application to the Elastic Beanstalk environment.
# Example of deploying the Docker container on AWS Elastic Beanstalk
# Build the Docker image
docker build -t my-app .
# Tag the Docker image for the registry
docker tag my-app:latest my-registry/my-app:latest
# Push the Docker image to the container registry
docker push my-registry/my-app:latest
# Initialize the project with the Docker platform
eb init -p docker my-app
# Create an Elastic Beanstalk environment
eb create my-environment
# Deploy the application (the image to run is declared in Dockerrun.aws.json)
eb deploy
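For the single-container Docker platform, Elastic Beanstalk reads the image to run from a Dockerrun.aws.json file in the project root. A minimal sketch, using the image pushed above (the registry name is a placeholder):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "my-registry/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 5000 }
  ]
}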
Conclusion
Deploying a machine learning model to the cloud involves several steps: developing the model, migrating the code into executable modules, building an API with a model prediction endpoint, wrapping the API in a container with Docker, and pushing it to a cloud provider. Saving the fitted model pipelines and using Docker and Kubernetes keep the deployment process reproducible and scalable.