A custom .pth ML model deployment and inference


I have my new dataset in CSV format in an Amazon S3 bucket and my custom ML model as a .pth file in the SAME Amazon S3 bucket. Both are accessible from a SageMaker notebook instance. Now I want to run the dataset through the model for evaluation in the notebook and return the result at the end.

Can you please provide detailed steps to follow?

GGK
asked a month ago · 157 views
2 Answers

There are multiple ways to achieve this, but downloading and loading the model on the notebook instance itself is probably the least operationally scalable option.

I would suggest packaging/importing your existing .pth file as a SageMaker Model, which will unlock several options for deployment & management. Assuming your model is less than ~1B parameters (if it's larger, you'd probably be better off looking at SageMaker's dedicated Large Model Inference containers), you can use the default PyTorch container with a model.tar.gz as described here.

  • Create a folder (on your notebook instance or wherever)
  • Copy your .pth file into it with the name model.pth (the name is important)
  • IF your model requires extra dependencies beyond the PyTorch basics (already in the default container image), you can create a code/ subfolder and add a code/requirements.txt to install them at run-time
  • IF your model requires specific inference pre- or post-processing logic, you can add a code/inference.py with functions as described here - but this is probably not necessary if you already have your model as a .pth and your inputs/outputs will just be CSV, JSON, or NumPy, since the base container already handles these
  • Compress your folder contents to a tarball, taking care that the contents end up at the root level of the archive: i.e. run tar -czf ../model.tar.gz . from inside the folder where model.pth is, NOT tar -czf model.tar.gz my-model-folder.
  • Upload your packaged tarball to S3 somewhere.

Once you have a model.tar.gz containing model.pth at the top level and optional inference.py / requirements.txt under a code/ subfolder, you're ready to use it in SageMaker.
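As a rough illustration, here's a minimal packaging sketch in Python for the steps above (the folder name, bucket, and S3 key are placeholders to substitute with your own):

import tarfile
import boto3

# Assumed local layout, per the steps above (folder name is a placeholder):
# my-model-folder/
#   model.pth
#   code/inference.py       (optional)
#   code/requirements.txt   (optional)
folder = 'my-model-folder'

# Build the tarball with the folder *contents* at the archive root
with tarfile.open('model.tar.gz', 'w:gz') as tar:
    tar.add(f'{folder}/model.pth', arcname='model.pth')
    tar.add(f'{folder}/code', arcname='code')  # only if you created a code/ subfolder

# Upload the packaged tarball to S3 (bucket and key are placeholders)
boto3.client('s3').upload_file('model.tar.gz', 'your-s3-bucket-name', 'models/model.tar.gz')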

This example notebook shows how to deploy your model to a real-time endpoint using the SageMaker Python SDK (import sagemaker, available on PyPI but pre-installed on SageMaker notebooks). If you don't need an inference.py, or it's already packaged in the model.tar.gz, you can skip the entry_point and source_dir arguments of PyTorchModel.
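If you go the real-time route, the deployment looks roughly like this (the framework/Python versions, instance type, and S3 URI are assumptions you'd adjust to match your model):

import sagemaker
from sagemaker.pytorch import PyTorchModel

role = sagemaker.get_execution_role()  # IAM role of the notebook instance

pytorch_model = PyTorchModel(
    model_data='s3://your-s3-bucket-name/models/model.tar.gz',  # your packaged tarball
    role=role,
    framework_version='2.1',   # pick versions matching the PyTorch your .pth was trained with
    py_version='py310',
    # entry_point='inference.py', source_dir='code',  # only if not already inside the tarball
)

# Real-time endpoint: billed until you delete it
predictor = pytorch_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge',
)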

However, for your use case a batch transform job may be more appropriate than a real-time endpoint. A real-time endpoint provides online serving but is billed until you turn it off, whereas with SageMaker Batch Transform, SageMaker spins up on-demand instance(s) to run the batch inference and automatically shuts them down as soon as the job finishes.

Once you have your PyTorchModel, instead of calling .deploy() you can call .transformer(...) to configure a batch job - full Python Transformer API here.

I don't know your exact setup, but I can suggest some likely parameters (a sketch combining them follows the two lists below)...

In .transformer():

  • strategy='SingleRecord' if you want SageMaker to send your CSV records to the model one by one, or 'MultiRecord' if your model can process multiple records per request. If you go with MultiRecord, you'll probably want to set max_payload to something small like 1 or 2 (megabytes).
  • accept="text/csv" (Assuming you want CSV output files)
  • assemble_with="Line" (Assuming you want CSV output files)

In your resultant transformer.transform(...):

  • content_type="text/csv" (Assuming you have CSV input files on S3)
  • split_type="Line" (in a CSV, each newline-separated line is a separate record)
  • join_source="Input" if you also want your output files to contain the input features... Otherwise they'll just contain the model outputs, in the same order as the input records.
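Putting those suggestions together, here's a rough sketch (the instance type, S3 paths, and strategy choice are assumptions to adapt):

# Configure the batch job from the PyTorchModel created earlier
transformer = pytorch_model.transformer(
    instance_count=1,
    instance_type='ml.m5.xlarge',
    strategy='MultiRecord',      # or 'SingleRecord' - see above
    max_payload=1,               # MB; keep small when using MultiRecord
    accept='text/csv',
    assemble_with='Line',
    output_path='s3://your-s3-bucket-name/batch-output/',
)

# Run inference over the CSV input already sitting in S3
transformer.transform(
    data='s3://your-s3-bucket-name/path/to/your-dataset.csv',
    content_type='text/csv',
    split_type='Line',
    join_source='Input',         # optional: include input features in the output files
)
transformer.wait()  # results land under output_path when the job completes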

Most Batch Transform example notebooks also train the model (which you don't need to do in your case). This notebook might be useful but it uses image data rather than CSV. Maybe you're using a tabular PyTorch model like PyTorch-TabNet? This notebook might be relevant but is quite outdated now.

AWS
EXPERT
Alex_T
answered a month ago

To deploy a custom PyTorch model (.pth file) and perform inference using a new dataset in CSV format within a SageMaker notebook instance, follow these detailed steps:

Step 1: Set Up Your SageMaker Notebook Instance

  1. Launch a SageMaker Notebook Instance:
  • Go to the AWS Management Console.
  • Navigate to the SageMaker service.
  • Click on "Notebook instances" and then "Create notebook instance."
  • Configure your notebook instance as required and make sure it has access to your S3 bucket.
  2. Install Required Libraries:
  • Once your notebook instance is running, open a new notebook and install the necessary libraries if they are not already installed.
!pip install boto3 pandas torch

Step 2: Load Data and Model from S3

  1. Load the Dataset:
  • Use boto3 to download the CSV dataset from S3 to the notebook instance and load it into a Pandas DataFrame.
import boto3
import pandas as pd

s3 = boto3.client('s3')
bucket_name = 'your-s3-bucket-name'
dataset_key = 'path/to/your-dataset.csv'

s3.download_file(bucket_name, dataset_key, 'dataset.csv')
dataset = pd.read_csv('dataset.csv')
  2. Load the Model:
  • Similarly, download the model file (.pth) from S3.
model_key = 'path/to/your-model.pth'
s3.download_file(bucket_name, model_key, 'model.pth')

Step 3: Define the Model Architecture and Load Weights

  1. Define the Model Architecture:
  • Define the same architecture that was used to train the model.
import torch
import torch.nn as nn

class CustomModel(nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        # Define your layers here; set input_size, hidden_size and output_size
        # to the values used when the model was trained
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.layer2 = nn.Linear(hidden_size, output_size)
        # Add more layers as required

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = self.layer2(x)
        # Add more layers as required
        return x

model = CustomModel()
model.load_state_dict(torch.load('model.pth', map_location='cpu'))  # map_location lets the load succeed even if the model was saved on a GPU
model.eval()

Step 4: Preprocess the Data

  1. Preprocess the Dataset:
  • Ensure the dataset is in the correct format for the model.
# Example preprocessing - modify according to your dataset
def preprocess(data):
    # Convert the DataFrame to numpy array or torch tensor
    # Perform any scaling, normalization, etc.
    return torch.tensor(data.values, dtype=torch.float32)

input_data = preprocess(dataset)

Step 5: Perform Inference

  1. Perform Inference:
  • Use the model to perform inference on the preprocessed data.
with torch.no_grad():
    predictions = model(input_data)
  2. Post-process the Results:
  • Convert the predictions to a readable format (e.g., class labels if it’s a classification problem).
# Example post-processing - modify according to your model output
predicted_classes = predictions.argmax(dim=1)

Step 6: Save the Results to S3

  1. Save the Results:
  • Save the inference results to a CSV file and upload it back to the S3 bucket.
results_df = pd.DataFrame(predicted_classes.numpy(), columns=['Predicted'])
results_df.to_csv('results.csv', index=False)

s3.upload_file('results.csv', bucket_name, 'path/to/save/results.csv')

Complete Example

Here’s the complete example script in one place:

import boto3
import pandas as pd
import torch
import torch.nn as nn

# Define the model architecture
class CustomModel(nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        # Define your layers here; set input_size, hidden_size and output_size
        # to the values used when the model was trained
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.layer2 = nn.Linear(hidden_size, output_size)
        # Add more layers as required

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = self.layer2(x)
        # Add more layers as required
        return x

# Initialize S3 client
s3 = boto3.client('s3')
bucket_name = 'your-s3-bucket-name'
dataset_key = 'path/to/your-dataset.csv'
model_key = 'path/to/your-model.pth'

# Download dataset and model from S3
s3.download_file(bucket_name, dataset_key, 'dataset.csv')
s3.download_file(bucket_name, model_key, 'model.pth')

# Load dataset into a DataFrame
dataset = pd.read_csv('dataset.csv')

# Initialize and load the model
model = CustomModel()
model.load_state_dict(torch.load('model.pth', map_location='cpu'))  # map_location lets the load succeed even if the model was saved on a GPU
model.eval()

# Preprocess the dataset
def preprocess(data):
    return torch.tensor(data.values, dtype=torch.float32)

input_data = preprocess(dataset)

# Perform inference
with torch.no_grad():
    predictions = model(input_data)

# Post-process the results
predicted_classes = predictions.argmax(dim=1)

# Save the results to a CSV file
results_df = pd.DataFrame(predicted_classes.numpy(), columns=['Predicted'])
results_df.to_csv('results.csv', index=False)

# Upload the results back to S3
s3.upload_file('results.csv', bucket_name, 'path/to/save/results.csv')

By following these steps, you can deploy your custom PyTorch model, perform inference on a new dataset, and handle the data within a SageMaker notebook instance.

EXPERT
answered a month ago
EXPERT
reviewed a month ago