Getting Started with Ray Serve for High‑Performance Model Serving


Serving machine learning models efficiently at scale has become one of the biggest challenges in the MLOps lifecycle. As machine learning workloads grow increasingly complex, with multiple models, asynchronous workflows, real-time inference requirements, and hybrid cloud deployment scenarios, there’s an urgent need for a serving solution that is scalable, flexible, and developer-friendly.

This is where Ray Serve shines. Built on top of the Ray ecosystem, Ray Serve is a high-performance, Python-native model serving framework that simplifies the process of turning ML models into production APIs. Whether you’re deploying a single model, a chain of models, or a multi-tenant ML service architecture, Ray Serve offers a scalable, unified, and composable solution to help developers serve machine learning models effortlessly.

In this detailed guide, we’ll walk through:

  • What Ray Serve is and why it matters

  • Its key features that make it ideal for production model serving

  • How it compares to traditional serving platforms

  • Hands-on steps to build and deploy a model

  • Performance optimization best practices

  • How to scale from local deployment to Kubernetes

  • Real-world use cases showing Ray Serve in action

Whether you're a machine learning engineer, backend developer, or MLOps practitioner, this is your ultimate guide to getting started with Ray Serve for high-performance model serving.

Why Ray Serve?
A Python-First Model Serving Framework

One of Ray Serve’s most compelling features is that it is Python-first by design. You don’t have to write separate model server code in C++ or rely on Java wrappers. Instead, you simply define a Python class, decorate it with @serve.deployment, and deploy it. Everything from orchestration to autoscaling is handled behind the scenes.

This makes it incredibly easy to get started, especially for data scientists and ML engineers who are already familiar with Python. With just a few lines of code, you can deploy a model and expose it as a REST endpoint or gRPC service.
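To give a flavor of what that looks like, here is a minimal sketch; the EchoModel class below is a placeholder rather than a real model, and the quickstart later in this guide walks through an actual HuggingFace deployment:

from ray import serve

@serve.deployment
class EchoModel:
    async def __call__(self, request):
        # Echo back the JSON payload that was posted to the endpoint
        data = await request.json()
        return {"echo": data}

# Deploy the placeholder model on a local Ray cluster
serve.run(EchoModel.bind())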

Unified Serving Layer for Any ML Framework

Ray Serve is framework-agnostic, which means you can deploy models written in:

  • PyTorch

  • TensorFlow

  • Scikit-Learn

  • XGBoost

  • HuggingFace Transformers

  • Custom models built with NumPy or plain Python logic

You’re not locked into a specific ecosystem. This is particularly beneficial in real-world environments where teams often work with heterogeneous stacks. Ray Serve supports them all, unifying them into a consistent deployment and management interface.

Composable Model Workflows

Modern machine learning systems rarely involve just one model. More often than not, they are pipelines consisting of multiple models or services that need to work in tandem.

For example, you might have:

  • A preprocessing service to clean input

  • A feature extractor model

  • A main prediction model

  • A post-processing step to format the output

Ray Serve allows you to compose these individual components into a unified service. Each one can be deployed as a separate Ray Serve deployment, and you can chain them using deployment handles, all with minimal code overhead.
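As a rough sketch of that composition pattern, here is what chaining two of those stages can look like. The Preprocessor and Predictor classes are placeholders, and the "prediction" is a stand-in for a real model call:

from ray import serve

@serve.deployment
class Preprocessor:
    def __call__(self, text: str) -> str:
        # Stand-in preprocessing step: normalize the raw input
        return text.strip().lower()

@serve.deployment
class Predictor:
    def __init__(self, preprocessor):
        # A handle to the Preprocessor deployment is injected at bind time
        self.preprocessor = preprocessor

    async def __call__(self, request):
        text = (await request.json()).get("text", "")
        cleaned = await self.preprocessor.remote(text)
        # Placeholder "prediction"; a real model call would go here
        return {"prediction": len(cleaned)}

# Compose the two deployments into one application and run it
app = Predictor.bind(Preprocessor.bind())
serve.run(app)

Each deployment in the chain can still be scaled and configured independently, while the application is deployed and versioned as a single unit.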

Runs Everywhere: Local, Cloud, Kubernetes

Ray Serve can be deployed in any environment: locally on your laptop, on cloud VMs, or on Kubernetes clusters using KubeRay. For large-scale enterprise deployments, you can also use Anyscale, the managed platform for Ray.

This makes Ray Serve uniquely flexible: you can prototype locally and scale seamlessly to cloud environments without changing your codebase. It’s built for portability.

Built-in Autoscaling, Batching, and Load Balancing

At its core, Ray Serve is an actor-based serving framework: each deployment is backed by a pool of Ray actors (replicas) that can be scaled horizontally.

You can:

  • Automatically scale replicas up or down

  • Use microbatching to combine requests for GPU efficiency

  • Distribute traffic using internal load balancers

These built-in capabilities make Ray Serve not only developer-friendly but also production-ready out of the box.

Quickstart: Deploying a Simple HuggingFace Model
Step 1 – Installation and Setup

To get started, install Ray Serve and the necessary libraries:

pip install "ray[serve]" transformers torch requests

Start a Ray cluster locally:

import ray

ray.init()

Then import the necessary modules:

from ray import serve

from transformers import pipeline

from starlette.requests import Request

Step 2 – Define Your Deployment

Here’s an example of a simple translation service using a HuggingFace t5-small model:

@serve.deployment(num_replicas=2, ray_actor_options={"num_cpus": 0.2, "num_gpus": 0})
class Translator:
    def __init__(self):
        # Load the HuggingFace translation pipeline once per replica
        self.model = pipeline("translation_en_to_fr", model="t5-small")

    async def __call__(self, request: Request):
        data = await request.json()
        text = data.get("text", "")
        result = self.model(text)
        return result[0]["translation_text"]

Step 3 – Start the Serve Cluster and Deploy

# Build the application and run it on the local Ray cluster
translator_app = Translator.bind()
serve.run(translator_app)

Your model is now running at http://localhost:8000/. You can send a POST request with a JSON payload to get translated results in real time.
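For example, a quick client-side check from another terminal might look like this (assuming the service is running locally on the default port):

import requests

response = requests.post(
    "http://localhost:8000/",
    json={"text": "Hello, how are you?"},
)
print(response.text)  # e.g. a French translation of the input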

This end-to-end deployment takes fewer than 20 lines of code. That’s the power of Ray Serve.

Deep Dive into Key Ray Serve Features
Native Autoscaling for Deployments

Ray Serve allows you to configure autoscaling for each deployment by setting num_replicas="auto" and defining an autoscaling_config.

You can specify:

  • Min and max replicas

  • Target number of in-flight requests

  • Scale-down cooldown period

This makes your service elastic, scaling up under load and scaling down during idle times, without manual intervention.
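As an illustrative sketch, an elastic deployment might be configured as below. The AutoscaledModel class is a placeholder, the numbers are examples to tune for your workload, and the field names follow recent Ray Serve releases, so check the autoscaling docs for your version:

from ray import serve

@serve.deployment(
    num_replicas="auto",
    autoscaling_config={
        "min_replicas": 1,             # never scale below one replica
        "max_replicas": 10,            # cap the replica count under load
        "target_ongoing_requests": 2,  # target in-flight requests per replica
        "downscale_delay_s": 300,      # wait before scaling down after load drops
    },
)
class AutoscaledModel:
    async def __call__(self, request):
        return "ok"  # placeholder for real inference

app = AutoscaledModel.bind()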

Efficient Microbatching for GPU Optimization

Ray Serve includes built-in support for request batching, which is especially useful for deep learning models that perform better with batched inputs.

By using @serve.batch, you can enable this mode and configure:

  • Maximum batch size

  • Batch wait timeout (in seconds)

This ensures you can handle high throughput with low latency, especially for GPU-bound workloads.
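Here is a minimal sketch of what that can look like; BatchedModel is a placeholder, and the len() call stands in for real batched inference:

from ray import serve

@serve.deployment
class BatchedModel:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def handle_batch(self, inputs: list):
        # The decorated method receives a list of inputs and must
        # return a list of outputs in the same order.
        return [len(text) for text in inputs]

    async def __call__(self, request):
        text = (await request.json()).get("text", "")
        # Callers pass a single item; Ray Serve groups calls into batches
        return await self.handle_batch(text)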

Fractional GPU and CPU Support

With Ray Serve, you can assign fractional GPU or CPU resources to each deployment. For instance, you can run four lightweight models on a single GPU by assigning each deployment num_gpus=0.25.

This increases resource utilization and makes it possible to run multiple models on the same node.
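For example, here is a sketch of a deployment whose four replicas share one physical GPU; QuarterGpuModel is a placeholder, and this assumes a GPU is actually available in the cluster:

from ray import serve

@serve.deployment(num_replicas=4, ray_actor_options={"num_gpus": 0.25})
class QuarterGpuModel:
    async def __call__(self, request):
        return "ok"  # placeholder for GPU-backed inference

# Four replicas at 0.25 GPUs each pack onto a single GPU
app = QuarterGpuModel.bind()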

Composable Multi-Model Pipelines

As mentioned earlier, Ray Serve makes it easy to chain multiple models together into a single pipeline.

You can use deployment handles like this:

# Look up a handle to another deployment and call it asynchronously
handle = serve.get_deployment_handle("MyOtherModel", app_name="default")
result = await handle.remote(input_data)

Each service can be scaled independently, making your pipeline modular, maintainable, and scalable.

Built-In Monitoring and Observability

Ray Serve exposes detailed metrics and traces to help monitor service health:

  • Processing latency

  • Request queue length

  • Replica status

  • Error counts

You can integrate these metrics with Prometheus, Grafana, or use Ray Dashboard for a visual overview.

Advantages Over Traditional Model Serving Tools
Compared to TensorFlow Serving or TorchServe
  • Python-native: No need to write configuration files or protobuf schemas.

  • Composable pipelines: Ray Serve lets you chain PyTorch and TensorFlow models in the same application.

  • Autoscaling and batching: These features are either absent or limited in traditional tools.

Compared to Cloud ML Serving Platforms
  • No vendor lock-in: You can move between AWS, GCP, or on-prem with no code change.

  • Custom deployment logic: Full control over batch sizes, replica behavior, and routing.

  • Local-to-cloud continuity: Develop on your laptop, scale on the cloud.

Compared to Kubernetes-Based Serving
  • No Kubernetes expertise required: Ray Serve doesn’t need Kubernetes to work.

  • Faster local prototyping: Deployments take seconds locally.

  • Optional KubeRay support: For advanced production workloads.

Performance Optimization Best Practices
Smart Resource Allocation
  • Assign only the CPU or GPU required per replica.

  • Use Ray’s actor-level ray_actor_options to define resource usage.

Enable Asynchronous Methods

Always write async methods (async def __call__) to support concurrent request handling within each replica.

Profile Batch Sizes

Test different max_batch_size and batch_wait_timeout_s values to find the best tradeoff between latency and throughput.

Monitor, Log, and Adjust

Use Ray Dashboard or Prometheus to monitor real-time metrics. Use this data to adjust replica counts or batch sizes as needed.

Production Deployment Strategies
Deploying on VMs or Bare Metal

You can use serve.run() in Python, or the serve build and serve deploy CLI commands, to run your applications directly on virtual machines or dedicated servers.

Running on Kubernetes with KubeRay

For enterprise-grade deployments, KubeRay allows you to run Ray Serve on Kubernetes. It provides:

  • Resilient orchestration

  • Autoscaling clusters

  • High availability

Use the RayService custom resource definition (CRD) to define production-grade serving clusters.

Managed Deployment with Anyscale

Anyscale is the enterprise Ray platform that abstracts away infrastructure. It provides features like:

  • Managed Ray clusters

  • Auto-scaling and failover

  • GPU-aware scheduling

  • Real-time observability and debugging tools

Real-World Use Cases of Ray Serve
Klaviyo's Real-Time ML System

Klaviyo uses Ray Serve to run its real-time inference system for customer segmentation and recommendations. With Ray Serve, they built a fully dynamic runtime that supports multi-tenant model execution with sub-second latency.

LLM Deployment and Text Pipelines

Several startups use Ray Serve to host LLM-based services (e.g., summarization, code generation, translation) where latency and composability are critical. Ray Serve helps them chain multiple models while handling burst traffic using autoscaling.

Computer Vision Inference at Scale

Ray Serve is used in high-resolution image processing tasks, such as object detection and medical image segmentation, where fractional GPU usage and batching are vital to keep inference times low and cost predictable.

Final Thoughts

Ray Serve is more than just a model server. It’s a developer-friendly, Python-first system for building, scaling, and maintaining machine learning services at production scale.

Its composability, built-in autoscaling, support for batching and GPU optimization, and ability to work across environments make it a perfect choice for any ML engineer looking to scale their model serving architecture without the overhead of managing multiple systems.

Whether you're building a small project or a massive enterprise-grade ML platform, Ray Serve enables rapid iteration, seamless scaling, and total flexibility.