ART divides the logic for training an agent into two distinct abstractions. The client is responsible for interfacing with the environment in which the agent runs and for sending inference and training requests to the backend. The backend is responsible for generating tokens at inference time, updating the agent's weights based on past performance, and managing GPU memory as it switches between inference and training modes. This separation of concerns simplifies the process of teaching an agent to improve its performance using RL. While the backend's training and inference settings are highly configurable, they ship with intelligent defaults that save beginners time when getting started. However, there are a few important considerations to weigh before running your first training job.
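To make the division of labor concrete, here is a minimal sketch of the client side, assuming ART's trajectory API (art.Trajectory, model.openai_client()) and a toy reward of our own invention. From the client's point of view, the backend is just an OpenAI-compatible inference endpoint plus a train method:
import art

async def rollout(model: art.TrainableModel) -> art.Trajectory:
    # inference requests are served by the backend's vLLM server
    client = model.openai_client()
    messages = [{"role": "user", "content": "2 + 2 = ?"}]
    completion = await client.chat.completions.create(
        model=model.name, messages=messages, max_completion_tokens=32
    )
    choice = completion.choices[0]
    # the client owns the environment, so it computes the reward
    reward = 1.0 if "4" in (choice.message.content or "") else 0.0
    return art.Trajectory(messages_and_choices=[*messages, choice], reward=reward)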

Managed, remote, or local training

ART provides three backend classes:
  • ServerlessBackend - train remotely on autoscaling GPUs
  • SkyPilotBackend - train remotely on self-managed infra
  • LocalBackend - run your agent and training code on the same machine
If your agent is already set up on a machine equipped with an advanced GPU and you want to run training on the same machine, use LocalBackend. If your agent is running on a machine without an advanced GPU (this includes most personal computers and production servers), use SkyPilotBackend or ServerlessBackend instead. ServerlessBackend optimizes speed and cost by autoscaling across managed clusters, while SkyPilotBackend lets you use your own infra. All three backend types implement the art.Backend interface, and the client interacts with each of them in exactly the same way. Under the hood, SkyPilotBackend configures a remote machine equipped with a GPU to run LocalBackend and forwards requests from the client to the remote instance. ServerlessBackend runs within W&B Training clusters and autoscales GPUs to meet training and inference demand.

ServerlessBackend

Setting up ServerlessBackend requires a W&B API key. Once you have one, you can provide it to ServerlessBackend either as an environment variable or as an initialization argument.
from art.serverless.backend import ServerlessBackend

backend = ServerlessBackend(
  api_key="my-api-key",
  # or set WANDB_API_KEY in the environment
)
As your training job progresses, ServerlessBackend automatically saves your LoRA checkpoints as W&B Artifacts and deploys them for production inference on W&B Inference.
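For example, to supply the key through the environment instead of as an argument (a minimal sketch; as noted above, ServerlessBackend reads WANDB_API_KEY when it is initialized):
import os

from art.serverless.backend import ServerlessBackend

# set WANDB_API_KEY before the backend is initialized
os.environ["WANDB_API_KEY"] = "my-api-key"

backend = ServerlessBackend()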

SkyPilotBackend

To use SkyPilotBackend, you’ll need to install the optional dependency:
pip install openpipe-art[skypilot]
When a SkyPilotBackend instance is initialized, it does a few things:
  • Provisions a remote machine with an advanced GPU (by default on RunPod)
  • Installs openpipe-art and its dependencies
  • Initializes a LocalBackend instance with vLLM and a training server (Unsloth or torchtune)
  • Registers the LocalBackend instance so the client can forward requests to it over HTTP
To initialize a SkyPilotBackend instance, follow the code sample below:
from art.skypilot import SkyPilotBackend

backend = await SkyPilotBackend.initialize_cluster(
    # name of the cluster in SkyPilot's registry
    cluster_name="my-cluster",
    # version of openpipe-art that should be installed on the remote cluster
    # defaults to the version installed on the client
    art_version="0.3.12",
    # path to environment variables (e.g. WANDB_API_KEY) to set on the remote cluster
    env_path=".env",
    # the GPU the cluster is equipped with
    gpu="H100"
    # alternatively, more complicated requirements can be specified in
    # the `resources` argument
)
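If a single gpu string isn't expressive enough, the resources argument accepts a SkyPilot resources spec. Here is a sketch, assuming SkyPilot's sky.Resources API; the exact fields you need will depend on your infra:
import sky

from art.skypilot import SkyPilotBackend

backend = await SkyPilotBackend.initialize_cluster(
    cluster_name="my-cluster",
    # e.g. request one H100 and at least 64 GB of RAM
    resources=sky.Resources(accelerators="H100:1", memory="64+"),
)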
When a training job is finished, you can shut down a cluster either through code or the CLI. Code:
backend = await SkyPilotBackend.initialize_cluster(...)

# ...training code...

backend.down()
CLI:
uv run sky down my-cluster
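If you're not sure whether a cluster is still running (and accruing costs), SkyPilot's status command lists the clusters it manages:
uv run sky status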

LocalBackend

The LocalBackend class runs a vLLM server and either an Unsloth or torchtune training instance on the same machine where your agent itself is executing. This is a good fit if you're already running your agent on a machine with a GPU. To initialize a LocalBackend instance, follow the code sample below:
from art.local import LocalBackend

backend = LocalBackend(
    # set to True if you want your backend to shut down automatically
    # when your client process ends
    in_process=False,
    # local path where the backend will store trajectory logs and model weights
    path="./.art",
)

Using a backend

Once initialized, a backend can be used in the same way regardless of whether it runs locally or remotely.
import art

BACKEND_TYPE = "serverless"

if BACKEND_TYPE == "serverless":
    from art.serverless.backend import ServerlessBackend
    backend = ServerlessBackend()
elif BACKEND_TYPE == "remote":
    from art.skypilot import SkyPilotBackend
    backend = await SkyPilotBackend.initialize_cluster(
        cluster_name="my-cluster",
        gpu="H100"
    )
else:
    from art.local import LocalBackend
    backend = LocalBackend()

model = art.TrainableModel(...)

await model.register(backend)

# ...training code...
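The elided training code is where the client gathers trajectories and sends them back to the backend. A minimal sketch of one such loop, assuming ART's art.TrajectoryGroup and model.train API and a rollout coroutine like the one sketched at the top of this page:
for step in range(10):
    # collect a group of trajectories; the backend serves inference via vLLM
    trajectories = [await rollout(model) for _ in range(8)]
    # send them back for a training step; the backend swaps GPU memory from
    # inference to training mode, updates the weights, then reloads them
    await model.train([art.TrajectoryGroup(trajectories)])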
To see LocalBackend and ServerlessBackend in action, try the examples below.