
Installing ART

The ART client can be installed into any project that runs on a machine with Python available:
pip install openpipe-art
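To confirm the client installed correctly, you can check that its import name (`art`) resolves. A minimal sketch using only the standard library (`is_installed` is a helper defined here, not part of ART):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the given module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# The openpipe-art package installs under the import name "art".
print(is_installed("art"))
```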

Running the server locally

The ART server can be run locally on any machine with a GPU. To install the backend dependencies required for training and inference, you can install the backend extra:
pip install "openpipe-art[backend]"
from art import TrainableModel, gather_trajectory_groups
from art.local.backend import LocalBackend

backend = LocalBackend()

model = TrainableModel(
    name="agent-001",
    project="my-agentic-task",
    base_model="OpenPipe/Qwen3-14B-Instruct",
)

await model.register(backend)

# ... the rest of your code ...
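Note that `await model.register(backend)` must run inside a coroutine; in a plain script you can wrap your setup in an `async` entry point and drive it with `asyncio.run`. A minimal sketch with a stand-in coroutine (`register_stub` is a placeholder, not an ART API):

```python
import asyncio

async def register_stub() -> str:
    """Stand-in for `await model.register(backend)`; simulates an async setup step."""
    await asyncio.sleep(0)
    return "registered"

async def main() -> str:
    # In a real script, backend and model construction plus
    # `await model.register(backend)` would live here.
    return await register_stub()

result = asyncio.run(main())
print(result)
```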

Using a managed autoscaling backend

Instead of managing the GPUs and training processes yourself, you can optionally send inference and training requests to the W&B Training cluster, which autoscales to match your job’s demand. To do so, install openpipe-art without any extras and use ServerlessBackend:
pip install openpipe-art
from art import TrainableModel, gather_trajectory_groups
from art.serverless.backend import ServerlessBackend

backend = ServerlessBackend()

model = TrainableModel(
    name="agent-001",
    project="my-agentic-task",
    base_model="OpenPipe/Qwen3-14B-Instruct",
)

await model.register(backend)

# ... the rest of your code ...
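Managed backends typically authenticate via credentials in the environment; W&B tooling conventionally uses a `WANDB_API_KEY` variable, though the exact variable `ServerlessBackend` expects is an assumption here (consult the docs below). A small fail-fast helper can make a missing credential obvious before a job starts:

```python
import os

def require_env(name: str) -> str:
    """Fetch a required environment variable, failing fast with a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before starting the serverless backend")
    return value

# WANDB_API_KEY is an assumption based on W&B convention; check the ART docs
# for the variable ServerlessBackend actually reads.
# api_key = require_env("WANDB_API_KEY")
```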
To learn more about the ART client and server, see the docs below.