The ART server can be run locally on any machine with a GPU. To install the dependencies required for training and inference, install the `backend` extra:
```bash
pip install openpipe-art[backend]
```
```python
from art import TrainableModel, gather_trajectory_groups
from art.local.backend import LocalBackend

backend = LocalBackend()

model = TrainableModel(
    name="agent-001",
    project="my-agentic-task",
    base_model="Qwen/Qwen2.5-14B-Instruct",
)
await model.register(backend)

# ... the rest of your code ...
```
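Since `LocalBackend` expects a CUDA-capable GPU on the machine, it can be useful to confirm the NVIDIA driver is visible before starting the backend. A minimal, standard-library-only sanity check (this helper is illustrative and not part of the ART API) might look like:

```python
import shutil
import subprocess

def gpu_visible() -> bool:
    """Best-effort check that the NVIDIA driver is installed and responding."""
    if shutil.which("nvidia-smi") is None:  # driver utility not on PATH
        return False
    # nvidia-smi exits non-zero when no device or driver is available
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0

if __name__ == "__main__":
    print("GPU detected" if gpu_visible() else "no GPU detected")
```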
## Running the client locally and connecting to a remote server
The ART client can also run locally while connecting to a remote server, which ART will automatically provision for you via SkyPilot. To use SkyPilot, you'll need to install the optional dependency:
```bash
pip install openpipe-art[skypilot]
```
Then you can use `SkyPilotBackend` in your code:
```python
from art import TrainableModel, gather_trajectory_groups
from art.skypilot.backend import SkyPilotBackend

backend = await SkyPilotBackend.initialize_cluster(
    cluster_name="my-cluster",
    gpu="H100",
)

model = TrainableModel(
    name="agent-001",
    project="my-agentic-task",
    base_model="Qwen/Qwen2.5-14B-Instruct",
)
await model.register(backend)

# ... the rest of your code ...
```
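Note that the snippets above use top-level `await`, which only works in notebooks and async REPLs. In a standalone script, wrap the calls in an `async` entry point and run it with `asyncio.run`. A minimal sketch, with a placeholder coroutine standing in for the awaitable ART calls:

```python
import asyncio

async def main() -> str:
    # Placeholder for the awaitable ART calls, e.g.
    # `await model.register(backend)` and the rest of your code.
    await asyncio.sleep(0)
    return "registered"

if __name__ == "__main__":
    print(asyncio.run(main()))  # prints "registered"
```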
To learn more about the ART client and server, see the docs below.