Managed or local training
ART provides two backend classes:

- ServerlessBackend - train remotely on autoscaling GPUs
- LocalBackend - run your agent and training code on the same machine
If your machine has a capable GPU, you can train with LocalBackend. If your agent is running on a machine without an advanced GPU (this includes most personal computers and production servers), use ServerlessBackend instead. ServerlessBackend optimizes speed and cost by autoscaling across managed clusters.
ServerlessBackend
Setting up ServerlessBackend requires a W&B API key. Once you have one, you can provide it to ServerlessBackend either as an environment variable or as an initialization argument.
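As a minimal sketch, either approach might look like the following. The import path and `api_key` argument name shown in option 2 are assumptions, not confirmed API:

```python
import os

# Option 1: set the key as an environment variable before initializing
# the backend, which reads it from the environment.
os.environ["WANDB_API_KEY"] = "your-wandb-api-key"

# Option 2: pass the key directly at initialization time.
# The import path and keyword name below are assumptions.
# from art.serverless import ServerlessBackend
# backend = ServerlessBackend(api_key=os.environ["WANDB_API_KEY"])
```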
ServerlessBackend automatically saves your LoRA checkpoints as W&B Artifacts and deploys them for production inference on W&B Inference.
LocalBackend
The LocalBackend class runs a vLLM server and either an Unsloth or torchtune training instance on the same machine where your agent is executing. This is a good fit if you’re already running your agent on a machine with a GPU.
To declare a LocalBackend instance, follow the code sample below:
Using a backend
Once initialized, a backend can be used in the same way regardless of whether it runs locally or remotely. To see LocalBackend and ServerlessBackend in action, try the examples below.