Serverless Models

We currently only support the following model for serverless training. We are actively adding support for both larger and smaller models. If there’s a particular model you’d like to see serverless support for, please send a request to [email protected].
  • OpenPipe Qwen 3 14B Instruct
    • A good balance of performance and size. It supports tool calling and generally trains well. This is our recommended model for users new to RL (see the setup sketch below).
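To get started with this model, here is a minimal sketch that registers an ART `TrainableModel` against it. The `base_model` identifier, the backend import path, and the project/model names are assumptions for illustration; check the current ART docs for the exact values to use with serverless training.

```python
import asyncio

import art
from art.local import LocalBackend  # import path may differ across ART versions


async def main() -> None:
    # "OpenPipe/Qwen3-14B-Instruct" is an assumed identifier for the serverless
    # "OpenPipe Qwen 3 14B Instruct" model; confirm the exact string in the ART
    # docs before training.
    model = art.TrainableModel(
        name="my-first-agent",    # hypothetical model name
        project="rl-quickstart",  # hypothetical project name
        base_model="OpenPipe/Qwen3-14B-Instruct",
    )

    # A local backend is shown here; swap in the serverless backend offered by
    # your ART version to train on OpenPipe's infrastructure.
    backend = LocalBackend()
    await model.register(backend)


if __name__ == "__main__":
    asyncio.run(main())
```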
If you’re developing locally or on your own hardware, there are a couple of other models you could try in addition to the recommended serverless option; see More Models below.

More Models

ART has wide support for models supported by vLLM. However, not all models support all features. For instance, if a model’s chat template does not include tool call support, you won’t be able to use tools with it natively. And if a model’s architecture doesn’t have support for LoRA layers, it won’t be compatible with our LoRA-based backend, but still may work with our full-fine-tuning backend. Here are additional models that we’ve tested and found to work well with ART: If you’re curious about a model that is not listed above, ask in the Discord #support channel.