GPU Fleet Management
Provision and manage GPU instances across Vast.ai, RunPod, Lambda, and HuggingFace Spaces from one dashboard.
Provision GPU instances across Vast.ai, RunPod, Lambda, and HuggingFace. Run training jobs, manage storage, and deploy AI coding agents — all from one platform.
$ train launch --provider vastai --gpu "RTX 4090" --count 2
Searching for 2x RTX 4090 on Vast.ai...
Found offer: $0.32/hr, US-West, 99.8% reliability
Instance created: i-a7f3b2c1
Installing trainsh-agent...
Agent connected. Host "gpu-4090-west" is online.
$ _
One dashboard to provision, monitor, and tear down GPU instances on every supported provider.
Full terminal access to your GPU machines through the browser. No SSH keys, no VPN, no firewall headaches.
Run Codex, Claude Code, and OpenCode on your GPU machines, or use Workers AI for lightweight tasks on the edge.
Datasets and checkpoints on R2, HuggingFace Hub, or Google Drive. Zero egress on R2, S3-compatible for rclone.
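Because R2 speaks the S3 API, a checkpoint bucket can be wired up as an ordinary rclone remote. A minimal `rclone.conf` entry might look like this (the account ID, keys, and remote name are placeholders you would substitute with your own):

```ini
[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_R2_ACCESS_KEY
secret_access_key = YOUR_R2_SECRET_KEY
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
```

With that in place, `rclone copy checkpoints/ r2:my-bucket/checkpoints` pushes checkpoints to R2 with no egress fees on the way back down.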
Define training recipes as DAG workflows and execute them across your fleet with automatic checkpointing.
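The recipe model above can be sketched in a few lines of Python: steps declare their dependencies, run in topological order, and leave a checkpoint marker so a re-run skips finished work. This is an illustrative sketch, not the platform's actual recipe format or scheduler:

```python
from graphlib import TopologicalSorter
from pathlib import Path

# Hypothetical recipe: each step maps to the steps it depends on.
# A real fleet runner would dispatch steps to remote hosts; here we
# run local callables just to show the DAG + checkpointing idea.
RECIPE = {
    "download_data": [],
    "tokenize": ["download_data"],
    "train": ["tokenize"],
    "evaluate": ["train"],
}

def run_recipe(recipe, actions, ckpt_dir="ckpts"):
    """Run steps dependencies-first, skipping any step already checkpointed."""
    Path(ckpt_dir).mkdir(exist_ok=True)
    done = set()
    for step in TopologicalSorter(recipe).static_order():
        marker = Path(ckpt_dir) / f"{step}.done"
        if marker.exists():          # automatic checkpointing:
            done.add(step)           # finished steps are skipped on re-run
            continue
        actions[step]()              # execute the step's work
        marker.write_text("ok")      # record completion
        done.add(step)
    return done
```

Re-running the same recipe after a crash picks up at the first step without a checkpoint marker, which is the behavior the feature describes.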
Link your GitHub account to auto-create Codespaces, deploy agents, and code remotely with AI assistance.
Search and compare GPU offers across providers in real time. Launch instances with one click.
Vast.ai: Community & datacenter GPUs, spot bidding, lowest prices
RunPod: Secure & community cloud, on-demand and spot pods
Lambda: Enterprise-grade GPU instances, Lambda Stack pre-installed
HuggingFace: GPU Spaces for training, Hub for datasets and model storage
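Under the hood, comparing offers like the search result above comes down to filtering on a reliability floor and sorting by price. A minimal sketch (the offer fields and values are illustrative, not the real marketplace schema):

```python
# Hypothetical offer records, shaped loosely like the terminal demo output.
OFFERS = [
    {"provider": "vastai", "gpu": "RTX 4090", "usd_hr": 0.32, "region": "US-West", "reliability": 0.998},
    {"provider": "runpod", "gpu": "RTX 4090", "usd_hr": 0.44, "region": "EU",      "reliability": 0.995},
    {"provider": "vastai", "gpu": "RTX 4090", "usd_hr": 0.29, "region": "Asia",    "reliability": 0.970},
    {"provider": "lambda", "gpu": "A100",     "usd_hr": 1.29, "region": "US-East", "reliability": 0.999},
]

def best_offers(offers, gpu, min_reliability=0.99):
    """Cheapest offers for a GPU model, dropping hosts below a reliability floor."""
    matches = [o for o in offers
               if o["gpu"] == gpu and o["reliability"] >= min_reliability]
    return sorted(matches, key=lambda o: o["usd_hr"])
```

With the sample data, `best_offers(OFFERS, "RTX 4090")` drops the cheap-but-flaky Asia host and ranks the remaining offers by hourly price.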
Sign in with Google, GitHub, Apple, Notion, or HuggingFace. No credit card required for the platform.
Get Started Free