TrainShell
GPU Training Orchestration

Automate your training

Provision GPU instances across Vast.ai, RunPod, Lambda, and HuggingFace. Run training jobs, manage storage, and deploy AI coding agents — all from one platform.

train.sh

$ train launch --provider vastai --gpu "RTX 4090" --count 2
Searching for 2x RTX 4090 on Vast.ai...
Found offer: $0.32/hr, US-West, 99.8% reliability
Instance created: i-a7f3b2c1
Installing trainsh-agent...
Agent connected. Host "gpu-4090-west" is online.
$ _

GPU Fleet Management

Provision and manage GPU instances across Vast.ai, RunPod, Lambda, and HuggingFace Spaces from one dashboard.

Browser Terminal

Full terminal access to your GPU machines through the browser. No SSH keys, no VPN, no firewall headaches.

AI Coding Agents

Run Codex, Claude Code, and OpenCode on your GPU machines, or use Workers AI for lightweight tasks on the edge.

Multi-Backend Storage

Store datasets and checkpoints on R2, HuggingFace Hub, or Google Drive. R2 offers zero-egress downloads and an S3-compatible API that works with rclone.
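Because R2 exposes an S3-compatible endpoint, a standard rclone remote can point at it. A minimal sketch; the account ID, keys, and bucket name are placeholders, not real values:

```shell
# Append a hypothetical R2 remote to rclone's config.
# Replace the placeholder credentials with your own before use.
mkdir -p ~/.config/rclone
cat >> ~/.config/rclone/rclone.conf <<'EOF'
[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
endpoint = https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
EOF

# With real credentials in place, checkpoints can then be pushed with:
#   rclone copy ./checkpoints r2:my-bucket/checkpoints
# Later downloads of those checkpoints incur no egress fees on R2.
```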

Job Orchestration

Define training recipes as DAG workflows and execute them across your fleet with automatic checkpointing.
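To illustrate the idea (this is not TrainShell's actual recipe format, and the step names are hypothetical), a DAG of training steps can be linearized into a valid execution order with the POSIX `tsort` utility:

```shell
# Hypothetical dependency edges: "A B" means step A must finish before B starts.
cat > /tmp/recipe.deps <<'EOF'
prepare_data train
train evaluate
train export_model
EOF

# tsort prints the steps in a dependency-respecting order, one per line.
tsort /tmp/recipe.deps
```

A scheduler can then run the steps in that order, or in parallel once each step's parents have completed, checkpointing after each one.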

GitHub Codespaces

Link your GitHub account to auto-create Codespaces, deploy agents, and code remotely with AI assistance.

Multi-Cloud GPU Marketplace

Search and compare GPU offers across providers in real time. Launch instances with one click.

Vast.ai

Community & datacenter GPUs, spot bidding, lowest prices

RunPod

Secure & community cloud, on-demand and spot pods

Lambda

Enterprise-grade GPU instances, Lambda Stack pre-installed

HuggingFace

GPU Spaces for training, Hub for datasets and model storage

Ready to train?

Sign in with Google, GitHub, Apple, Notion, or HuggingFace. No credit card required to use the platform.

Get Started Free