Upload a dataset, pick a base model, and get back a LoRA adapter you can serve anywhere. CLI-first, per-token pricing, and your corpus is never used to train our base models.
A four-step flow: upload a dataset, pick a base model, set the config, review. You see a price estimate before you commit, and a live training-loss chart and phase timeline while the job runs.
Install with `pip install ownllm`, log in via device flow, submit a job with CLI flags, poll until it finishes, and download the adapter.
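The submit-and-poll step boils down to a small loop. A minimal sketch: `fetch_status` here is a hypothetical stand-in for whatever the ownllm CLI or API reports, not a documented client call, and the status strings are assumed.

```python
import time

def poll_until_done(fetch_status, interval_s=1.0, max_polls=100):
    """Poll a job-status callable until it reports a terminal state.

    fetch_status is a hypothetical stand-in for an ownllm status check;
    assumed states: "queued", "running", "succeeded", "failed".
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not reach a terminal state in time")

# Simulated status sequence standing in for a real training job.
states = iter(["queued", "running", "running", "succeeded"])
result = poll_until_done(lambda: next(states), interval_s=0.0)
print(result)  # succeeded
```

Once the loop returns `succeeded`, the adapter is ready to download; on `failed`, check the job logs instead of retrying blindly.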
Train is live in alpha. Deploy and Evaluate come next: the CLI commands are already wired, and the hosted-inference backend ships once we have training feedback.
Instruction tuning, chat, or continued-pretraining tasks. Base models from 3B to 70B parameters. Your data is deleted from the runner when the job completes.
Serve your adapter on vLLM without running GPUs yourself, behind an OpenAI-compatible /v1/chat/completions endpoint. Priced per token.
Today: training loss, eval loss, and before/after samples on a held-out slice. Next: signed bundles, drift gates, and A/B comparison.
Buy once, spend whenever. Larger packs earn a volume bonus of up to +30%. Credits never expire.
Starter, Pro, or Scale: a bigger monthly allowance for a recurring commitment. Top up with credit packs any time.
Create an account, add credits, and submit your first job — or read the quickstart first.