GETTING STARTED · QUICKSTART
Quickstart
Submit your first LoRA fine-tuning job, watch it train, and download the adapter. Once you have a dataset ready, the whole flow takes about fifteen minutes end to end.
1 · Install the CLI
Requires Python 3.10+.
$ pip install ownllm
✓ installed ownllm
2 · Sign in
$ ownllm login   # opens a browser tab to complete device-flow auth
✓ logged in as you@company.com
The token is written to ~/.ownllm_cli/token.json with 0600 permissions. Sign out any time with ownllm logout.
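If you want to confirm the token file really is owner-only, a quick check is easy to script. A minimal sketch (`is_owner_only` is a hypothetical helper, not part of the CLI; the path comes from the step above):

```python
import os
import stat


def is_owner_only(path: str) -> bool:
    """Return True if the file is readable/writable by its owner only (mode 0600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600
```

For example, `is_owner_only(os.path.expanduser("~/.ownllm_cli/token.json"))` should return `True` after a fresh login.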
3 · Prepare a dataset
JSONL, one example per line. For instruction-tuning, each line has instruction and output fields:
{"instruction": "Summarize this renewal clause...", "output": "In one sentence..."}
{"instruction": "Draft a reply to...", "output": "Thanks for..."}4 · Submit a fine-tuning job
$ ownllm finetune --data ./dataset.jsonl \
    --task instruct \
    --quality balanced \
    --model 7b
→ uploading dataset · estimating cost
→ estimated cost ≈ 3.01 credits · confirm? [y/N]
The CLI prints a token-level cost estimate before the job starts. If you’d rather not confirm interactively, pass --dry-run to print the estimate and exit without submitting, or --no-wait to submit without blocking.
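The real estimate is computed server-side from token counts, but before uploading a large file it can help to gauge the size locally. A crude sketch, with two loud assumptions: whitespace splitting is not the service's tokenizer, and the ~1.3 tokens-per-word ratio is only a common rule of thumb:

```python
import json


def approx_token_count(path: str, tokens_per_word: float = 1.3) -> int:
    """Crude token estimate: whitespace-split words across all fields, scaled by a rule of thumb."""
    words = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            example = json.loads(line)
            for value in example.values():
                words += len(str(value).split())
    return round(words * tokens_per_word)
```

Treat the result as an order-of-magnitude check only; the --dry-run estimate is authoritative.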
5 · Watch it train
$ ownllm status
job_8c3f… · RUNNING · epoch 2/3 · eval loss 1.58
For a live tail: ownllm logs <job-id> --follow. Or open the job in the web app for a training-loss chart and phase timeline.
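Status can also be polled from a script, for example to gate a deploy step on training finishing. A generic sketch of the loop, assuming a `check` callable that returns the current state string (in practice you would wire it to a subprocess call to `ownllm status`; the terminal-state names here are assumptions, not documented values):

```python
import time
from typing import Callable

TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}  # assumed names, adjust to real output


def wait_for_job(check: Callable[[], str], interval: float = 30.0,
                 timeout: float = 6 * 3600) -> str:
    """Poll `check` until it returns a terminal state, or raise if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = check()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal state within the timeout")
```

A 30-second interval is plenty for jobs that run for minutes to hours and keeps the polling load negligible.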
6 · Download the adapter
$ ownllm download <job-id>
✓ adapter saved to ./adapters/<job-id>/
The download contains the LoRA weights in safetensors format plus a config.json. Serve it on any vLLM or TGI stack, or wait for hosted inference, which ships after the alpha.
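Before wiring the adapter into a serving stack, a quick sanity check that the download is complete can catch a truncated transfer. A minimal sketch (`verify_adapter` is a hypothetical helper; the expected file names follow the layout described above):

```python
import json
from pathlib import Path


def verify_adapter(adapter_dir: str) -> bool:
    """True if the directory holds a parseable config.json and at least one .safetensors file."""
    root = Path(adapter_dir)
    config = root / "config.json"
    if not config.is_file():
        return False
    try:
        json.loads(config.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        return False  # config missing or corrupt
    return any(root.glob("*.safetensors"))
```

If this returns False, re-run the download before debugging the serving side.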