Private fine-tuning · alpha

Fine-tune a model on your data. Own the weights.

Upload a dataset, pick a base model, get back a LoRA adapter you can serve anywhere. CLI-first, per-token pricing, and your corpus never trains our base models.

Per-token billing · Your data never trains ours · Adapters always downloadable
ownllm · finetune · job_8c3f…
$ ownllm finetune --data ./corpus.jsonl --task instruct --quality balanced --model 7b
uploading 4,812 examples (312 MB)
estimate ≈ 3.01 credits · confirming …
training · epoch 3 / 3 · eval loss 1.42
adapter ready · download with ownllm download
Built on
PyTorch · PEFT · vLLM · FastAPI · PostgreSQL
alpha · 2026
What you get

A trained adapter, an eval report, and your credits refunded on failure.

Trained adapter
Every completed job produces a LoRA adapter you own and can download. Self-host on your own inference stack from day one.
Files · safetensors · config
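A minimal self-hosting sketch, assuming vLLM's OpenAI-compatible server with LoRA enabled · the base-model id and adapter path are placeholders, not pinned by the product:

# serve the downloaded adapter on your own GPU (paths illustrative)
$ vllm serve meta-llama/Meta-Llama-3-8B-Instruct \
    --enable-lora \
    --lora-modules my-adapter=./adapter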
Eval report
Before / after samples on a held-out slice of your data. Training loss, eval loss, and per-step telemetry in the job page.
Surface · app · CLI · API
Refund on failure
Jobs that fail before the first gradient step (infra error, bad dataset) return reserved credits in full. No quiet burn.
Policy · pre-training failure · full refund
Two surfaces, one platform

Drive it from the CLI, or from the web app.

Web app

Wizard-driven job submission.

A four-step flow: dataset upload, base model, config, review. Price estimate before you commit. Live training loss chart and phase timeline while it runs.

  • Upload · drag-and-drop JSONL · presigned S3 (sample row below)
  • Configure · rank · epochs · learning rate · seed
  • Review · token count · credit estimate · confirm
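A sketch of what one dataset row might look like · the prompt/completion schema here is illustrative, not the confirmed upload format:

# one JSON object per line (schema illustrative)
$ head -n 1 corpus.jsonl
{"prompt": "Summarize the Q3 MSA renewal terms.", "completion": "Renews annually under MSA §4.2; notice due 30 days prior."}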
CLI

Three commands, from dataset to adapter.

Install with pip install ownllm, log in via device flow, submit a job with flags, poll until done, download the adapter.

# install + auth
$ pip install ownllm
$ ownllm login
$ ownllm finetune --data ./data.jsonl --task instruct --model 7b
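
Once the job reports done, pull the adapter · the id below is the demo's, and exact download flags may differ in alpha:

# fetch the finished adapter (id from the submit step)
$ ownllm download job_8c3f…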
How it works

Three primitives. One trained adapter.

Train is live in alpha. Deploy and Evaluate come next: the CLI commands are already wired, and the hosted-inference backend ships once alpha training feedback is in.

I · TRAIN

LoRA fine-tune on your dataset.

Instruction, chat, or continued-pretraining tasks. Base models from 3B to 70B. Your data is deleted from the runner after the job completes.

Status · live
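A hypothetical invocation for the other task types · only instruct appears in the demo, so the chat flag value and the 70b size string are assumptions based on the ranges named above:

# chat-task run on the largest listed base size (flag values assumed)
$ ownllm finetune --data ./chats.jsonl --task chat --model 70b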
II · DEPLOY

Hosted inference endpoints.

Serve your adapter on vLLM without running GPUs yourself. OpenAI-compatible /v1/chat/completions. Priced per token.

Status · soon · post-alpha
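Once Deploy ships, calling your adapter should look like any OpenAI-compatible endpoint. A sketch with a hypothetical host and adapter name · only the /v1/chat/completions path is stated above:

# hypothetical host + adapter name · standard OpenAI request shape
$ curl https://api.ownllm.example/v1/chat/completions \
    -H "Authorization: Bearer $OWNLLM_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "my-adapter", "messages": [{"role": "user", "content": "Draft the renewal email."}]}'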
III · EVALUATE

Signed eval bundles, per run.

Training loss, eval loss, before / after samples on a held-out slice — today. Signed bundles, drift gates, and A/B compare — next.

Status · partial · alpha
Job report

Every run produces a report you can read in sixty seconds.

See the platform →
job_8c3f… · llama-3-8b-instruct · DONE
EVAL LOSS · 1.42 · −0.38 vs start
TOKENS SEEN · 5.1M · 2k rows × 512 tok × 5 epochs
RUNTIME · 11m 42s · A100 SXM
COST · 3.01 cr · $3.01 USD
before / after · eval sample 12 · REVIEW
PROMPT · held-out
Draft a renewal email to our top customer, referencing the Q3 MSA clause.
BASE · before fine-tune
Generic renewal template. Does not reference the MSA or the customer segment.
eval score · 0.42
ADAPTER · after fine-tune
Cites MSA §4.2 directly, uses the customer-segment tone guide from training data, includes the correct renewal date.
eval score · 0.88
Trust posture

Data posture — what we commit to, today.

Read the trust page →
YOUR DATA
Your corpus never trains our base models. Your queries never train our base models. DPA on request.
WEIGHTS
Every completed training job produces a downloadable adapter. Self-host on any inference stack — no lock-in.
ISOLATION
Each job runs in its own disposable container on a dedicated GPU. Your dataset is deleted from the runner after the job completes.
COMPLIANCE
No certifications held today. Working toward SOC 2 Type I; customer-specific contractual commitments negotiable.
Pricing

One credit equals one dollar. Public per-token rates.

See full pricing →
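For a feel of scale, the sample report above implies · demo numbers, not a quoted rate:

3.01 credits ÷ 5.1M tokens seen ≈ 0.59 credits per 1M tokens, i.e. about $0.59 per million tokens at 1 credit = $1.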
CREDIT PACKS

Pay-as-you-go. No subscription.

Buy once, spend whenever. Larger packs earn a volume bonus of up to +30%. Credits never expire.

From $10 · 10 credits · See packs →
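Illustrative arithmetic at the top bonus tier: a $100 pack at +30% would land 130 credits · actual tier thresholds are on the packs page.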
SUBSCRIPTIONS

Monthly credits at a discount.

Starter, Pro, or Scale — bigger monthly allowance for a recurring commit. Top up with packs any time.

From $29 / mo · 35 credits · See plans →

Fine-tune a private model on your data. Own the weights.

Create an account, add credits, and submit your first job — or read the quickstart first.