A small team building a private fine-tuning platform.
We’re in alpha. The platform trains LoRA adapters on your data and gives you back weights you own. We’re heads-down on making the training experience boring and predictable before we ship hosted inference on top of it.
What we believe.
Private by contract.
Your corpus and your queries never train our base models. Written into the DPA, not just promised.
Weights are yours.
Every completed training job produces a downloadable adapter. You can walk away to your own stack any time.
Operable by a small team.
One CLI, one API, one web app. No platform team required to run a fine-tune in production.
What we’re shipping now.
See the platform →
LoRA training on 3B–70B bases
Instruction, chat, and continued-pretraining tasks. Per-token pricing, with refunds for jobs that fail before training completes.
Adapter download from day one
safetensors + config files delivered to your account the moment a job completes.
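Because adapters ship as safetensors files, you can sanity-check a download without loading any ML framework: the safetensors container format is documented, with the first 8 bytes holding a little-endian u64 header length followed by a JSON header describing each tensor. A minimal stdlib-only sketch (the tensor names below are made-up placeholders, not our actual adapter layout):

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    """Return the JSON header of a safetensors file without reading tensor data."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # first 8 bytes: header size
        return json.loads(f.read(header_len))

def write_safetensors(path, tensors):
    """Write a minimal safetensors file. tensors: name -> (dtype, shape, raw bytes)."""
    header, offset = {}, 0
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(data)]}
        offset += len(data)
    encoded = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(encoded)))
        f.write(encoded)
        for _, _, data in tensors.values():
            f.write(data)

# Fake LoRA A/B pair for one layer -- placeholder names, not a real adapter.
fake = {
    "layers.0.attn.q_proj.lora_A": ("F32", [8, 4096], b"\x00" * (8 * 4096 * 4)),
    "layers.0.attn.q_proj.lora_B": ("F32", [4096, 8], b"\x00" * (4096 * 8 * 4)),
}
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as tmp:
    write_safetensors(tmp.name, fake)
    for name, meta in read_safetensors_header(tmp.name).items():
        print(name, meta["dtype"], meta["shape"])
```

Listing tensor names and shapes this way is a quick check that a downloaded adapter is intact before you hand it to a serving stack.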
Hosted inference · next
vLLM-backed, OpenAI-compatible endpoints for users who don’t want to run GPUs themselves.
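“OpenAI-compatible” means existing clients can point at the platform by swapping the base URL. A stdlib-only sketch of the request shape against the standard chat completions route (the base URL, API key, and model name here are placeholders, not real endpoints):

```python
import json
from urllib import request

def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-style chat completions request for any compatible server."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

# Placeholder values -- substitute your real endpoint and adapter name.
req = build_chat_request(
    "https://api.example.invalid",
    "sk-placeholder",
    "my-team/my-adapter",
    [{"role": "user", "content": "Summarize our return policy."}],
)
print(req.full_url)  # → https://api.example.invalid/v1/chat/completions

# Sending it is one call once an endpoint exists:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same shape is what official OpenAI SDKs emit, which is why pointing them at a compatible server is usually a one-line base-URL change.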
Get in touch.
We’re a small team. The fastest way to reach us is the contact form — it goes straight to founders during alpha.
Legal & policies.
Formal policy documents will be published before the platform exits alpha. In the meantime, current drafts are available on request for customers evaluating the platform for production use.