Isolated runner per job.
Every training job runs in its own disposable container on a dedicated GPU. No shared state with other customers.
We don’t claim certifications we don’t have. This page tells you exactly what OwnLLM does with your data today, what’s on the post-alpha roadmap, and where customer-specific commitments can be negotiated by contract.
Your corpus and any future inference traffic are never used to improve our base models. DPA draft available on request.
The training dataset is removed from the runner the moment a job completes. The adapter artifact is retained for download.
Every completed job produces a downloadable LoRA adapter in standard safetensors format. Self-host on any compatible stack.
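Because the artifact is plain safetensors, you can inspect it with no ML stack at all. A minimal sketch of a header reader using only the Python standard library, following the published safetensors layout (an 8-byte little-endian header length followed by a JSON header); the file name and tensor name below are illustrative, not what OwnLLM emits:

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header of a safetensors file (stdlib only).

    Layout: 8-byte little-endian unsigned header length, then that many
    bytes of JSON mapping tensor names to dtype/shape/data_offsets.
    """
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

# Hand-build a tiny single-tensor file to exercise the reader.
header = {"lora_A.weight": {"dtype": "F32", "shape": [2, 2],
                            "data_offsets": [0, 16]}}
blob = json.dumps(header).encode()
with open("adapter.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 16)

print(read_safetensors_header("adapter.safetensors"))
```

Listing tensor names and shapes this way is a quick sanity check that a downloaded adapter is intact before loading it into your serving stack.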
OwnLLM is an alpha platform. No SOC 2 report has been issued yet, and we’re not going to claim otherwise. The table below is where each framework stands today.
Datasets and adapter artifacts live in object storage with server-side encryption. Credentials and secrets are encrypted at rest in the database.
TLS on every public endpoint. Uploads use short-lived presigned URLs scoped to a single job.
Each training job is a fresh container on a dedicated GPU, provisioned for that run only. The runner sees the dataset only while it trains.
Your data is used only to train the adapter you ask for. Not to train our base models. Not to improve platform models. Never.
During alpha, incident notifications go out by email to affected customers within 24 hours of confirmation, with a written post-mortem within 10 business days. A public status page lands alongside the hosted-inference release.
Security reports: security@ownllm.com. We respond within 72 hours and credit reporters on disclosure if they wish.