How InferMargin measures unit economics.
What we read, what we never touch, and how we keep cohort outputs defensible.
- Read-only API
- n ≥ 30 public
- Percentile-only
- Versioned
What InferMargin measures
The primary metric is a proxy for AI unit economics: LLM API COGS divided by AI product revenue. Secondary metrics include an inference-adjusted contribution margin proxy, inference cost per active paying customer, total LLM spend, and ARR / MRR growth rate.
This is a proxy, not full gross margin. We measure inference cost against AI product revenue; full COGS allocation (compute, storage, payroll attribution) comes later.
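The primary proxy can be stated in a few lines. This is a minimal sketch, assuming cost and revenue are already aggregated for the period; the function name and example figures are illustrative, not InferMargin's implementation:

```python
def cogs_to_revenue_ratio(llm_api_cost: float, ai_product_revenue: float) -> float:
    """Primary proxy metric: LLM API COGS / AI product revenue."""
    if ai_product_revenue <= 0:
        raise ValueError("AI product revenue must be positive")
    return llm_api_cost / ai_product_revenue

# Example: $12,400 monthly LLM spend against $85,000 of AI product revenue
ratio = cogs_to_revenue_ratio(12_400, 85_000)
print(f"{ratio:.1%}")  # → 14.6%
```

A ratio trending down as revenue grows is the signal the benchmark is built around; the absolute number only becomes meaningful next to cohort percentiles.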
Data we collect
- LLM provider usage — tokens, model, cost — via the OpenAI Admin API (read-only) or the Anthropic Usage API (read-only).
- Stripe revenue — customers, subscriptions, invoices, products, prices, subscription items — via a Stripe Restricted Key (read-only).
- User-declared metadata — primary use case, ARR band, team size, cohort assignment.
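Once usage rows are pulled from the provider APIs, spend aggregation is straightforward. A minimal sketch, assuming a simplified record shape (this is not the OpenAI or Anthropic APIs' actual response schema; field and function names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """Simplified usage row; not the provider APIs' actual schema."""
    model: str
    tokens: int
    cost_cents: int  # integer cents avoids float rounding in spend totals

def monthly_llm_spend_usd(records: list[UsageRecord]) -> float:
    """Sum provider-reported cost across all models for the billing month."""
    return sum(r.cost_cents for r in records) / 100

records = [
    UsageRecord("gpt-4o", 1_200_000, 930),       # $9.30
    UsageRecord("claude-sonnet", 800_000, 510),  # $5.10
]
print(monthly_llm_spend_usd(records))  # → 14.4
```

Note that only usage metadata (model, tokens, cost) ever enters this pipeline, which is what makes the "no prompts, no outputs" guarantee in the next section enforceable by construction.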
Data we do NOT collect
- No prompts.
- No model outputs.
- No user-level content of any kind.
- No sensitive logs.
Cohort comparability rules
- Each organization is assigned a single primary use case (forced single-select in the discovery questionnaire) to keep cohorts comparable.
- Cohorts are versioned; reassignment between cohorts requires founder approval and a documented rationale.
- Cohorts that would be too narrow for safe publication are blocked — re-identification risk takes precedence over coverage.
Confidentiality thresholds
InferMargin never publishes company-level data. Public reports only include aggregated statistics from cohorts of n ≥ 30. Your private dashboard shows where you stand within a cohort, but no other company can see your raw metrics.
Cohort minimums (n ≥ 30 public, n ≥ 10 private), combined with percentile-only outputs, are designed to prevent re-identification.
Your raw metrics are never published, never shared with other companies, and never displayed in another organization's dashboard. They are read via read-only API access, used to compute your private dashboard and the aggregated cohort percentiles, and protected by row-level security and encryption at rest.
- Public reports require n ≥ 30 within a cohort.
- Private benchmark access requires n ≥ 10 within a cohort.
- Output is always percentile-based (p25 / p50 / p75 / p90).
- Never absolute values. Never company-level data.
Methodology versioning
Metric definitions and cohort criteria are versioned. Every benchmark output references the exact methodology version used to compute it, so historical comparisons remain auditable as definitions evolve.
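The audit trail described above amounts to pinning every output to a version identifier. A minimal sketch; the field names and the version-string format are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkOutput:
    """A published benchmark figure, pinned to the methodology that produced it."""
    cohort_id: str
    metric: str
    percentiles: dict            # percentile-only, e.g. {"p25": ..., "p50": ...}
    methodology_version: str     # e.g. "2025.1"; pins metric + cohort definitions

out = BenchmarkOutput(
    cohort_id="ai-devtools",
    metric="cogs_to_revenue",
    percentiles={"p50": 0.14},
    methodology_version="2025.1",
)
```

Making the record immutable (`frozen=True`) means a stored output can never silently drift away from the methodology version it references.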
Next step
If you want to be part of the first cohort, answer the discovery questionnaire. If you prefer a 25-minute call, book a slot.