
Investor Relations

We’re building reliable, arbitration-first AI that users trust. This page provides an overview for current and prospective investors.

  • Monthly Active Users (private beta)
  • Messages Processed (cumulative to date)
  • Retention Rate (30-day retention)

Company overview

MingLLM delivers instant, reliable AI answers using a judge-and-arbitrate approach across multiple models. Our web client demonstrates the experience; APIs are planned.

  • 2025 — Alpha launched; conversational web client live.
  • Safety & quality improvements through arbitration framework.
  • Early enterprise interest for eval & compliance workflows.
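
For illustration only, here is a minimal sketch of a judge-and-arbitrate flow of the kind described above. The candidate model identifiers and the generate and judge_score helpers are hypothetical stand-ins, not MingLLM’s actual models or API.

```python
# Illustrative sketch only: the model names, generate(), and judge_score() are
# hypothetical stand-ins, not MingLLM's actual implementation or API.
from concurrent.futures import ThreadPoolExecutor

CANDIDATE_MODELS = ["model-a", "model-b", "model-c"]  # hypothetical identifiers

def generate(model: str, prompt: str) -> str:
    # Stand-in for a call to one candidate model.
    return f"[{model}] draft answer to: {prompt}"

def judge_score(prompt: str, answer: str) -> float:
    # Stand-in for a lightweight judge scoring answer quality for this prompt.
    return float(len(answer))

def arbitrate(prompt: str) -> str:
    # Fan the prompt out to every candidate model in parallel.
    with ThreadPoolExecutor(max_workers=len(CANDIDATE_MODELS)) as pool:
        candidates = list(pool.map(lambda m: generate(m, prompt), CANDIDATE_MODELS))
    # The judge scores each candidate; the highest-scoring answer wins arbitration.
    return max(candidates, key=lambda answer: judge_score(prompt, answer))

print(arbitrate("What is an SCC in data-transfer law?"))
```

Fanning the prompt out in parallel keeps the added latency close to that of the slowest candidate rather than the sum of all candidate calls.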

Leadership

Founder & CEO: Yiming Beckmann (product, infra). Advisors include leaders from AI safety, evals, and distributed systems.

We value reliability, clarity, and fast iteration.

Governance

Private company. Charter documents and policies are published below.

Policy highlights (ToS & Privacy)

  • Eligibility & jurisdictions. Access may be restricted where prohibited by law (e.g., sanctioned countries).
  • Security notifications. If you suspect credential compromise, notify us at support@mingllm.com immediately.
  • APIs & rate limits. Limits may be enforced by throttling or temporary suspension; we’ll notify you when practical.
  • Inputs vs. outputs. Don’t input sensitive personal data unless necessary. Outputs are probabilistic and not guaranteed to be accurate.
  • Model improvement. Any opt-in training uses de-identification where possible; opt-out is respected for future processing.
  • Data rights. Request access, correction, or deletion via support@mingllm.com (we target responses within 30 days).
  • Termination. To close an account and request data deletion, email support@mingllm.com.
  • Disputes. We encourage good‑faith resolution first; mediation or arbitration may apply as described in the Terms.
  • International transfers. Cross‑border transfers use appropriate safeguards (e.g., SCCs); see Terms for current frameworks.

This summary is informational; the Terms of Service govern.

Code of Conduct (updated 2025‑10‑25)

MingLLM is committed to a welcoming, safe, and productive environment for employees, contributors, users, and partners. This Code applies to all spaces run by MingLLM, including apps, forums, events, and repositories.

  • Be respectful. Disagree without personal attacks. Harassment, hate speech, or discrimination based on protected characteristics is not tolerated.
  • Assume good intent; address impact. If harm occurs, prioritize the impacted person’s experience and resolve directly and quickly.
  • Zero tolerance for harassment. This includes stalking, doxxing, threats, sexualized language or imagery, and sustained disruption.
  • Inclusive communication. Use clear, accessible language; avoid slang or idioms that exclude. Respect pronouns and names.
  • Conflicts of interest. Disclose material conflicts. Do not use insider information for personal gain.
  • Security & privacy. Protect confidential data. Follow least-privilege access and report suspected incidents immediately.
  • Responsible disclosure. Report vulnerabilities to support@mingllm.com. Please avoid testing that could harm users or systems.
  • Anti-corruption. Do not offer, request, or accept bribes or kickbacks. Follow applicable anti-corruption laws.
  • Reporting & enforcement. Email support@mingllm.com. Violations may result in warnings, temporary or permanent bans, or termination, at MingLLM’s discretion.

Questions? Contact support@mingllm.com. This Code complements, and does not replace, the Terms of Service.

Safety & Model Use Policy Updated: 2025‑10‑25

Our goal is to enable helpful, safe AI. These rules govern how MingLLM models and products may be used. They apply to all users and API clients.

Acceptable Use

  • Use models for lawful purposes that respect others’ rights and safety.
  • Keep a human in the loop for consequential decisions (e.g., medical, legal, financial, or safety-critical contexts).
  • Disclose AI assistance when it may affect trust, attribution, or accountability.

Prohibited or Restricted Use

  • Illegal activities or harm. No use to plan, commit, or facilitate illegal activities, violence, or self-harm.
  • Dangerous content. No creation or distribution of instructions that meaningfully enable wrongdoing (e.g., weapons construction, malware creation, evasion of safety systems).
  • Child sexual content or exploitation. Absolutely prohibited and will be reported where required.
  • Hate or harassment. Do not generate abusive content targeting protected classes or individuals.
  • Privacy violations. No collection or disclosure of personal data without proper consent or legal basis. Do not attempt to re-identify anonymized data.
  • Deceptive behavior. No impersonation of individuals or organizations; no undisclosed deepfakes; no coordinated inauthentic behavior.
  • High-risk advice. The service is not a substitute for professional advice. Do not rely on it for medical, legal, or financial decisions without qualified review.
  • Security abuse. Do not probe, scan, or overload MingLLM or third-party systems. Follow our responsible disclosure policy.
  • Circumventing safeguards. Do not attempt to bypass content filters, rate limits, or access controls.

Data & Privacy

  • Only submit data you have the right to share. Remove sensitive data unless explicitly covered by a written agreement.
  • We may use aggregated telemetry to improve safety and reliability. See the Terms of Service for details.

Transparency & Limitations

  • Outputs may be incorrect or outdated. Verify important information from primary sources.
  • Models may reflect biases present in data. Please report harmful outputs so we can improve mitigations.

Reporting

  • Report safety or security concerns to support@mingllm.com (use subject line “SAFETY” or “SECURITY”).

This policy is part of the Terms of Service and may change. Material changes will be communicated in-product or via email where appropriate.

FAQ

What stage is MingLLM in?

Alpha. We’re validating arbitration quality, latency, and cost curves before broader rollout.

How do you evaluate model quality?

We compare candidates across multiple models and select with a lightweight judge. Human evals help calibrate domains and prompts.
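
As a rough illustration of how human evals can calibrate a lightweight judge, the sketch below computes per-domain agreement between the judge’s picks and human raters’ picks. The record format and sample rows are assumptions for illustration, not MingLLM’s evaluation data or pipeline.

```python
# Illustrative sketch only: field names and sample records are assumptions,
# not MingLLM's actual evaluation data.
from collections import defaultdict

# Each record pairs the judge's pick with a human rater's pick for the same prompt.
eval_records = [
    {"domain": "coding", "judge_pick": "model-a", "human_pick": "model-a"},
    {"domain": "coding", "judge_pick": "model-b", "human_pick": "model-a"},
    {"domain": "legal",  "judge_pick": "model-c", "human_pick": "model-c"},
]

def agreement_by_domain(records):
    # Fraction of prompts per domain where the judge and the human rater agree.
    totals, matches = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["domain"]] += 1
        matches[r["domain"]] += int(r["judge_pick"] == r["human_pick"])
    return {d: matches[d] / totals[d] for d in totals}

print(agreement_by_domain(eval_records))  # {'coding': 0.5, 'legal': 1.0} for the sample rows
```

Low agreement in a domain signals that the judge needs better prompts or stronger calibration there before it is trusted to arbitrate.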

Are you raising capital?

If you’re an interested investor, reach out below. Materials can be shared under NDA.

Contact investor relations

Email us at support@mingllm.com for intros, diligence requests, and press.
