Roadmap

What's next.

We publish the roadmap so the tradeoffs are legible. Shipping dates are honest estimates, not promises; scope is the variable.

Q2 2026
shipping

Jarvis voice + Tensor browser public preview

Invite-gated public preview: Jarvis voice, the Tensor browser, and Orb memory on macOS. The first 1,000 invites go out in waves.

Q2 2026
shipping

Receipts log across all surfaces

A unified log of every action MingLLM takes across voice, browser, and code. Undo where safe; review where it matters.

Q3 2026
next

Tensor Code general availability

The CLI coding agent graduates from internal use to general availability: repo-wide context, diff-based edits, and test-runner integration.

Q3 2026
next

8GB memory mode

Current builds target 16GB of unified memory. Q3 lowers the floor so MingLLM runs on 8GB M1 and M2 MacBook Airs at reduced but usable quality.

Q4 2026
next

Gemma-4 27B MoE base

Upgrade the base model from 4B dense to a 27B-total MoE with roughly 4B active parameters per token. Same latency, meaningfully stronger reasoning.
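The "same latency" claim rests on a simple rule of thumb: per-token decode compute scales with *active* parameters, not total parameters. The sketch below illustrates that arithmetic with assumed, illustrative numbers (the shared-weight size, per-expert size, and routing top-k are not published specs).

```python
# Illustrative sketch: why a 27B-total MoE can match 4B-dense latency.
# All sizes below are assumptions for the example, not measured specs.

def active_params(shared: float, per_expert: float, experts_per_token: int) -> float:
    """Parameters actually touched per token in a simple MoE stack:
    shared weights (attention, embeddings) plus the routed experts."""
    return shared + per_expert * experts_per_token

dense_active = 4e9                                # 4B dense: every weight is active
moe_active = active_params(
    shared=1.6e9,          # attention + embeddings (assumed)
    per_expert=0.3e9,      # size of one FFN expert (assumed)
    experts_per_token=8,   # top-k routing width (assumed)
)

# Per-token compute tracks active params, so decode latency stays comparable
# even though total capacity (27B here) is far larger than the dense model.
assert moe_active <= dense_active
```

Under these assumed numbers the MoE activates about the same 4B parameters per token as the dense model, which is why latency holds while total capacity grows.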

2027
later

Windows + Linux

We built for macOS first to keep scope tight. Windows and Linux builds are queued after the v1 release is stable.

2027
later

Shared memory (household + team)

The current Orb is strictly single-user. Multi-principal memory — shared between family members or small teams — is a 2027 research project.