Roadmap
What's next.
We publish the roadmap so the tradeoffs are legible. Shipping dates are honest estimates, not promises; scope is the variable.
Jarvis voice + Tensor browser public preview
Invite-gated public preview. Jarvis + Tensor + Orb memory on macOS. First 1000 invites go out in waves.
Receipts log across all surfaces
Unified log of every action MingLLM takes across voice, browser, and code. Undo where safe, review where it matters.
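The undo-where-safe / review-where-it-matters split implies each logged action carries a reversibility flag. A minimal sketch of that idea, assuming hypothetical names (`Receipt`, `ReceiptsLog`, the field names) that are illustrations, not MingLLM's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical receipts-log entry; all names here are assumptions
# for illustration, not MingLLM's real data model.
@dataclass
class Receipt:
    surface: str    # "voice", "browser", or "code"
    action: str     # human-readable description of what was done
    undo_safe: bool # True if the action can be reversed automatically
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

@dataclass
class ReceiptsLog:
    entries: List[Receipt] = field(default_factory=list)

    def record(self, surface: str, action: str, undo_safe: bool) -> Receipt:
        receipt = Receipt(surface, action, undo_safe)
        self.entries.append(receipt)
        return receipt

    def undoable(self) -> List[Receipt]:
        # Only undo-safe actions are offered for one-click undo;
        # everything else surfaces for review instead.
        return [r for r in self.entries if r.undo_safe]

log = ReceiptsLog()
log.record("browser", "Closed 12 stale tabs", undo_safe=True)
log.record("code", "Pushed commit to main", undo_safe=False)
print(len(log.entries), len(log.undoable()))
```

The key design point is that reversibility is recorded at write time, per action, so the review surface can filter without re-deriving what is safe to undo.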
Tensor Code general availability
The CLI coding agent graduates from internal use to general availability. Repo-wide context, diff-based edits, test-runner integration.
8GB memory mode
Current builds target 16GB unified memory. Q3 lowers the floor so MingLLM runs on 8GB M1 / M2 Airs at reduced but usable quality.
Gemma-4 27B MoE base
Upgrade the base model from a 4B dense model to a 27B-parameter MoE with ~4B parameters active per token. Same latency, meaningfully stronger reasoning.
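The same-latency claim follows from how MoE routing works: only a few experts run per token, so compute scales with active parameters while total capacity grows. A back-of-envelope sketch; the expert count, top-k, and shared/expert split below are invented for illustration, with only the 27B-total / ~4B-active shape taken from the roadmap:

```python
# Illustrative MoE parameter arithmetic. All concrete figures
# (16 experts, top-2 routing, the parameter split) are assumptions
# chosen to land near a 27B-total / ~4B-active configuration.
def moe_params(shared_b: float, n_experts: int,
               expert_b: float, top_k: int) -> tuple:
    """Return (total, active) parameter counts in billions."""
    total = shared_b + n_experts * expert_b   # everything stored in memory
    active = shared_b + top_k * expert_b      # what actually runs per token
    return total, active

# e.g. ~1B shared params, 16 experts of ~1.625B each, top-2 routing:
total, active = moe_params(shared_b=1.0, n_experts=16,
                           expert_b=1.625, top_k=2)
print(f"total ~ {total}B, active ~ {active}B per token")
```

Per-token compute tracks the active count, which is why a 27B MoE can match a ~4B dense model's latency, though the full 27B still has to fit in (or stream through) memory.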
Windows + Linux
We built for macOS first to keep scope tight. Windows and Linux builds are queued after the v1 release is stable.
Shared memory (household + team)
The current Orb is strictly single-user. Multi-principal memory — shared between family members or small teams — is a 2027 research project.