openhuman/tinyhumansai is a single Rust binary you drop on your machine and run as a personal AI runtime — model orchestration, memory, and a chat shell, all local, all under one process. The framing the project leads with is “personal AI super-intelligence”: not an enterprise agent, not a cloud control plane, but a private, simple shell that one person owns end-to-end. It launched on May 12, 2026 and hit GitHub trending the same day at +1,042 stars in the first 24 hours. Architecturally, it sits in the same shell-layer slot that Anthropic is now contesting with Cowork and the Claude Code Agent View research preview — but instead of routing through hosted Anthropic infrastructure, openhuman wires together local model backends (llama.cpp / mlx / candle), an on-disk memory store, and a TUI chat surface in one statically linked Rust executable. The bet is that the same buyer who could pick the Anthropic-shell stack will instead pick a private, simple binary they fully control — the same trade-off that drove the early local-LLM wave, now applied to the multi-agent shell layer. It pairs naturally with cc-switch (provider switching), agentmemory (persistent memory substrate), and AionUi (multi-CLI chat shell) as the open-stack alternative to a hosted Anthropic-dispatched workflow.
openhuman/tinyhumansai is a Rust-binary local-AI shell that hit GitHub trending on its launch day (May 12, 2026) with +1,042 stars in the first 24 hours. The pitch is unusually direct: a single executable that is your personal AI runtime, framed as “personal AI super-intelligence.” No login, no cloud, no SaaS. You download the binary, run it, and you have a chat shell, persistent memory, and a model orchestration layer — all in one process under your full control.
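The single-process shape described above — a model backend, a memory store, and a chat loop owned by one runtime — can be sketched in Rust. Everything here is illustrative: `ModelBackend`, `MemoryStore`, and `Runtime` are hypothetical names, not openhuman's actual API, and the echo backend stands in for a real llama.cpp / mlx / candle binding.

```rust
// Hypothetical sketch of a single-binary AI runtime: one process that owns
// the model backend, the memory store, and the chat turn loop.
// None of these types come from openhuman; they only show the shape.

trait ModelBackend {
    fn complete(&self, prompt: &str) -> String;
}

// Stand-in for a local inference binding (llama.cpp / mlx / candle).
struct EchoBackend;

impl ModelBackend for EchoBackend {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

// In the real thing this would be on-disk; kept in memory for the sketch.
struct MemoryStore {
    entries: Vec<String>,
}

impl MemoryStore {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }
    fn remember(&mut self, line: &str) {
        self.entries.push(line.to_string());
    }
    fn recall(&self) -> String {
        self.entries.join("\n")
    }
}

// The runtime composes both layers behind one chat-turn entry point.
struct Runtime<B: ModelBackend> {
    backend: B,
    memory: MemoryStore,
}

impl<B: ModelBackend> Runtime<B> {
    fn turn(&mut self, user: &str) -> String {
        self.memory.remember(user);          // persist the new message
        let context = self.memory.recall();  // rebuild context from memory
        self.backend.complete(&context)      // run local inference
    }
}

fn main() {
    let mut rt = Runtime {
        backend: EchoBackend,
        memory: MemoryStore::new(),
    };
    println!("{}", rt.turn("hello")); // prints "echo: hello"
}
```

The point of the sketch is the ownership model: because the backend is a generic parameter rather than a network client, the whole loop lives in one statically linked executable with no hosted control plane in the path.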
The week openhuman launched is the same week Anthropic moved decisively into the agent-shell layer with Cowork (the flight-booking demo on Opus 4.7 that, per bcherny, “didn’t fall over”) and the Claude Code Agent View research preview. The agent stack is now being contested at two layers simultaneously: the engine layer (Claude Code, Codex, DeepSeek-TUI) and the shell layer (Cowork on the hosted side; cc-switch, AionUi, openhuman, and the assemble-your-own stack on the open side).
openhuman is the most opinionated open-stack answer to the hosted-shell question. Where AionUi and cc-switch multiplex existing AI CLIs, openhuman ships its own runtime end-to-end as one binary. Where Cowork orchestrates multiple Anthropic agents through a hosted dispatcher, openhuman runs a single local agent under one user account on one machine. The buyer trade-off is identical to the one that drove the first local-LLM wave: you trade peak capability for ownership.
openhuman is not an island. The natural pairing is:
- cc-switch for provider switching
- agentmemory as the persistent memory substrate
- AionUi as the multi-CLI chat shell
This composability is the open-stack pattern: pick the layers you want, swap any one of them, never get locked into a single vendor’s hosted shell.
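The swap-any-layer claim can be made concrete with a trait object. This is a sketch under assumptions: `MemoryLayer`, `pick_memory`, and the two stores are invented names standing in for agentmemory and a home-grown alternative, not real interfaces from any of these projects.

```rust
// Illustrative only: "swap any one layer" as a Rust trait object.
// The shell depends on the trait, not on any concrete store, so either
// implementation plugs in without touching the rest of the stack.

trait MemoryLayer {
    fn name(&self) -> &'static str;
}

struct AgentMemory;    // stand-in for agentmemory
struct FlatFileMemory; // stand-in for a home-grown flat-file store

impl MemoryLayer for AgentMemory {
    fn name(&self) -> &'static str {
        "agentmemory"
    }
}

impl MemoryLayer for FlatFileMemory {
    fn name(&self) -> &'static str {
        "flat-file"
    }
}

// Layer selection happens at one seam; callers never see the concrete type.
fn pick_memory(use_agentmemory: bool) -> Box<dyn MemoryLayer> {
    if use_agentmemory {
        Box::new(AgentMemory)
    } else {
        Box::new(FlatFileMemory)
    }
}

fn main() {
    let mem = pick_memory(true);
    println!("memory layer: {}", mem.name()); // prints "memory layer: agentmemory"
}
```

A hosted shell hides this seam behind its dispatcher; the open-stack pattern is to keep the seam in code you own.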
openhuman is aimed at developers and power users who want a local-first agent runtime with no SaaS dependency, no hosted control plane, and no per-seat licensing — and who value a single Rust binary over a docker-compose stack of microservices. It also suits privacy-sensitive operators (legal, medical, financial) who cannot route prompts through hosted inference, and builders who want to experiment with a fully open multi-agent shell as the architectural alternative to Anthropic’s hosted Cowork model.
For the broader stack-level picture — how Cowork’s launch is forcing the open shell layer to consolidate, and where openhuman fits — see Cowork Just One-Shotted a Flight Booking on Opus 4.7 — Anthropic’s Shell-Layer Move and What It Means for the Agent Stack.