omi by BasedHardware is an ambient AI platform that runs continuously in the background, capturing screen activity and audio conversations to build a persistent second-brain context layer. With 10K+ GitHub stars (+617 in a single trending day, April 18, 2026) and 300,000+ professional users, it occupies a distinct position in the AI assistant space: not a tool you invoke, but one that observes passively and surfaces relevant context when needed. It is open-source (MIT), cross-platform (macOS, iOS, Android, wearables), and model-agnostic via OpenAI-compatible APIs. The hardware side, the Omi necklace and Omi Glass, extends ambient capture into physical space, recording conversations you have away from your desk. Think of it as persistent memory infrastructure for knowledge workers who currently lose context the moment they close their laptop.
omi is an ambient AI platform — not an assistant you open and close, but a persistent observer that runs in the background across all your devices. It captures your screen activity and conversations continuously, transcribes them in real time, generates summaries and action items automatically, and gives you an AI chat that can answer questions about anything you’ve seen or heard, going back as far as your capture history extends.
The pitch: “a 2nd brain you trust more than your 1st.” At 10K+ GitHub stars and 300,000+ professional users, it has found an audience in knowledge workers who constantly lose context across meetings, browser sessions, and device switches.
Continuous capture: omi monitors screen activity and audio conversations without requiring manual activation. Everything is transcribed and indexed. The capture runs passively — you don’t need to decide what’s worth recording.
Persistent memory: Unlike per-session AI tools, omi’s memory persists indefinitely across devices. Ask it “what did we decide about the API architecture last Tuesday?” and it can reference your meeting transcript, browser sessions, and screen captures from that period.
Multi-device sync: Context captured on your Mac syncs to your phone. Conversations captured by an Omi wearable sync to your desktop. The context layer spans physical and digital environments.
Community plugins: An extensible plugin ecosystem lets third-party developers add integrations — task management, CRM sync, project management tools, and custom notification systems.
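A hypothetical sketch of the plugin pattern such an ecosystem implies: third-party handlers subscribe to events (a new memory, a finished transcript) and react, e.g. by creating a task or syncing a CRM. omi's real plugin interface is webhook-based and differs from this; the event names and functions here are illustrative only.

```python
# Illustrative plugin-registry pattern, not omi's actual plugin API.
from typing import Callable

_plugins: dict[str, list[Callable[[dict], None]]] = {}

def on_event(event: str):
    """Decorator registering a plugin handler for an event type."""
    def register(fn: Callable[[dict], None]):
        _plugins.setdefault(event, []).append(fn)
        return fn
    return register

def emit(event: str, payload: dict) -> None:
    """Fan an event out to every plugin subscribed to it."""
    for handler in _plugins.get(event, []):
        handler(payload)

created_tasks = []

@on_event("memory.created")
def task_sync(memory: dict) -> None:
    # A task-management plugin: turn detected action items into tasks.
    for item in memory.get("action_items", []):
        created_tasks.append(item)

emit("memory.created", {"action_items": ["Ship v2 API docs"]})
print(created_tasks)  # ['Ship v2 API docs']
```

The design choice this illustrates: plugins consume a normalized event payload rather than raw audio or pixels, so integrations stay decoupled from the capture pipeline.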
BasedHardware manufactures physical Omi devices (necklace and glasses) that extend ambient capture beyond desktop and phone. The necklace records conversations in physical spaces — meetings, calls, daily interactions — with the same transcription and summarization pipeline as the software. For users whose most important context happens away from screens, the hardware closes the capture gap.
Open-source under the MIT license, with a self-hostable Python/FastAPI backend built on Firebase, Pinecone, Redis, Deepgram, and OpenAI-compatible LLM APIs. The self-hosted path matters: ambient capture of all screen activity and conversations produces highly sensitive data, and on-premises deployment keeps it under the user’s control.
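"Model-agnostic via OpenAI-compatible APIs" means, in practice, that the backend only needs a base URL, a model name, and the standard `/chat/completions` payload shape. The sketch below builds such a request; the localhost URL and model name are placeholders for whatever endpoint (vLLM, Ollama, OpenAI itself) a self-hosted deployment points at, and `build_chat_request` is an illustrative helper, not part of omi's codebase.

```python
# Sketch of an OpenAI-compatible chat request against a self-hosted
# endpoint. URL and model name are placeholder assumptions.
import json

def build_chat_request(base_url: str, model: str,
                       question: str, context: str) -> tuple[str, bytes]:
    """Return the (endpoint, JSON body) for an OpenAI-compatible chat call."""
    endpoint = f"{base_url.rstrip('/')}/chat/completions"
    body = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Answer using this captured context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }
    return endpoint, json.dumps(body).encode()

endpoint, body = build_chat_request(
    "http://localhost:8000/v1",      # hypothetical self-hosted server
    "llama-3.1-8b-instruct",         # any model the endpoint serves
    "What did we decide about the API architecture last Tuesday?",
    "Tue: agreed to version the API under /v2 behind a gateway.",
)
print(endpoint)  # http://localhost:8000/v1/chat/completions
```

Because the request shape is the de facto standard, swapping providers is a configuration change rather than a code change, which is what makes the privacy-sensitive self-hosted deployment practical.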
omi is aimed at knowledge workers and developers who regularly lose context across meetings, devices, and sessions and want an ambient layer that accumulates memory without explicit logging or note-taking; at professionals whose most important information lives in verbal conversations (meetings, calls) rather than in documents; and at power users comfortable with ambient capture who want their context available on every device they use.