Human says it
One spoken request, no prompt window, no setup ceremony.
Watch the Mac change
Hold Control + Option, say the task, then watch the result land on screen. ipop.ai opens apps, types, clicks, and launches background agents from one voice command.
"Make a note to call mom tonight."
Notes opened. The note appeared.
Founder demo mode
Built by Gagan Arora. No dashboard. No prompt box. Voice in, visible Mac result out.
Actual beta clip
A short looping clip comes first, because ipop.ai only lands when people can watch the Mac do the thing: hear the request, see the action, verify the result.
Hold Control + Option. Speak while the mic is lit.
Say one useful thing, like "make a note to call mom tonight."
The Mac opens the app and creates a result the viewer can verify.
Then scale the same demo into two visible background sessions.
Why it feels different
You hear the request, see apps move, and verify the after-state yourself. That is the handoff from human intent to real Mac action.
One spoken request, no prompt window, no setup ceremony.
Apps open, text appears, keys press, pages move. You can see the outcome.
For larger tasks, one command can start parallel sessions you can inspect.
Second act
The live demo should escalate from a simple Mac action to a control-room moment: two sessions start, both keep working, and you can inspect what each one found.
"Launch two agents: one finds prospects, one researches similar companies."
Agent 1: running
Agent 2: running
"make a note to call mom tonight"
"open Chrome and search Razorpay pricing"
"summarize what is on this page"
"explain this Xcode error"
"launch two agents: one for prospects, one for competitors"
"open the release checklist and mark the DMG test done"
Security and privacy
ipop.ai can listen, inspect approved screen context, and control apps. The beta page says plainly where that information goes.
Dictation uses the speech provider you configure: the app prefers AssemblyAI when it is set up and can fall back to on-device Apple Speech.
When you ask about visible content, screenshots or screen text may be sent to the selected AI provider so it can answer or choose an action.
App launches, key presses, typing, clicks, and scrolling run through macOS permissions on your machine. A remote server does not control your desktop.
Early builds use providers you configure, including local CLI sessions or API keys. Hosted credits and billing are not live yet.
Interaction
Hold Control + Option while speaking. Release to send.
ipop.ai chooses a local Mac action, an AI answer, or a background agent session.
You see the Mac action, hear the reply, or inspect the spawned session.
Founder beta
Pricing for hosted credits and team features will come after the beta proves the core workflow.
Free beta
$0
Try voice control, local Mac actions, screen-aware help, and light agent runs.
Download beta
Pro
Soon
For hosted credits, heavier parallel agents, and priority support once billing is ready.
Ask about Pro
Current beta
The website points to a real GitHub release asset named ipop-ai-beta.dmg.
Hold Control + Option, speak, then release to send the request.
The beta includes background agent work, with reliability still being hardened.
FAQ
Current beta target: Apple Silicon Mac with macOS 14 or newer.
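A quick way to check whether a Mac meets that target from Terminal. These are standard macOS commands, not part of ipop.ai:

```shell
# Chip check: Apple Silicon reports "arm64"
uname -m

# OS check: the beta targets macOS 14 or newer
sw_vers -productVersion
```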
Microphone for dictation, Accessibility for Mac control, and Screen Recording for screen-aware requests.
During the beta you configure your own providers, and that setup is local to your Mac. Hosted plans will come later.
The beta may trigger a macOS Gatekeeper warning. If that happens, right-click the app and choose Open once.
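If right-clicking is inconvenient, the same Gatekeeper block can be cleared from Terminal by removing the quarantine attribute macOS attaches to downloads. The app path below is an example, not confirmed by the beta docs; adjust it to wherever you installed the app:

```shell
# Remove the quarantine flag so the app opens without the warning
# (path is an assumed install location)
xattr -d com.apple.quarantine "/Applications/ipop-ai.app"
```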
No signup is required; the download button links directly to the DMG. If you email hello@ipop.ai, the message is handled as support mail.
Expect sharp edges: proving background sessions is part of the beta goal, and they are being hardened quickly.
Download
Direct download from GitHub Releases. If macOS blocks the first launch, right-click the app and choose Open.
Download beta DMG