
On May 14, 2026, OpenAI rolled out a preview of Codex inside the ChatGPT mobile app on iOS, iPad, and Android, available on every plan including Free and Go. That same day, Remote SSH and Hooks moved to GA. The codebase never leaves your machine. Your phone becomes the surface where you start, steer, approve, and review work that runs on a paired Mac. For solo operators, this is the moment asynchronous coding stopped being a workflow trick and became the default shape of the day.
Read the architecture carefully, because the headlines miss it. The agent did not move to the phone. The controller moved to the phone. The agent stays on the paired Mac running the Codex desktop app. The phone is a thin live view over every active Codex thread on that host: prompt entry, model switching, file diffs, terminal output, screenshots, approvals – all streaming in real time from the Mac.
Pairing is a QR scan from the host plus a passkey or SSO in ChatGPT. Fifteen minutes end to end.
The constraint worth flagging: today the phone can only connect to a Codex desktop running on macOS. Windows host pairing is "coming soon" per Thurrott's coverage. Until that ships, the mobile preview is functionally Mac-only.
Before this week, "async coding from your phone" meant SSH into tmux or remote desktop into your laptop. Both options force you back into a typing posture on a screen that was never designed for it. You stop doing the thing you were doing – gym, meeting, errand – and you sit down to type.
The mobile preview is the first build where a thirty-second approval costs thirty seconds and nothing else. The surface is shaped for it. Tap to approve. Swipe back to whatever you were doing. The dev box keeps running.
Here is the reframe most of the coverage misses. The workday of a solo operator running agents is not "hours at the keyboard." It is roughly two minutes of decision sandwiched between five minutes of waiting and ten minutes of context-switching. The decisions are small: yes, ship this PR; no, rewrite this function; use Opus for this one, not Sonnet; stop – that file should not be touched. Until this week, every one of those decisions required being in front of the dev box.
Now the decision detaches from the dev box and follows you. Solo-operator output stops being a function of hours at the keyboard. It becomes:
agent-hours run unattended × speed of human signal return
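That formula is easy to sanity-check with back-of-envelope numbers. The model and every figure below are illustrative assumptions, not benchmarks: the point is that shrinking the delay between an agent blocking and a human answering dominates the output math.

```python
# Hypothetical model of the formula above. Agents sit blocked while
# they wait for a human decision, so effective output is unattended
# agent-hours minus the hours lost to decision latency.

def daily_output(agent_hours: float, avg_signal_delay_min: float,
                 decisions_per_day: int) -> float:
    """Effective agent-hours per day under a given decision latency."""
    blocked_hours = decisions_per_day * avg_signal_delay_min / 60
    return max(agent_hours - blocked_hours, 0.0)

# Desk-bound: decisions wait until you are back at the Mac (~45 min each).
desk = daily_output(agent_hours=10, avg_signal_delay_min=45, decisions_per_day=8)
# Phone-as-controller: a thirty-second approval costs about thirty seconds.
phone = daily_output(agent_hours=10, avg_signal_delay_min=2, decisions_per_day=8)
```

Under these assumed numbers, desk-bound latency burns six of ten agent-hours; the phone controller gives nearly all of them back.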
Hooks going GA the same day is the other half of that math. Scriptable gates: scan prompts for secrets, run a validator before commit, log every shell call, block writes to specific directories. Each Hook you wire up is one fewer manual approval. Fewer manual approvals mean more scale per operator.
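To make the "scan prompts for secrets" gate concrete, here is a minimal sketch of the check such a Hook could run. The function name, invocation shape, and patterns are all illustrative assumptions – the actual Hooks interface is defined in OpenAI's docs, not here.

```python
# Sketch of a prompt gate in the spirit of the Hooks described above.
# prompt_gate() is a hypothetical name; the patterns catch a few
# common secret shapes and are not exhaustive.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # API-key-shaped strings
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key IDs
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # pasted private keys
]

def prompt_gate(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the agent."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)
```

A gate like this runs before the prompt ever reaches the model, which is exactly the manual review it replaces.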
This is the same shift Notion is making on a different surface – see the August 11 credit cliff piece for the workspace-as-agent-control-plane version of it. Different vendor, same physics.
OpenAI did not build a "Codex Cloud" tier that holds your repository in a sandbox. They built a relay that pairs an authenticated phone to an authenticated Mac you already trust. That's the choice worth reading.
Straight from the docs:
Repository files and local documents come from the connected host. Shell commands run on that host or remote environment. Any plugin installed on that host is available when you use Codex remotely. MCP servers, skills, browser access, and Computer Use come from that host's configuration.
Files, credentials, permissions, local setup, plugins, MCP servers, signed-in websites – all stay on the host. The phone holds nothing.
The corollary: if your Mac sleeps, the agent stops. If it loses network, the agent stops. There is no fallback execution context in the cloud. The constraint is a tax on flakiness – you have to keep one Mac awake and online for the agent to be reachable.
For regulated work this is the right design. HIPAA-compliant Codex use for eligible ChatGPT Enterprise workspaces ships today, but only when Codex is used in local environments. The mobile preview qualifies because nothing leaves the host. A cloud-tier Codex would have forced a separate compliance review for half the customer base.
The operator answer to the always-on-Mac requirement is unglamorous and cheap: a Mac mini left on the shelf at home. Around $700 once. Or a managed devbox if you prefer not to own hardware. The Mac mini wins for most.
Codex stays included in ChatGPT Plus, Pro, Business, Edu, and Enterprise. What's new on May 14 is that the mobile preview, Remote SSH GA, and Hooks GA all ship on every plan including Free and Go.
No separate mobile add-on charge. No per-host fee. No minutes meter. Programmatic access tokens are limited to Business and Enterprise, the right place for them – that's where teams burn themselves on token-based CI.
The economic posture is clear: developer surface free, team-scale governance paid. For a solo operator already on Plus or Pro, this is a free upgrade. For a founder weighing Cursor's IDE-bound flow against Codex's anywhere-controller flow, the math tilts toward Codex this week – not on model quality, but on where the work can happen.
If you already pay for ChatGPT Plus or higher and you use a Mac, update Codex desktop and ChatGPT mobile, then pair them. One QR scan plus a passkey. Fifteen minutes.
Pick one repository where the work decomposes cleanly into "agent runs, human approves" loops – a content pipeline, a refactor, a test backfill. Wire Hooks on that repo first. Block .env reads. Require approval for any rm -rf. Force a validator on commit. Every Hook you write is one fewer interruption.
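The two rules above – block .env reads, force approval on rm -rf – decompose into a single pre-execution check. This is a hypothetical sketch, not the real Hooks API: the function name, return values, and lists are assumptions chosen to mirror the rules in the paragraph.

```python
# Illustrative shell-command gate: "block" means never run,
# "approve" means pause for a manual tap, "allow" means proceed.
import shlex

DENY_SUBSTRINGS = ["rm -rf"]   # destructive commands need a human
DENY_FILES = {".env"}          # secrets files are never readable

def shell_gate(command: str) -> str:
    """Classify a shell command as 'block', 'approve', or 'allow'."""
    if any(s in command for s in DENY_SUBSTRINGS):
        return "approve"       # force a manual tap-to-approve
    tokens = shlex.split(command)
    if any(tok.split("/")[-1] in DENY_FILES for tok in tokens):
        return "block"         # .env anywhere in a path is refused
    return "allow"
```

Each rule you encode this way is one fewer notification on your phone.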
Decide whether your daily-driver Mac is the host or whether you buy a Mac mini as the always-on host. If you ever close your laptop, the answer is the Mac mini. $700 buys back the agent-hours you'd otherwise lose to a shut lid.
Stop measuring "lines of code per day" or "hours at the keyboard." Start measuring agent-hours run and approvals issued per day. The second number is the real ceiling on your output now.
Three signals to watch over the next thirty days.
Windows host pairing. Until it ships, the preview is functionally Mac-only and keeps roughly half the developer population out. The day it ships is the day this stops being a niche operator move and becomes table stakes.
Whether OpenAI extends the relay model to let one host control another for full failover. The docs already hint at it. If your "host" can be an always-on Mac mini that hands off to your laptop when you sit down, the always-on requirement becomes invisible.
The Anthropic response. Claude Code does not have a first-party mobile controller today. If Anthropic ships one in the next thirty days – and the Stainless acquisition suggests they are investing in the surfaces layer – then asynchronous-coding-by-default becomes vendor-neutral and the choice collapses back to model quality. If they don't, the gap is real and it compounds for every week it stays open.