
Autonomous Coding Agents: Fire and Forget

Autonomous coding agents let you fire off coding tasks and walk away. Safe YOLO mode changes everything.

JACK IVERS ESSAY 6 MIN READ

Willison Causing Trouble Again

This post from Simon Willison hit my radar yesterday morning and struck a chord with me. AI coding tools are a research sweet spot for me–that is, I try very hard to stay current. I’ve been working with the same cloud-based agents from OpenAI and Anthropic that are the subject of Willison’s post.

Simon Willison's code research post

What to Call These Things?

Willison uses the term “asynchronous coding agents” for the category that encompasses OpenAI’s Codex Cloud and Anthropic’s Claude Code on the web. “Asynchronous” didn’t seem ideal to me, so I decided to check in with my friend Opus to see what he knew about the topic. Turns out that Opus 4.1 has a very solid understanding of the concept and how it’s being described. Opus and I prefer autonomous to asynchronous, so: “autonomous coding agents.”

AI Conversation Transcript: Claude.ai

User

I'm about to write about Claude Code for web, Codex Cloud and the like. I'll be referencing this post from Simon Willison which you will need to review carefully. https://simonwillison.net/2025/Nov/6/async-code-research/

My question: is there a better term for Claude Code for web, Codex Cloud and the like than Willison's "asynchronous coding agents"? Please read and research and let me know your thoughts.

Claude.ai

I'll read Simon Willison's article and research this terminology question for you.

[Fetched and analyzed the article at https://simonwillison.net/2025/Nov/6/async-code-research/]

[Performed web searches for "Claude Code for web autonomous coding agent", "autonomous coding agents fire forget coding", and ""delegated coding" "headless coding agent""]

After reviewing Willison's article and researching how the industry is describing these tools, I think his term "asynchronous coding agents" is solid but there are some alternatives worth considering:

Willison defines these as agents that "operate on a fire-and-forget basis" where "you pose it a task, it churns away on a server somewhere and when it's done it files a pull request". The key distinction he makes is between interactive agents (like CLI tools) versus these that run independently.

Looking at how the vendors and community describe them, I found these patterns:

"Autonomous coding agents" seems to be the most widely adopted alternative. Martin Fowler's team uses "autonomous background coding agents" to describe "headless agents that you send off to work autonomously through a whole task". Google prominently uses "autonomous" for Jules, and GitHub, Zencoder, and others follow this pattern.

Pros/cons of each:

  • "Asynchronous" - Technically accurate, emphasizes the parallel execution model. Good for developers who immediately understand async patterns. But might be less intuitive for broader audiences.
  • "Autonomous" - Emphasizes the self-directed, independent nature. More immediately understandable. One article notes this "set it and forget it" capability "defines what makes Jules a true autonomous agent, not just another autocomplete helper".
  • "Background" - Very clear about execution model but sounds less sophisticated.
My take: For your writing, I'd suggest using "autonomous coding agents" as the primary term, with "asynchronous" as a clarifying technical detail. You could introduce it as: "autonomous coding agents (what Willison aptly calls 'asynchronous coding agents')" to acknowledge both terms.

This captures both the independence aspect that matters to users AND gives a nod to the technical async execution model that developers care about. Plus, "autonomous" aligns with how most vendors are positioning these tools as we move toward more agentic AI development.

User

Is the name actually "Claude Code for web" or what?

Claude.ai

Looking at the official sources, the correct name is "Claude Code on the web" (not "for web").

Anthropic's official announcement states: "Today, we're introducing Claude Code on the web, a new way to delegate coding tasks directly from your browser". Their engineering blog also consistently uses "Claude Code on the web".

Simon Willison uses "Claude Code for web" in his article, but that appears to be his informal shorthand rather than the official product name. The actual product is accessed at claude.ai/code and is officially called "Claude Code on the web".

So for your writing, you should use:

  • "Claude Code on the web" (official name)
  • "Codex Cloud" (OpenAI's product - though sometimes just called "Codex")
  • "Jules" (Google's product)
  • "Copilot Workspace" (GitHub's autonomous agent, distinct from regular Copilot)

Background: The Coding Wars

Anthropic has invested heavily in the AI coding space. Its language models surprised many of us by jumping ahead of OpenAI in this space when Claude Sonnet 3.5[1] launched in June 2024. More surprisingly, Anthropic stayed ahead, progressing rapidly through Sonnet 3.7, 4, and most recently Sonnet 4.5 just over a month ago. I have used Anthropic coding models almost exclusively since 3.5 arrived.

Earlier this year, Anthropic jumped deeper into the fray with an actual coding tool, the minimalist coding agent Claude Code, whose motto might be:

Get out of the way, coding tools, and let our great models do their thing.

Not long after Claude Code arrived, it supplanted Cursor[2] as my primary AI coding interface.

So where was OpenAI, the biggest AI player of all? Since Sonnet 3.5 arrived: playing catch-up. I had a lot of respect for OpenAI models generally, for ChatGPT as a product, and for the robustness of their APIs and inference services. But coding belonged to Anthropic.

Not to say that OpenAI was standing still. Their release of GPT-5 this August was huge, clearly a substantial step forward, and some were saying GPT-5 rivaled Anthropic’s coding models. OpenAI then released a coding-tuned model, GPT-5 Codex, in mid-September. And OpenAI had, in a series of bizarre fits and starts, released its own minimalist coding agent, Codex CLI. But to me, this all felt like catch-up: just we-try-to-do-everything posturing from OpenAI.

Coding Agent Autonomy Changes the Game

What drove me to reconsider Codex was an ongoing stream of podcast and YouTube interviews where someone in OpenAI leadership (including president Greg Brockman!) would be interviewed about some other topic, but the conversation would drift over to Codex along the way, and we’d hear how OpenAI was dogfooding Codex internally to build OpenAI’s own products. Here’s an example from OpenAI DevDay on October 7th, where Sherwin Wu and Christina Huang of the OpenAI Platform Team get onto the subject of Codex:


What jumped out at me was that OpenAI’s core internal teams were using Codex heavily, in autonomous agent mode: fire-and-forget. They told stories about going into meetings and being asked to “give me a couple minutes here …” so people could fire off a few Codex Cloud cruise missiles that would grind away during the meeting and have results waiting for them afterwards.

Experiencing a series of legit, unstaged anecdotes like this—I’m pretty good at smelling “marketing message” and these definitely weren’t—my attitude about Codex went from “meh” to “NEED TO INSTALL ASAP.” Before I got to Codex Cloud, though, Anthropic jumped the line and launched its own autonomous coding agent (ACA), “Claude Code on the web”[3]. Since I was in deep with Claude Code already, getting its web counterpart set up was low effort, and CCOTW ended up as my first ACA. Codex Cloud followed soon after.

Fire-and-forget That Works

In OG Claude Code, you can already fire-and-forget. It’s called YOLO mode. You turn off all protections, live dangerously, and let Claude do whatever it chooses, which, in a command line or terminal interface, is pretty much anything. Nuke the system drive? Sure thing. Wipe the GitHub repo? Yep. Shut down your AWS cloud? No problem. Claude Sonnet 4.5 is an ethical AI-person, better than most, but YOLO mode is dangerous and developers avoid it most of the time.
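For the curious, YOLO mode is just a launch flag. A sketch, to the best of my understanding of the current CLI (flag names can change between releases, so check `claude --help` before relying on this):

```shell
# Normal mode: Claude Code pauses and asks permission before each
# potentially dangerous tool use (shell commands, file writes, etc.).
claude "refactor the auth module"

# YOLO mode: every permission prompt is skipped and the agent runs
# whatever it decides to. Only sane inside a sandbox, container, or
# throwaway environment.
claude --dangerously-skip-permissions "refactor the auth module"
```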

We mostly do our core work in non-YOLO mode, which involves carefully watching what the agent is doing, and many, many approvals of the agent’s polite requests: “can I use this potentially dangerous tool, please?” I get a lot of work done when I pair with Claude Code, but I’m close to 100% occupied keeping tabs on the model while I’m pairing.

Codex Cloud autonomy

What OpenAI and Anthropic really did with ACAs is deliver a reasonably safe YOLO mode, by equipping their agents with reasonably secure environments that the agent can go to town in without risking much. “Go to town” means “run whatever you want, safely” – usually called sandboxing – but also “code whatever you want, safely: you can’t break anything.” This works because the agent does its work in its own personal code space, kept safely separate from the current codebase and other work-in-process. Nothing there impacts our real code until we’ve had a chance to carefully look at it and let it in (to be merged).
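The code-isolation half of that story can be sketched with plain git: the agent’s work lives on its own branch (its “personal code space”), and the main branch stays untouched until a human reviews and merges. A minimal illustration, assuming only git and a POSIX shell (the branch and file names here are made up for the demo, not anything the vendors use):

```shell
set -e
dir=$(mktemp -d)                      # throwaway repo for the demo
cd "$dir"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"

git checkout -q -b agent/task-1       # the agent's isolated workspace
echo "new feature" > result.txt
git add result.txt
git commit -q -m "agent work"

git checkout -q main                  # back on main: the agent's
test ! -f result.txt && echo "main untouched"   # changes aren't here

git merge -q --no-edit agent/task-1   # the human "lets it in"
test -f result.txt && echo "merged after review"
```

In the hosted products the same gate is a pull request rather than a local merge, but the principle is identical: nothing lands until you approve it.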

Well-equipped Environments, Well-defined Specs

Willison’s piece dragged me the rest of the way into ACAs; I did the research and set up my first custom Codex Cloud environment just today. Until now, I’ve been living with Codex’s default setup with no customization and conservative default protections.

The actual focus of Willison’s post is using autonomous coding agents for coding research projects, and he walks us through several interesting real-life examples. I like his suggestion to use a separate Git repo for these projects—one more layer of “don’t worry, you can’t break anything.”

The human role in this autonomous new world? Clear, well-thought-out specs that precisely communicate what the agent should build for you. There’s nothing preventing you from having 10 or 20 agents working away—except 10 or 20 good, clear specs.


Crazy panes dev setup

Footnotes

  1. I do know that Anthropic called it Claude 3.5 Sonnet. In this case, I choose “avoid confusion and awkwardness.”

  2. Well, I didn’t abandon Cursor exactly. My current insane, wonderful dev setup looks like this: container app: Cursor; left pane: Codex CLI; center pane: source files; right pane: Claude Code

  3. It’s hilarious how bad both OpenAI and Anthropic are with product names
