
Really Sam? Code Red? We Have Questions.

Starting with, are you familiar with A Few Good Men?

JACK IVERS · ESSAY · 16 MIN READ

On Monday, Sam Altman sent an internal memo declaring a “Code Red” at OpenAI. This is, apparently, the highest level of internal panic, superseding the previously declared “Code Orange.”

I have questions.

Let’s start with the obvious one: Has Sam Altman seen A Few Good Men?

Because in A Few Good Men, a “Code Red” is not a heroic rallying cry. It’s not the CEO equivalent of “Avengers, assemble!” A Code Red is an illegal hazing order that results in a Marine being gagged, beaten, and killed. The entire movie is about how everyone the Code Red touched (the Colonel who ordered it, the Lieutenant who passed it down, the Marines who carried it out, the officer who tried to stop it) ends up arrested, discharged, or dead by suicide.

A Few Good Men movie poster reimagined for the AI wars

So when the CEO of one of the most valuable startups in history announces he’s ordering a Code Red, my first question is: Did anyone in comms raise their hand?

Sir, I love the energy, but in the movie this ends with Jack Nicholson being dragged out of a courtroom in handcuffs.

Sir, the Code Red literally kills someone.

Sir, the famous line from this movie is ‘You can’t handle the truth,’ which is what the villain screams right before being arrested.

Second question: What color comes after red?

Apparently Altman has a whole color-coded panic system. They were previously at Code Orange. Now they’re at Code Red. Is there a Code Crimson? Code Burgundy? Code Arterial Spray? Code “Everyone Update Your LinkedIn”?

More importantly: Has OpenAI ever been at Code Green? Has there ever been a moment when things were fine? Or has this company been cycling through warning colors since inception like a perpetually malfunctioning stoplight?

Third question: Who is Private Santiago?

In the movie, Santiago is the Marine who gets hazed to death. He was “perceived as not one of the team” because he broke the chain of command to ask for a transfer. He wanted out. So they killed him.

At OpenAI, what’s being hazed? What’s the thing that wanted out, that broke the chain of command, that’s now getting the Code Red treatment?

Candidates:

  • ChatGPT itself (being pushed beyond its limits)
  • The employees (daily check-in calls! temporary team transfers! canceled holidays!)
  • The safety researchers who keep leaving
  • The original nonprofit mission (which has been trying to escape for years)
  • The concept of “taking your time to get it right”
AI-generated image of Colonel Jessep

Fourth question: Does Sam know he cast himself as the villain?

If Sam is declaring the Code Red, that makes him Colonel Nathan Jessep. Jessep is the base commander who orders the Code Red. He believes he’s doing what’s necessary. He thinks the ends justify the means. He’s convinced he’s the only one who truly understands what’s at stake. He gives a legendary speech about how the people criticizing him should be thanking him.

He is arrested in the final scene. He does not see it coming.

Is this casting… intentional? Aspirational? A warning? A cry for help?

Fifth question: Did anyone notice the irony of the timing?

Three years ago, almost to the day, ChatGPT launched and Google declared their own Code Red. Sundar Pichai upended teams. Googlers spent their holidays scrambling. The New York Times reported it was like “pulling the fire alarm.”

Now it’s OpenAI’s turn to cancel Christmas.

The company that triggered the last industry-wide Code Red is now the one declaring it. December is apparently Code Red season in AI, like how flu season hits every winter, except the symptoms are executive panic and mandatory team transfers.

Sixth question: Has anyone at OpenAI actually watched the movie?

I keep coming back to this. The movie ends with:

  • The Colonel arrested
  • The Lieutenant who passed down the order arrested
  • The Marines who carried it out dishonorably discharged
  • One officer dead by suicide
  • The original victim still dead
Jessep vs Kaffee courtroom confrontation

Everyone loses. There is no version of A Few Good Men where the Code Red works out. It’s not a story about a tough-but-necessary decision. It’s a story about how institutional pressure creates crimes that destroy everyone involved.

And Sam Altman said: “Yes. That one. That’s the metaphor I want.”


Casting Call: A Few Good Men 2: AI Edition

Fine. If we’re doing this, let’s do it properly. Here’s the full cast of A Few Good Men, mapped to the AI industry. I’ll be your casting director.

Colonel Nathan Jessep (Jack Nicholson) → SAM ALTMAN

This one casts itself. Jessep is the base commander at Guantanamo Bay. He’s charismatic, brilliant, utterly convinced that he alone understands what’s at stake. When civilians question his methods, he delivers a legendary speech about how they sleep under the blanket of security he provides while questioning how he provides it.

AI-generated image of Colonel Jessep in the courtroom

He orders an illegal action that kills a man. When confronted, he doubles down. He believes he’s the hero.

He’s arrested in the final scene.

Sam Altman has built OpenAI into a juggernaut. He’s convinced the world that AGI is imminent, that trillions in investment are justified, that the race must be won at any cost. He has an answer for every criticism. He never seems to doubt.

Jessep didn’t doubt either.

Lt. Daniel Kaffee (Tom Cruise) → ???

This is the interesting one. Kaffee is the young Navy lawyer assigned to defend the Marines who carried out the Code Red. He’s known for plea bargains and softball settlements. He’s never tried a case. Nobody expects him to actually fight.

Then he does.

The role of Kaffee in the AI version is still open. Candidates:

  • The FTC (seems toothless, might surprise us)
  • Gary Marcus (persistent critic, perpetually underestimated)
  • A journalist we haven’t met yet
  • Some future whistleblower
  • A state Attorney General with ambitions

The key to Kaffee is that he’s supposed to make the problem go away quietly. He’s chosen specifically because he won’t rock the boat. The drama is that he does anyway.

Whoever plays Kaffee in the AI version probably doesn’t know it yet. They’re currently being underestimated.

Lt. Commander Galloway (Demi Moore) → THE AI SAFETY COMMUNITY

Galloway is the one who first suspects something is wrong. She pushes for a real investigation when everyone else wants to move on. She’s technically Kaffee’s superior but has to convince him to take the case seriously. She’s a true believer, perhaps annoyingly so.

This maps cleanly to the AI safety researchers, the ethicists, the people who’ve been raising alarms for years. They’ve been saying the Code Reds are coming. They’ve warned about what gets hazed to death when the race accelerates beyond control.

Some are inside the companies. Some left. Some were pushed out. They keep pushing anyway.

Lt. Jonathan Kendrick (Kiefer Sutherland) → OPENAI MIDDLE MANAGEMENT

Kendrick is the platoon commander who actually passes down the Code Red order. He didn’t originate it, but he’s the chain of command. He makes sure it happens. He believes in the mission.

Every company has Kendricks. The vice presidents and directors who translate executive panic into operational pressure. The people running those “daily calls for employees responsible for improving ChatGPT” that Altman announced. The ones implementing the “temporary team transfers.”

Kendrick believes in discipline. He believes in following orders. When asked if he ordered the Code Red, he says he gave an order and he expected it to be obeyed. He doesn’t grasp the difference between giving an order and being responsible for what the order does.

He gets arrested too, by the way. At the very end. Almost as an afterthought. “What about Kendrick?” “He’s being arrested as we speak.”

Lt. Colonel Markinson (J.T. Walsh) → THE DEPARTED RESEARCHERS

Markinson is the tragic figure. He’s Jessep’s executive officer. He advocated for Santiago to be transferred. He tried to do the right thing. But he couldn’t stop what happened. He meets with Kaffee in secret, feeds him information, tries to help the defense.

When it’s time to testify, he can’t do it. He dies by suicide instead.

Dozens of top researchers have left OpenAI. Mira Murati started her own company called Thinking Machines. Others went to Meta’s AI labs. Some went to Anthropic years ago. Some left quietly. Some left loudly.

Markinson’s arc is about guilt: knowing something is wrong, failing to stop it, being unable to live with the testimony that would be required. The departing researchers might not testify either. But their exits say something. And their silence—or eventual speech—matters.

Captain Jack Ross (Kevin Bacon) → REALITY

Ross is the prosecutor. He’s not evil; he’s just doing his job, presenting the case against the accused. He and Kaffee even respect each other. They have a scene where they acknowledge they’re both just playing their roles in the system.

In the AI version, the prosecutor might just be reality itself. The market. The benchmarks. The revenue projections that don’t add up. The $1.4 trillion in commitments against a company that isn’t profitable.

A Few Good Men AI Wars cast collage

Reality doesn’t have an agenda. It just presents the evidence.

And right now, the evidence is that Gemini 3 is beating GPT-5 on benchmarks, Google’s stock is at record highs, and Marc Benioff publicly announced he’s switching from ChatGPT to Gemini.

Private William Santiago → THE MISSION

Santiago is the victim. He was a Marine who “broke the chain of command,” who didn’t fit the culture, and was killed for it. He wanted a transfer. He wanted out.

At OpenAI, what’s being killed? The original mission. The nonprofit charter. The idea that this company existed to develop AI safely for the benefit of humanity, not to win a benchmark race against Google by Christmas.

Santiago wanted out. The mission at OpenAI has been trying to get out for years.

Lance Corporal Dawson & PFC Downey → THE AI MODELS (OR THE ENGINEERS)

The two Marines who actually carry out the Code Red. They insist they were just following orders. The court finds them not guilty of murder but guilty of “conduct unbecoming.” They’re dishonorably discharged.

The most haunting moment in the movie: Downey asks what they did wrong. Dawson’s answer: “We were supposed to fight for people who couldn’t fight for themselves.”

The AI models are the ones being ordered to “surge.” The engineers are the ones being transferred and put on daily calls. They’re following orders. They believe in the mission.

The question is whether they’ll understand what they did wrong.

Sundar Pichai → THE PREVIOUS DEFENDANT

Three years ago, Pichai declared his own Code Red at Google. His teams worked through the holidays. Everyone predicted Google would never catch up.

Then they did. Gemini 3 just launched. Record benchmarks. Stock at all-time highs.

Pichai is like a guy who was accused of ordering a Code Red, but the case fell apart, and now he’s watching the next trial from the gallery. Sympathetic, maybe. But not sorry.


Great Scenes: The AI Wars Cut

Let’s run some scenes.

Scene 1: “You Can’t Handle The Truth”

[Interior: Courtroom. KAFFEE is cross-examining JESSEP/ALTMAN on the stand.]

KAFFEE: Did you order the Code Red?

ALTMAN: I did what was necessary to maintain ChatGPT’s position in a rapidly evolving—

KAFFEE: Did you order the Code Red?

ALTMAN: You want answers?

KAFFEE: I want the truth!

ALTMAN: You can’t handle the truth!

[The courtroom stirs. ALTMAN rises, addressing not just KAFFEE but the gallery, the cameras, the world.]

AI-generated image of Colonel Jessep as Sam Altman declaring 'You can't handle the truth'

ALTMAN: Son, we live in a world that has APIs, and those APIs have to be served by models running on GPUs. Who’s gonna do it? You? You, Sundar Pichai? I have a greater responsibility than you can possibly fathom.

You weep for the nonprofit mission, and you curse OpenAI. You have that luxury. You have the luxury of not knowing what I know: that the mission’s death, while tragic, probably saved market share. And my existence, while grotesque and incomprehensible to you, ships products.

You don’t want the truth because deep down in places you don’t talk about at parties, you want me at that API. You need me at that API.

I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of the very AI that I provide and then questions the manner in which I provide it!

I would rather you just said “thank you” and went on your way. Otherwise, I suggest you pick up a keyboard and ship some code. Either way, I don’t give a damn what you think you’re entitled to!

Jessep vs Kaffee courtroom confrontation

KAFFEE: Did you order the Code Red?

ALTMAN: I did the job I was—

KAFFEE: Did you order the Code Red?!

[Long pause. ALTMAN smiles.]

ALTMAN: You’re goddamn right I did.

[MPs move in.]

Colonel Jessep being arrested by MPs

Scene 2: The Secret Meeting

[Exterior: Washington, D.C. Night. A shadowy parking garage. KAFFEE waits. A figure emerges from the darkness. It’s MARKINSON—a former senior researcher who left six months ago.]

MARKINSON: You shouldn’t have come.

KAFFEE: You called me.

MARKINSON: [looking around nervously] Jessep—Altman—he never planned to actually make it safe. The safety reviews, the red-teaming, the careful deployment… it was all theater. The board asked about safety timelines and he said what they wanted to hear.

KAFFEE: Can you prove it?

MARKINSON: I was in the room. I have emails. I have Slack messages. I have recordings of all-hands meetings where he said one thing and then did another.

KAFFEE: Then testify.

[MARKINSON is silent for a long moment.]

MARKINSON: You don’t understand. He has… reach. The investors. The media. The politicians who want to be on the right side of AI. If I testify, I’m not just ending my career. I’m becoming the person who killed the golden goose. I’m the guy who ruined AI for everyone.

KAFFEE: And if you don’t testify?

MARKINSON: [quietly] Then he keeps doing what he’s doing. And eventually… eventually, someone or something gets hurt a lot worse than one nonprofit mission.

[MARKINSON turns to leave.]

KAFFEE: Markinson.

[MARKINSON pauses.]

KAFFEE: Santiago—the mission—it’s already dead. The only question now is whether Jessep gets away with it.

[MARKINSON doesn’t turn around. He walks into the darkness. We hear, offscreen, his car door. Engine starting. Driving away.]

[KAFFEE stands alone.]


Scene 3: The Contradiction

[Interior: Courtroom. KAFFEE is cross-examining JESSEP/ALTMAN. The key moment—the trap.]

KAFFEE: Colonel—I’m sorry, Mr. Altman—you said in your memo that ChatGPT needed major improvements to speed, reliability, and personalization, correct?

ALTMAN: That’s correct.

KAFFEE: And you told investors last month that GPT-5 was your most capable model ever.

ALTMAN: Also correct.

KAFFEE: And you’ve stated repeatedly that OpenAI maintains the highest safety standards in the industry.

ALTMAN: We do.

KAFFEE: Then help me understand something. If ChatGPT is so capable and so safe… why the Code Red?

AI-generated image of the courtroom cross-examination

ALTMAN: Competitive pressures require—

KAFFEE: If the model is performing as advertised, Mr. Altman, why did you need to cancel work on advertising, shopping agents, health AI, and your personal assistant—what’s it called—Pulse? Why the “surge”?

ALTMAN: The market moves quickly.

KAFFEE: The market, or the model?

[Silence.]

KAFFEE: You told employees that Google’s Gemini 3 could create “temporary economic headwinds.” You said—and I’m quoting from the leaked memo—“the vibes out there are going to be rough.” Rough vibes, Mr. Altman?

ALTMAN: I was being candid with my team.

KAFFEE: You were being candid because Gemini 3 beat GPT-5 on every major benchmark. You were being candid because Marc Benioff publicly said he was switching to Gemini. You were being candid because the model that was supposed to secure OpenAI’s dominance… didn’t.

ALTMAN: That’s an oversimplification.

KAFFEE: Then let’s simplify it further.

[KAFFEE approaches the witness stand.]

AI-generated image of the courtroom cross-examination

KAFFEE: You’ve told the world—investors, users, regulators—that OpenAI’s models are the best. The safest. The most capable. You’ve made commitments based on that. One point four trillion dollars in infrastructure deals. Two hundred billion in projected revenue.

And now, four months after launching GPT-5, you’ve declared a company-wide emergency to fix it.

[Beat.]

KAFFEE: If the model was what you said it was… you wouldn’t need a Code Red.

And if it wasn’t what you said it was…

[Long pause.]

KAFFEE: …what else have you been wrong about?


Seriously, Though

Strip away the movie references. Here’s what’s actually happening:

The velocity of the reversal is stunning. Three years ago (December 2022) ChatGPT launched and Google panicked. Their teams scrambled through the holidays. Industry observers wondered if Google could ever catch up.

It’s December 2025. Gemini 3 launched two weeks ago to widespread praise. It beat ChatGPT on benchmark tests. Google’s stock surged to record highs. Major tech figures publicly announced they were switching.

And Sam Altman declared a Code Red.

Three years. That’s how long the lead lasted. The AI advantage window is apparently about the length of a phone contract.

OpenAI is admitting ChatGPT isn’t winning anymore. You don’t declare a Code Red when you’re ahead. You don’t cancel your advertising rollout, your health AI agents, your shopping AI agents, and your personal assistant “Pulse” when things are going well.

These weren’t side projects. These were the revenue strategy. The path to justifying all those infrastructure commitments.

GPT-5, released in August, was supposed to be the answer. Instead, it was described as “underwhelming.” OpenAI released a 5.1 update last week described as “warmer” and “more conversational,” with eight different personalities to choose from.

Eight personalities. When the response to competitive pressure is “let users choose a vibe,” something has gone wrong.

The financial exposure is breathtaking. OpenAI has committed $1.4 trillion to AI infrastructure over the next eight years. They project needing $200 billion in revenue by 2030 to hit profitability. Current annual revenue is around $20 billion. The company is not profitable. Most users don’t pay.

Those numbers require ChatGPT to maintain dominance while scaling 10x. If it becomes “one of several good options” rather than “the default,” the math doesn’t work.

And right now, it’s becoming one of several good options.

The talent exodus is accelerating. The Fortune report notes that “dozens of top OpenAI researchers have decamped for former OpenAI CTO Mira Murati’s Thinking Machines and for Meta’s new Superintelligence Labs.”

When your CTO leaves and takes people with her, and researchers keep heading for the exits, the Code Red isn’t just about the product. It’s about whether the people who build the product still believe in what they’re building, or how they’re building it.

The reference might be accidentally perfect. Here’s the thing about A Few Good Men: Colonel Jessep genuinely believes he’s the hero. He believes the Code Red was necessary. He believes the mission requires methods that civilians cannot understand. He believes he’s protecting everyone.

He’s wrong. And he destroys everything (the victim, the Marines who followed orders, his own career, the officer who tried to do the right thing) before he realizes it.

The question isn’t whether Sam Altman is declaring a Code Red. He clearly is. The question is whether he knows how that story ends.

In the movie, Downey, one of the Marines who carried out the order, asks what they did wrong. He genuinely doesn’t understand.

Dawson’s answer: “We were supposed to fight for people who couldn’t fight for themselves.”

The people who can’t fight for themselves, in AI, are all of us. The users who don’t understand transformer architectures. The public told this technology will transform everything. The researchers who raised concerns and were reassigned or shown the door.

Three years ago, OpenAI triggered a Code Red at Google. Now the tables have turned. The company that started the panic is panicking. The hunters have become the hunted.

And somewhere in a parking garage, metaphorically speaking, a lot of former researchers are deciding whether to testify.

AI-generated image of Colonel Jessep

The movie’s on. The cast has been assembled. The only question is whether anyone at OpenAI has watched it all the way to the end.

You can’t handle the truth.

But the truth is coming anyway.
