AGI and the Future of Civilization: The Systems-Level Response We Need

You have to build the brakes before you need them—and make them strong enough to hold.

There’s a moment in history that feels uncomfortably familiar right now.

It’s 1933. Leo Szilard, a Hungarian physicist and refugee from rising fascism, is walking the rainy streets of London. He has just read a newspaper report of Ernest Rutherford dismissing the idea of harnessing atomic energy as “moonshine.” And then, with terrifying clarity, he sees the rebuttal: a nuclear chain reaction. The unleashing of energy on a scale humanity has never before controlled.

Szilard doesn’t run to publish. He doesn’t celebrate his brilliance. He panics. He races to patent the discovery, not to profit, but to hide it. To keep it out of the wrong hands.

But even Szilard, as prescient as he was, underestimated the gravitational pull of human ambition. When it became clear that Nazi Germany might be pursuing nuclear weapons, Szilard pushed the United States to act first. He worked on the Manhattan Project, convincing himself that this was the lesser evil. Then, after Germany surrendered and the bomb kept barreling toward deployment in Japan, he tried to stop it. He drafted a petition to President Truman begging for restraint.

The petition was ignored. Hiroshima happened anyway. Then, the technology spread. More bombs. New bombs. Deadlier bombs.

This is the trap of knowing better. Szilard saw the danger. He tried to change course. But he could not build systems strong enough to slow the momentum. His knowledge alone wasn’t enough. His intentions alone weren’t enough. His warnings alone weren’t enough.

The Bet Behind AGI

Talk to the people building artificial general intelligence today, and you will find almost eerie echoes of Szilard’s world. They believe in the power of what they’re creating. Many of them are deeply thoughtful, even worried. They know AGI could spiral beyond human control. They say so, out loud.

But they are trapped in the same logic Szilard fell into: if they don’t build it, someone else will.

So they race forward, caught between fear of missing out and fear of what happens if they fall behind. And in doing so, they place two reckless bets. The first is that they will make so much money in the run-up to AGI that they will be insulated from its fallout. The second is that they will retain control over the intelligence they unleash.

Both are bets history warns against.

We’ve been here before. Szilard lived this story firsthand. Once the race begins, it is nearly impossible to stop. Good intentions are not enough. Private warnings are not enough. You need systems strong enough to say no, even when every incentive screams yes.

What Real Control Requires

So, let’s be clear. If you were serious about keeping AGI under control, you wouldn’t just talk about safety. You wouldn’t create teams that write reports destined for internal folders and public relations brochures.

You’d build systems with teeth:

  • Structural independence: Safety teams answer to independent authorities, not to the CEO or growth leaders.
  • Binding authority: The power to delay, stop, or fundamentally alter deployment plans.
  • Aligned incentives: Rewarding safety work as much as speed and revenue.
  • Transparency and external accountability: Public disclosures. External audits. Independent oversight.
  • Redundancy: Multiple, overlapping layers of defense.
  • Legal enforcement: Laws, not just norms, that demand responsible behavior.
  • Cultural centrality: A shared belief, baked into the company’s DNA, that safety is not a drag on progress. It is progress.

This isn’t hypothetical. We know how to do this. We’ve built these kinds of systems before, in industries like aviation and nuclear energy. But they require real sacrifices. They slow you down. They distribute power. And they work only if you build them before you need them.

OpenAI: The Best Intentions, the Same Old Story

OpenAI, perhaps more than any other lab, started with a nod to this history. They capped profits. They spoke seriously about risk. They built a team called “Superalignment” tasked with solving the control problem.

But when it mattered, the system cracked.

  • The Superalignment team lacked the power to say no. Its leaders, Jan Leike and Ilya Sutskever, eventually resigned, with Leike lamenting that “safety culture and processes have taken a backseat to shiny products.”
  • When OpenAI’s board tried to remove CEO Sam Altman, citing concerns about the company’s direction, the backlash was immediate. Investors and employees revolted. The board was quickly replaced. Altman returned.
  • OpenAI reportedly pressured departing employees with restrictive exit agreements, prioritizing secrecy over accountability.

This is not a story of ignorance. It’s a story of inevitability. OpenAI saw the risks as clearly as Szilard saw the chain reaction. But like Szilard, they found that seeing the danger is not the same as stopping it.

The Trap of Knowing Better

Leo Szilard spent the last chapters of his life trying to design better systems. He lobbied for arms control, pushed for international cooperation, and switched fields entirely to biology—searching for ways to build systems resilient enough to handle human fallibility.

He understood something crucial: you cannot rely on individual wisdom to save you from systemic momentum. You need structural brakes. And you need them early.

That raises the uncomfortable question: if we know this, if we have historical warning after historical warning, why do smart, well-meaning people still fall into the same traps?

A psychologist might say it’s our brains betraying us. We are wired to prioritize immediate rewards over distant risks. Building the next great invention offers status, wealth, meaning. The dangers feel theoretical, far-off, someone else’s problem. And when competition heats up, that risk calculation warps even further. We rationalize. We tell ourselves, “If not me, then someone else, and they’ll be worse.”

A moral philosopher might add that this is a failure of systems ethics. Individuals caught in a reckless system tell themselves stories that make their choices feel noble. They say, “At least if I win, I can do it responsibly.” But the system doesn’t care about individual virtue. The system rewards speed and punishes hesitation.

The truth is: no one feels fully responsible. The danger comes from the momentum of the system, not any single choice within it.

That’s why Szilard tried to change the system itself. That’s why OpenAI’s failure is not about bad people making bad choices. It’s about smart, thoughtful people caught in a system whose momentum outpaces their wisdom.

The lesson Szilard left us is clear: you have to build the brakes before you need them. And you have to make them strong enough to hold, even when everything else tells you to let go.

I have been asked whether I would agree that the tragedy of the scientist is that he is able to bring about great advances in our knowledge, which mankind may then proceed to use for purposes of destruction. My answer is that this is not the tragedy of the scientist; it is the tragedy of mankind. —Leo Szilard
