AGI Is a Rising Tide, Not a Moment

April 30, 2026

[Image: Ocean waves rolling onto a shore at sunset]

I keep hearing the same question in different forms: "When will AGI arrive?" "Are we close?" "What's the timeline?"

The framing assumes there's a moment — a threshold, a press release, a model card. Some specific Tuesday in some specific year when we cross from "powerful AI" into "general intelligence."

I don't think it works that way.

AGI is a rising tide, not a finish line. What we're living through is a bar that keeps climbing — quietly, unevenly, but relentlessly — and at each level it absorbs another category of human work. The waterline has already passed most people. We just don't notice, because we keep redrawing the shoreline.

The bar moves faster than our definitions

The classic AGI pattern: someone writes down a definition. AI clears it. We declare the definition was never serious in the first place.

  • "AGI will pass a graduate-level exam." Done. Moved on.
  • "AGI will write working code from natural language." Done. Now we want it to write tasteful code.
  • "AGI will produce original research." We're staring at this one and pretending we're not.

Each goalpost was earnest when proposed. Each one fell. The honest reading isn't that AI keeps undershooting AGI — it's that the definition of AGI has been quietly chasing the model frontier upward, the way the horizon recedes when you walk toward it.

That's the first signal we're not waiting for a moment. We're swimming in one.

Wave 1: the work that already disappeared

The first wave is mostly behind us, and it's barely talked about because the work it touched wasn't glamorous to begin with.

  • Customer support tier 1. The "did you try restarting it" layer is largely automated. Real humans now mostly handle escalations.
  • Routine translation. Not literary translation — bulk translation, the kind that used to be a freelance economy.
  • Basic copy. Product descriptions, email blasts, ad variations. There's a whole industry of "generate 50 versions of this headline" that quietly evaporated.
  • Structured data entry. The cousin of customer support: turning unstructured stuff into rows.

These jobs didn't make headlines when they shrank, because the people doing them weren't writing essays about it. But the displacement is real, and it followed a clean pattern: AI takes the work that's high-volume, low-stakes, and pattern-heavy first.

The bar at this stage was roughly "be reliably mediocre at a narrow task." Not impressive. Just sufficient.

[Image: A headset and microphone in front of a laptop on a desk]

Wave 2: the white-collar crunch

The bar is now somewhere in the middle of the white-collar pyramid, and you can feel the water rising in real time.

  • Programmers. Not "all programmers are replaced" — but the shape of the job has changed. Boilerplate, scaffolding, type wiring, refactors, test generation: the model does most of it. The senior engineer is still indispensable. The junior engineer is suddenly competing with a tool that doesn't get tired.
  • Finance. Modeling, comping, deck-building, due diligence summaries. The work that used to fill an analyst's first two years now takes a long afternoon.
  • Legal. Contract review, citation checking, brief drafts, discovery. The big-firm associate model assumes a pyramid of cheap labor doing rote work. AI eats the bottom of the pyramid first.

The pattern across all three: the judgment part is still human, but the production part is no longer the bottleneck. Which means the human role compresses upward — toward decisions, taste, accountability — and shrinks in headcount.

The bar at this stage: "be good enough at structured intellectual work that a human only needs to verify, not produce."

Wave 3: researchers, and the recursion nobody wants to name

The third wave is the strangest one, and it's already starting.

Every top lab — every single one — is using AI to train the next generation of AI. Not as a metaphor, not as a marketing line. Literally:

  • AI generates synthetic training data that humans can no longer match in volume or quality.
  • AI proposes architecture variations and ranks experiments.
  • AI writes the eval harnesses and grades the outputs.
  • AI debugs AI.
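
The loop those bullets describe can be caricatured in a few lines. This is a deliberately toy sketch — nothing here resembles real training infrastructure, and every function name is illustrative — but it shows the structural point: when the model is both the generator and the grader of its own training signal, capability compounds without a human in the inner loop.

```python
import random

def generate_candidates(capability, n=50):
    # The current model proposes training examples; their quality
    # tracks the model's own capability (illustrative stand-in).
    return [random.gauss(capability, 1.0) for _ in range(n)]

def grade(candidates, capability):
    # The model also acts as the grader: keep only examples
    # that clear its current bar.
    return [c for c in candidates if c > capability]

def train_next_generation(capability, accepted):
    # Fold the accepted examples back in; capability moves
    # partway toward their mean.
    if not accepted:
        return capability
    target = sum(accepted) / len(accepted)
    return capability + 0.5 * (target - capability)

random.seed(0)
capability = 1.0
for generation in range(10):
    pool = generate_candidates(capability)
    kept = grade(pool, capability)
    capability = train_next_generation(capability, kept)

print(round(capability, 2))
```

Because the grader only accepts examples above the model's current level, each generation's training target sits strictly above the last — the humans set up the loop, but they are no longer the thing supplying the improvement.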

This is the recursion that the AGI debate keeps tiptoeing around. The most cognitively demanding job in the world — frontier ML research — is the first place where AI is being deployed as a peer collaborator, not a tool. Not because researchers are easy to replace, but because they're the people closest to the model and the first to notice it's good enough to help.

The bar at this stage: "be useful inside the loop that produces the next version of you." Once a system clears that bar, the curve starts bending in a different way. The thing improving the model is no longer just the humans.

[Image: A researcher in a white coat working at a chemistry lab bench]

The gaokao test, and what it does to the conversation

In China, intelligence has a culturally agreed yardstick: the gaokao. One test, one score, one ranking against everyone else your age. It's reductive — and it's also widely accepted as a proxy for general cognitive horsepower.

If you take that yardstick seriously and apply it honestly, the answer is uncomfortable: today's frontier models would beat 99% of test-takers on most subjects. Not "could probably do okay." Sit at the desk, four hours, closed book — they would crush it.

By the metric a billion-plus people use to sort intelligence, the bar was cleared a while ago. Quietly.

I'm not saying the gaokao measures everything that matters. I'm saying: when the operationalized definition of intelligence in the world's largest education system has already been beaten, and the conversation is still "but is it really intelligent?", the conversation is the thing that's broken — not the bar.

This is the move we keep making. We pick a definition, AI clears it, and we say "ah, but real intelligence is something else." The shoreline keeps receding. The water keeps rising.

Why we keep moving the goalposts

I think there are three honest reasons we redraw the line:

  1. Identity. "Intelligence" is the trait we use to feel distinct from other species and from machines. Conceding the bar feels like conceding something about ourselves. Easier to redefine the bar.
  2. Specificity bias. Once you watch a model do something, you can see how it does it — and the seeing makes it feel mechanical. Anything we understand the recipe for stops feeling intelligent. (We did this with chess. We're doing it with code. We'll do it with research.)
  3. Sample bias. The people writing about AGI are mostly people in cognitively demanding jobs. The work that's already gone — call centers, copy shops, translation desks — wasn't their work. So in their lived experience, AGI hasn't arrived yet. From the desk of the displaced, it has.

None of these are bad-faith. They just explain why "AGI is coming" survives as a narrative even when the evidence keeps suggesting "AGI is already creeping past your shins."

What "already here" actually means

Saying AGI has arrived doesn't mean every job ends tomorrow, or that we've built the science-fiction version. It means something more boring and more important:

  • The capability frontier is no longer a question of "if." It's a question of "where."
  • The right metaphor isn't "the day AGI launches." It's "the year your particular waterline rises above your ankles."
  • Your timeline as a worker, an investor, or a builder isn't governed by a milestone. It's governed by which wave you're standing in.

If you're early in the curve (support, copy, basic translation), the water is at your waist. If you're middle (engineering, analyst work, junior legal), it's at your knees and rising fast. If you're late (research, taste-driven judgment, accountable decisions), you can still see the beach — but the tide is coming.

The mistake is waiting for an announcement. There isn't going to be one. There will just be more weeks like the ones we've already been having, where "I can't believe it can do that now" gets quietly internalized within a month.

Closing thought

AGI was never going to be a doorway. It was always going to be a tide. The question isn't whether it has arrived. The question is which of us is still pretending we can't feel the water.