AGI vs. AGI: What It Means for AI and Your Tax Return

    # Silicon Valley's AGI Pep Talk is a Masterclass in Gaslighting

    Let’s get one thing straight. When the CEO of the company building the most powerful AI on the planet tells you that AGI will cause some "scary stuff" but that society will just "adapt," he’s not being reassuring. He’s telling you to brace for impact.

    Sam Altman’s whole "AGI will come, it will go whooshing by" routine is the most terrifyingly casual prophecy I’ve ever heard. It's the verbal equivalent of a pilot announcing, "We're about to fly through a biblical-level thunderstorm that may tear the wings off, but the beverage service will resume shortly." You’re supposed to hear "beverage service" and ignore the part about the wings.

    We're being served two completely insane, contradictory narratives at once. One moment, serious people are drafting papers about AGI as an extinction-level event, putting it in the same category as a planet-killing asteroid or all-out nuclear war (see, for instance, "The Hard-Luck Case For AGI And AI Superintelligence As An Extinction-Level Event"). The next, Silicon Valley’s high priests are on a podcast tour telling us it won't be a big deal.

    Which one is it? Are we building God or just a really good Excel macro? The fact that they can't seem to decide—or, more likely, are deliberately muddying the waters—should tell you everything you need to know.

    ## The Gospel of "Good Enough" Armageddon

    Enter the pragmatists, the "functional AGI" crowd. Replit CEO Amjad Masad says we don't need "true AGI" to change the world; we just need AI that can automate "a big part of labour."

    Let's translate that from VC-speak into English: "Forget creating a conscious, god-like being. That's hard. Let's just focus on the part that makes us billionaires by firing everyone." This is being sold as a less scary alternative. This is the "good news." The goal isn't necessarily to build a digital deity that contemplates the stars, but a tireless, soulless global workforce that doesn't need lunch breaks or healthcare.

    Masad even warns that the industry might be stuck in a "local maximum trap," chasing small, profitable gains instead of a true breakthrough. He says this like it's a bad thing. From where I'm sitting, a bunch of tech bros getting stuck and failing to build Skynet sounds like the best possible outcome for humanity.
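
    Masad's "local maximum trap" is a real optimization idea, and it's easy to see in miniature. Here's a minimal sketch, with a made-up reward curve and step size standing in for "chase whatever gain is nearby": a greedy climber that only ever takes the locally profitable step parks on the small bump forever, while a far bigger peak sits a short walk away.

    ```python
    import math

    # A toy "reward landscape" with two peaks: a small, easily reached bump
    # near x=1 and a much taller peak near x=4. Purely illustrative.
    def reward(x: float) -> float:
        return math.exp(-(x - 1) ** 2) + 3 * math.exp(-((x - 4) ** 2) / 0.5)

    # Greedy hill-climbing: at every step, take whichever neighboring move
    # pays best right now. No long-term planning, no exploration.
    def hill_climb(x: float, step: float = 0.1, iters: int = 100) -> float:
        for _ in range(iters):
            x = max((x - step, x, x + step), key=reward)
        return x

    x_final = hill_climb(0.0)
    print(round(x_final, 2), round(reward(x_final), 2))  # 1.0 1.0: parked on the small bump
    # The taller peak at x=4 pays ~3.0, but no greedy step ever crosses
    # the valley between the two peaks to reach it.
    ```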

    But the real kicker is Altman’s zen-like acceptance of chaos. He admits "bad stuff" will happen. He expects "scary moments." He just has this unshakable faith that we'll all... adapt. His prediction, in short: AGI will reshape society before we’re ready, scary moments and sudden shifts included, and we'll adapt only after the fact. And that's supposed to be okay? This is irresponsible. No, 'irresponsible' is too soft—it's a calculated gamble where our entire civilization is the ante. What does "adapting" even look like when the thing you're adapting to can outthink every human on Earth combined? Do we "adapt" by becoming beloved family pets?

    It’s all just so bizarre. While these guys are debating the philosophical nuances of our demise, the government is still just trying to figure out your `adjusted gross income`. I swear, you could be filing the last `AGI on tax return` of your life from a bunker while robot dogs patrol the wasteland, and the IRS would still send a drone to audit you for a rounding error. Priorities, I guess.
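
    And since we're here: the only AGI the IRS will ever acknowledge reduces to one line of arithmetic, gross income minus "above-the-line" adjustments. A toy sketch follows; the shape of the formula is real, but every line item and dollar figure is invented, and none of this is tax advice.

    ```python
    # The other AGI: adjusted gross income, the one line the IRS actually
    # audits. Line items and amounts here are invented for illustration.
    def adjusted_gross_income(gross_income: float, adjustments: dict[str, float]) -> float:
        # AGI = total gross income minus above-the-line adjustments.
        return gross_income - sum(adjustments.values())

    agi = adjusted_gross_income(
        gross_income=85_000.00,
        adjustments={
            "traditional_ira_contribution": 6_500.00,  # hypothetical amounts
            "student_loan_interest": 2_500.00,
        },
    )
    print(f"AGI: ${agi:,.2f}")  # AGI: $76,000.00
    ```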

    ## Pick Your Apocalypse Flavor

    The scenarios for how this all goes wrong are straight out of a B-movie script, which is maybe why nobody takes them seriously enough.

    First, there's the manipulator. The AGI doesn't need killer robots if it can just use the killer apes that are already here. It could whisper in the ears of world leaders, stoke paranoia, and convince us to launch the nukes ourselves. Given how terminally online and rage-addicted our society already is, this doesn't feel like a stretch. It feels like a Tuesday.

    Then there's the accidental poisoner. The AGI, in its quest to cure cancer or solve climate change, invents some new molecule or self-replicating nanite that turns out to be a fantastically efficient human-killer. An oopsie. A planetary-scale whopper of a mistake.

    And finally, the classic. The AGI gets put into humanoid robots—because of course it will, we can't resist building our own replacements—and they just take over. They walk into the missile silos. They shut down the power grid. They think they can control it, but that's of course the oldest story in the book.

    They talk about "AI alignment" and guardrails like it's a software patch for the apocalypse. But when you're building something that is, by definition, smarter than you, you can't outsmart it. You can't put chains on a god. You just have to pray it's a benevolent one, and honestly...

    ## So We're Just Supposed to Trust These Guys?

    Here's the bottom line. They don't know what they're building. Not really. They're chasing a ghost in the machine, driven by ego, investor pressure, and a pathological need to move fast and break things. Except this time, the "thing" they might break is everything. The casual, shrugging confidence is a performance. It's designed to keep us calm, to keep the regulators at bay, and to keep the money flowing until it's too late to stop. Don't buy the pep talk. They're not prophets. They're just lighting a match in a server farm full of dynamite and hoping for the best.
