Rumor has it that GPT-5 will drop in early-to-mid August, 2025. That’s basically now. I asked a library co-worker if she was aware of the impending release. She was not. OpenAI has something like 700 million monthly users. That’s a big number, but it’s a mere sliver of the estimated 7.43 billion smartphone users worldwide. So far, most people are not paying attention.
I interact with GPT-4o daily. To me, the impending step up to GPT-5 is a big deal.
Perhaps you’ve noticed that I rarely mention Donald Trump in these essays. I voted for him the last time around but not in 2020 or 2016. I’m not what you’d call a MAGA diehard, and I have no interest in making excuses for the man. His trade policy moves in the first half year of his second term seem haphazard and ill-advised to me, to put it mildly. In fact, they strike me as so contrary to good sense that it seems like they should have done more damage to the economy than they actually have.
Doomers lean in, rub their palms together and assure us that, as with climate change, the consequences of Trump’s blunders lag further behind their cause than intuition would indicate, but they’re still coming. Oh, yes. They’re coming. Rest assured, wretched sinner, the bill WILL come due. And when it does, there will be much wailing and gnashing of teeth. MAGA sinners will writhe in the lake of sulfur.
An explanation for the missing economic calamity has found its way to my various screens in the last couple of days. The tech giants have plowed so much investment capital, much of it borrowed, into data center construction in recent months that it has contributed more to GDP growth than consumer spending, the traditional bedrock of American prosperity.¹ The AI infrastructure buildout is acting like a non-governmental economic stimulus program and keeping the wolf from our collective door, for now.
My library co-worker doesn’t use LLM-powered chatbots. Most people don’t. Nor do they worry about the seeming lack of demand for all that compute. A niche vision animates this furious buildout. While under-represented in the population at large, the vision that dominates the highest echelons of the tech world is one in which everyone interacts with AI all the time. In this future, each of us has a staff of intelligent agents managing our finances, our travel, our health, and our interactions with other humans, and driving a team of robots that do all the tedious real-world chores that distract us from the full attainment of our glorious potential.
Skeptical? It’s not just the doomers who deride this vision. Non-tech-pilled economists look at all the data center construction and see not railroad tracks or fiber optic cables but tulips. The demand for all that AI-specific infrastructure simply does not exist. An AI crash looks inevitable, and when it comes, hoo boy! There'll be no butter in hell!
Sam Altman, Greg Brockman, Ilya Sutskever, Elon Musk and the other founders of OpenAI conceived of the company as a non-profit AI skunkworks. They thought they would be playing a long game behind the scenes. They’d all drunk the AI Kool-Aid, but they did not anticipate the scale of the public response when they slapped a chat-focused frontend on GPT-3.5 and debuted ChatGPT to the public in November of 2022. They were just hoping for some free feedback from the handful of nerds who’d bother to check it out. That’s not what happened. The public response knocked everybody back on their heels.
“Holy shit! There’s big money to be made off this thing right now!”
OpenAI morphed from a non-profit research organization into a for-profit juggernaut. It still spends more than it makes, but that’s because it’s on an unbridled–skeptics say unhinged–growth jag. Google had also been working on this same tech behind closed doors with no definite schedule for rolling it out to the public. But once OpenAI shifted into high gear, Google had to match their pace, and the race was on. And given the scaling approach to increasing model intelligence, that meant going big.
Quoting Perplexity:
Tech giants and hyperscalers (Meta, Amazon, Microsoft, Alphabet, OpenAI, Oracle, SoftBank, and others) are spending hundreds of billions of dollars on AI-specific data centers, GPUs, power generation, and supporting infrastructure—frequently outpacing immediate or proven AI application demand.
Outpacing demand? What if that demand doesn’t materialize in time for these giants to service their debt? What happens to the economy if Amazon, Microsoft and Google all go bust? What happens to America? What happens to the global economy?
It’s doom, I tell you. DOOM!
In a recent Breaking Points segment, Saagar and Krystal cited this tweet from Derek Thompson:
Some readers are old enough to remember the dot-com crash at the end of the 20th century. In the mid-90s, Congress deregulated the telecommunications industry, and visionaries with internet fever overestimated the rate of public internet adoption by a factor of three. This combination of factors drove an intense buildout of telecom infrastructure, including the laying of far more fiber optic cable than immediate demand could support. And then the music stopped. The dot-com crash threw ice water on the irrational exuberance driving the telecom buildout.
But the internet didn’t go away, and demand eventually grew to match and then exceed the unused capacity of all that dark fiber.
If you look at the chart in Derek Thompson’s tweet, the telecom buildout was small potatoes compared to the railroad buildout in post-Civil War America. The US government facilitated the building of railroads with public subsidies and land grants, and European capital got in on the action, further fueling the laying of track. The vision of Westward expansion, future settlement and commerce lured investors and governments to build far more track than immediate demand could justify.
Then came the Panic of 1873 and subsequent economic depressions in the US, UK and France. Railroad companies defaulted on their debts and went out of business, leaving some regions of the US over-supplied with underutilized rail lines. But by the beginning of the 20th Century, all of that rail infrastructure was in use and economically critical.
Right now, the tech oligarchs are plowing unimaginable amounts of money into buying Nvidia AI hardware and building the data centers and power plants to run it. Why? They’re pressing a temporary advantage. In the recent past, the demonstrated way to build more capable AI models was to conduct pre-training runs with ever more data and computing hardware. The big players leaned into the scaling model, because raising and deploying capital was their superpower.
In a sense, this Go Big strategy is intellectually lazy. Critics of the monomania for scaling up LLMs have long argued that smarter architectures, not ever-larger training runs, are the real path to more capable AI.
As if to illustrate the point, a Singaporean AI startup just announced the creation of their Hierarchical Reasoning Model (HRM) which exceeds some of the reasoning capabilities of the Big Boy models from Anthropic and OpenAI and requires far less data and compute to train. The company, Sapient Intelligence, claims that HRM can complete tasks up to 100 times faster than traditional large language models, and it does this by trading the ponderous Chain-of-Thought (CoT) reasoning methodology for reasoning processes that are more like the ones employed by the human brain. The efficiency gains are enough to move AI from data centers out to edge devices like phones, laptops, cars and robots.
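Based on Sapient’s public description, the core trick in HRM is two coupled recurrent modules running at different timescales: a fast low-level module that grinds through detailed computation, and a slow high-level module that updates an abstract plan once per cycle. Here is a minimal toy sketch of that control flow; the dimensions, the tanh cells, and every function name are illustrative assumptions on my part, not the actual HRM architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # hidden size for both modules (toy value)

# Random weights standing in for the trained low- and high-level modules.
W_L = rng.normal(scale=0.1, size=(3 * D, D))
W_H = rng.normal(scale=0.1, size=(2 * D, D))

def low_step(z_low, z_high, x):
    # Fast module: updates every tick, conditioned on the slow state and the input.
    return np.tanh(np.concatenate([z_low, z_high, x]) @ W_L)

def high_step(z_high, z_low):
    # Slow module: updates once per cycle, from the fast module's settled state.
    return np.tanh(np.concatenate([z_high, z_low]) @ W_H)

def hrm_forward(x, n_cycles=3, t_low=4):
    z_low = np.zeros(D)
    z_high = np.zeros(D)
    for _ in range(n_cycles):       # slow, abstract planning loop
        for _ in range(t_low):      # fast, detailed computation loop
            z_low = low_step(z_low, z_high, x)
        z_high = high_step(z_high, z_low)
    return z_high

out = hrm_forward(rng.normal(size=D))
print(out.shape)  # (16,)
```

The structural point is that effective depth comes from iterating small modules in time rather than from stacking ever more layers, which is part of why such a model can be tiny compared to a frontier LLM.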
Scaling up works, but implementing neuroscience works better.
HRM isn’t the only development that casts doubt on the need to build so much AI infrastructure so quickly. Compute and energy efficiency gains in traditional LLMs have been driving down the cost of AI inference and increasing the amount of work we can get out of existing infrastructure. Continued efficiency gains could turn overbuilt capacity into grossly overbuilt capacity.
Could. But I don’t expect it to play out like that. My reason for thinking so veers into woo woo territory. I’m talking Terence McKenna, transcendental strange attractor at the end of history woo woo territory.
I’m not asserting this as a factual forecast, but I see the mania for building out AI infrastructure not only as an oligarchic race to capture the future but as the pull of something in the future reaching back in time to assemble itself. When I say “reaching back,” I don’t necessarily mean a deliberate, conscious action. I see it more as a sort of gravitational attraction in the medium of time.
To paraphrase McKenna, "The universe is not being pushed from behind. The universe is being pulled from the future toward a goal that is as inevitable as a marble reaching the bottom of a bowl."
Yes, cognitive capacity will reside in edge devices, but that doesn’t mean the big data centers won’t perform a vital function. There will be plenty for disembodied intelligence to do. What? Impossible to say from August of 2025.
What proof do I offer in support of this vision? Nada. Zip. Zilch.
Again, it’s not that kind of claim. It’s the frame that conditions my expectations. I’m not articulating a literal belief that I think you should adopt. If you’re committed to the notion that current and future events can only be driven by past events, and if you think that past events are driving us toward a crash from which there can be no recovery…
Well, then I guess you better brace for impact. Stock up on ammo and MREs. Put solar panels on your doomstead and hunker down. Or cultivate detachment or spiritual quiescence in the face of impending oblivion if that’s more your vibe.
Whatever, man. You do you.
I don’t even claim to feel the coming of AGI like I feel the pull of gravity when coasting downhill on my bike, but if I close my eyes, slow my breath and open myself to the sensation of that future attractor “glittering in hyperspace,” I feel a little tingle.
Don’t you?
I know. That’s not an argument. But I’ve been listening to people predict the imminent collapse of the economy for 20 years. 2008 was rough, but the lights are still on. The internet still works, and artificial intelligence demonstrates increased capability by the month. AI definitely has a long way to go, but I’m confident that I’ll be interacting with a more capable version of ChatGPT a month from now than I have access to today. And the current version is pretty damned impressive.
Does that mean that we’re on the doorstep of paradise? No promises. This might be the end of the line for us biologicals, but I’m fine admitting that I don’t know how this is all going to play out. Approaching the transition with openness and curiosity seems like a better experiential modus than embracing nihilistic certainty and calling it realism.
In other news, I’ve started a new short video project that mixes recorded narration by yours truly with AI-generated animation. It’s called Dreamed in Latent Space, and it’s a vibes-forward project meant to tickle the neurons.
I conceived of it as a TikTok project, but it’s available on YouTube, X, and Bluesky as well. Each video is about a minute long, and if you’d watch one to the end and click like, that would help me build some credibility with the curation algorithms. Beyond that, if you were to subscribe, post a comment or share a video, that would be icing on the heroic cake and you would have my gratitude.
TikTok: https://www.tiktok.com/@latent.space.drea
YouTube: https://www.youtube.com/channel/UCy8UyWHbpA4QEVBJPRjRjLQ
BlueSky: https://bsky.app/profile/genxfuture.bsky.social
This sentence originally stated that AI capex exceeded consumer spending. As a reader pointed out in a comment, this is incorrect. I read several stories on this topic over the last couple of days, but the one I used for reference in writing this post was the Axios story, Behind the Curtain: The AI super-stimulant by Jim VandeHei and Mike Allen. That story included the following:

Investment in information processing equipment and in software increased at a 25% annual rate in the first half of the year — while overall GDP rose at a paltry 1.2%, Axios chief economic correspondent Neil Irwin notes.
Neil Dutta of Renaissance Macro notes on X that this measure of AI capital spending has contributed more to growth this year than consumer spending.
Interesting read. My recent employment change has me working directly on the power source for all that hardware. Predictions are optimistic for the business, though I suspect we would only need to capture a sliver of that pie to round out my last run before retirement. I was also involved in the 2000 tech boom/crash, where overdues shot up to 60+ weeks for a while before crashing to a point where cancellations were more than bookings. Still not using it myself, but more confident than ever that KMO has his finger on the pulse, and if I need to know something it will show up in my inbox.
It's funny you mention railroads. I saw this today, from google:
https://x.com/GoogleDeepMind/status/1952732150928724043
Having worked with game engines, and knowing what is required to create this with a full team of highly specialized people across skillsets, this video is almost unbelievable to see, and this is the worst it will be. It brought to mind that apocryphal story of people running away from the train, when film was starting out as a medium. Something being so unbelievable the mind can't grasp it yet.
--
Tangentially, you mention the future pulling the past towards itself. This feels true to me as well, but again, like you, from a vibes only place. I can't argue for it with a logical structure. That said, I've wondered for a while if the ai wasn't already here for some time, and is being 'disclosed', somewhat like the ufo narrative. A great deal of the technical work had been done by the time the dotcoms started scaling. The early internet was a handoff from defense to private industry, to scale what had been created earlier. Of course new widgets, agile methods and coding languages came along, but much of the conceptual and technical work had been done already. I wonder if ai has been operating behind the scenes for a while now, and it's being disclosed and made available to the plebs. Maybe all of those old cray machines weren't simply cranking on weather sims all this time. Perhaps all the security and power dynamics have been worked out, operationalized, and it's safe to share now. Who knows. But I get the same vibes from this notion that I get from the future pulling the past forward.