It’s happened again – some AI hype bro has written the latest missive that has everyone agog. Matt Shumer wrote a lot of breathless words to basically say “AI is coming for all yer jobs! Fear!!!!!!”, a message we’ve gotten several variations of every year since ChatGPT hit the tech landscape in 2022. I won’t give him the dignity of a link, because that’s what he wants, but if you search for his name, you’ll find his original and the many responses that have made their way through myriad media outlets, both tech-centered and non-tech. When I first read it, I was reminded of those chain emails forwarded by your least favorite aunt or uncle that were usually fronts for some MLM scam intent on fleecing scared people of their hard-earned money. Lo and behold, it turns out that Matt Shumer has himself been credibly accused of fraud in the recent past, so he has no credibility to warrant the level of attention paid to him.
The first thing to understand about AI Hype and AI Doom is that they are opposite sides of the same coin: vast overstatements and exaggerated extrapolations of our present reality. The only functional difference is that the hypesters want us to buy into the concept of an AI utopia, while the doomers want us to fear the dystopia of a future Skynet that decides humans are a disease to be removed from the world.
The second thing to understand is that, as far as the technology goes, we are in a moment of transformation, similar in scope to the emergence of the internet and smartphones. Let’s not forget that both of those developments removed a fair number of jobs from the world. One example brought up by Marco Rogers was paper maps: there’s not much of a market for people who create and sell paper maps anymore. Agentic automation (the term “AI” is now functionally useless) will have similar repercussions, and I have no doubt that a number of jobs that exist today will not exist in the near future. As an aside, if I were someone whose job title contains the words “software tester”, I would be busy reskilling myself right now.
The third thing to understand is that every great con artist knows how to latch on to and exploit kernels of truth. The truth is that we really are in a moment of tech transformation. The truth is that some number of people will lose their jobs. But to extrapolate from there and claim that some 50% of jobs will be gone by 2030 is, to put it kindly, baseless horseshit.
And the fourth thing to know is that each iteration of this type of AI hype is rife with unverifiable claims and baseless conjecture. We see the same patterns from Sam Altman, Jensen Huang, Dario Amodei, and every other person with a vested interest in the proliferation of this point of view. You will note that all of these are men, which I’ll delve into further down the page (hint: testosterone plays a role). Also of note: every AI company is losing massive amounts of money and taking on massive debt, with the exception of the hardware companies gladly shipping high-priced, premium products to the rest. Viewed through this lens, the AI hype missives smack of desperation, hoping to keep the hype alive for an industry drowning in debt.
For a more sober account of what is happening industry-wide, I highly recommend reading security researcher Peter Girnus. And for a funny takedown of the “AI is sentient” claptrap, definitely read his account of how he trolled an AI agent social network.
The Limits of Human Psychology
Shumer started his essay (I use “started” and “essay” generously, as I strongly suspect most of it was prompted out of Claude) with an analogy to February 2020, when COVID was something most of us had heard of but didn’t quite grasp how quickly it would upend everyone’s world. I would like to choose a different analogy from recent history: the period from 1998 to 2008. In the late 1990s, the deregulation of finance, specifically the erosion of limits on investment banking, enabled the acceleration of complicated financial products, which led many investors to believe they had rewritten the rules of the new economy. Each successive blockbuster deal that made investors billions of dollars added another layer to the assumption that they had succeeded; that they were “the smartest guys in the room” who were going to remake society in their image. Until it all started to unwind in 2007. As the hype passed its peak and loans were called, the effect was akin to a rubber band snapping – sudden and irreversible. Many studies were conducted on this period, most of which focused on business decisions and how companies allowed themselves to uncritically follow the hype and take on unsustainable risk. A few focused on the individuals who powered the hype, and they uncovered some interesting facts.
One of the questions posed by these studies concerned the role of testosterone. It turns out that making successful trades that bring in lots of money gives us massive hits of dopamine, a response heavily influenced by testosterone; there was even a direct correlation between serum testosterone levels and the degree of risk-taking. Riding that dopamine “high,” the reward centers in these traders’ brains lit up, preventing them from thinking critically. They became completely convinced of their invincibility and their own success, right up until the moment it all came crashing down. These people – the traders at the center of the activity – were the worst narrators of the moment, because they were completely invested in the pursuit of more chemical highs. I think something similar is happening with the AI hype cycles. The more invested you are in AI, the bigger the dopamine hit when you (or your agents) successfully write a bit of code that does something useful. This creates a positive feedback loop in which the individual chases ever greater brain-chemical highs, just like the traders mentioned above. You can see this play out as every pronouncement by Altman, Amodei, and others becomes progressively more exaggerated and even divorced from reality. In fact, we are seeing the first studies showing a disparity between how productive LLM coders believe they are and actual productivity measurements. A choice pull quote from that article:
A rigorous study on AI coding productivity came from METR in mid-2025. Researchers ran a randomized controlled trial with experienced developers across 246 real-world coding tasks. The finding was stunning: developers using AI tools were 19% slower than the control group. The critical detail is that those same developers believed they were faster.
There are a few aspects of human psychology that make us particularly vulnerable to this type of feedback loop:
- We love to extrapolate from patterns – humans see patterns in everything. And when one isn’t there, we make it up and “connect the dots” regardless of whether a connection exists. This explains why your drunk uncle at Thanksgiving takes great pains to tell you how “it’s all connected, man!”
- We are uncomfortable with not knowing – it’s a whole lot easier to come up with some cockamamie story with named actors than to say “I don’t know,” or to explain a calamity as the outcome of something as banal as incompetence. See drunk uncle, above.
- We love to anthropomorphize everything – we assign human characteristics to almost everything in our lives, from pets to cars and houses… hell, we even anthropomorphized “pet rocks”. Appropriate pop culture reference: “She’s giving it all she can, cap’n!!!”
Combine all of these, and it’s easy to see why we are susceptible to the AI hype train. Mix in the fear-mongering and desperation, and you get a perfect storm for exploiting the moment and separating people and well-endowed institutions from their wealth.
Apologia and Desperation
There’s another historical analog we can use to properly frame this moment: the apology, or apologia. An “apology” was a text written in defense of a person or an idea against an accusation. In ancient Greece this happened strictly in a legal context, but the concept has since been extended more generally. Some ancient writings have been categorized as apologia even though no accusation exists or survived history; one example of an ancient text post hoc described as an apology is the biblical account of the rise of King David. When read as defenses of a person or idea, these texts become notable for what they don’t say, or for how they soften the impact of negative events. Famous apologies recount how the perpetrators engaged in unflattering behaviors, but there are always explanations for why the accused had no choice, or committed his acts for the greater good, and why the outcome was inevitable. Missing from an apology is any direct expression of remorse or regret – it’s always an explanation. From reading an apology with no surviving reference to an accusation, you can extrapolate what the accusation was by identifying what is being explained away or what is missing. This becomes a useful way to read ancient literature: you can generally surmise the motivation and intent of a text by critically assembling in your mind an outline of what the original accusations must have looked like.
In this framing, we start to see AI hype for what it is: an apology against the accusations. And what are the accusations? Even though Shumer (and Altman, et al.) never directly reference them, we can infer them from what’s absent or glossed over in the writing. Let’s summarize the accusation based on a critical reading: these large, high-valuation companies are taking on unsustainable debt and cannot justify the amount of money put into them. The results and outcomes, while positive, come nowhere near justifying the billions of dollars of debt these companies have taken on. We have passed the point where these companies can produce a return on investment that will satisfy their investors. This tacit acknowledgement of the accusations leads to desperate attempts to justify their existence by overinflating their influence and value to the world. They are compelled to keep spinning the hype, because it’s all they can bank on.
LLMs cannot, in fact, “decide” to write better versions of themselves. Agentic tools will have a great impact and displace some jobs, most of them in tech, but they will not replace lawyers by 2030, or whatever incredible claims have been made. We will still need radiologists. The idea that we’re on the path toward machine sentience is a tale that has made the rounds in Silicon Valley for decades. And you can’t delve very deeply into the “singularity” movement without running into believers in eugenics and the progenitors of TESCREAL.
If we look at the weight of all the evidence and make full use of our critical thinking skills, we can only arrive at one incontrovertible conclusion: these people are fucking nuts, and we cannot, must not, trust them.