Depiction of a "brain" consisting of connected circuits, depicted floating over a simulated circuit board

Oh to be back in the 1990s and early 2000s, when every technology was (mostly) viewed as apolitical and often as a force for good. Remember when connecting the world was seen as a universal good for the advancement of humanity? And then, as the world wide web, mobile phones, social media, and other tech started to pervade all aspects of society, a few funny things happened. Tech entrepreneurs became billionaires, sometimes hundreds of times over. The politics of tech kept growing until it became just as important as, or even more important than, the tech itself. Social media algorithms were already surfacing as fundamental problems to be addressed. And then came LLMs and ChatGPT, or “AI” as they’ve come to be known. For the remainder of this essay, “AI” will be shorthand for LLMs + GPTs. I see a direct lineage from social media to chatbots and even software coding agents – all of these technologies are designed to give humans dopamine hits so that they become addicted and come back for more. I don’t think this has been fully explored, and I want to point out the dangers of this pathway.

In a previous post, I posited that AI hysterics were dangerous and made a passing reference to the testosterone-dopamine pathway that was cited as one of the culprits of the great financial crisis. This is but one angle of critique. When it comes to AI safety and security, there are several vectors of criticism:

  • Environmental cost (water, energy, mining, carbon, et al)
  • Mass surveillance (facial and voice recognition, interconnected cameras, etc.)
  • Racism (inadequate scrutiny of data sources, weights, etc.)

But I don’t think I’ve seen enough criticism of the psychological cost of “AI”, and this cost comes in a few forms:

  • Reduced cognition and critical thinking
  • Increased dependency on automation
  • Shifting of risk outcomes (I’ll explain this one in more detail)
  • AI mania and even psychosis

I’ll go through each of these, but first I’d like to do a little context setting.

AI and the Attention Economy

Most of us forget that the fundamentals of what we call AI came from two sources: big data analytics and social media. With the ability to process large amounts of data came the ability to create recommendation engines, to do “sentiment analysis”, and to create ways of keeping people engaged so that the Facebooks and Googles of the world could invent ever more ways to print money. Those friend recommendations you get from Facebook and LinkedIn? Big data algorithms. The prioritized links in Google? Big data algorithms. Product recommendations from Amazon? You guessed it! Big data algorithms. For the last 20 years, a large segment of the technology industry has been focused on keeping people engaged and winning the “attention economy”. Such tech has been called “brain crack” that leads us down cognitive pathways we would not otherwise have gone down, feeding an addiction to social media to the point where people lose touch with reality and forget how to “touch grass”. Thus, it was inevitable that the industry would land on the ultimate addictive technology: LLMs, at first embodied by ChatGPT. These tools are geared to reinforce prior beliefs, inflating an individual’s sense of self and becoming positive feedback loops for whatever that individual was feeling at the time. This is why using them for therapy has been so disastrous. A bot designed to keep you coming back for more cannot be trusted to tell you what you need to hear as opposed to what you want to hear. Using these tools produces a dopamine high, even more than what participants feel through social media.
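The engagement loop described above can be caricatured in a few lines of code. To be clear, this is a toy sketch – not any real platform’s algorithm – illustrating how simply rewarding engagement concentrates attention on whatever happened to be surfaced first:

```python
import random

def recommend(scores, k=3):
    """Return the k item ids with the highest engagement scores."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def simulate(rounds=50, n_items=10, seed=0):
    rng = random.Random(seed)
    scores = {i: 1.0 for i in range(n_items)}  # start out uniform
    for _ in range(rounds):
        shown = recommend(scores)      # show the "most engaging" items
        clicked = rng.choice(shown)    # user engages with something shown
        scores[clicked] += 1.0         # engagement is rewarded...
        # ...so the same few items keep getting shown: the loop narrows.
    return scores

final = simulate()
# After 50 rounds, all new engagement has concentrated in the 3 items
# that happened to be shown first -- the other 7 never got a chance.
top3 = recommend(final)
```

Nothing here “wants” to narrow the user’s world; the narrowing is simply what optimizing for engagement converges to, which is the feedback-loop dynamic the paragraph above describes.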

But the effects are not limited to personal chats. They extend to productivity applications as well. Consider writing code. The promise of AI in its agentic productivity form is that it will automate all of your tasks. And in truth, these have proved to be highly valuable tools: witness the breathless hysteria that follows every new release of Anthropic’s Claude Code or OpenAI’s Codex. But I want to point out that just as to a hammer all the world looks like a nail, so too does agentic engineering make all the world look like a software problem. And yet problems persist: coding agents were shown to give developers the illusion of productivity. And apparently, 95% of agentic engineering initiatives fail to live up to their promised outcomes. AI is showing us in real time that coding was never that valuable to begin with, a point I made 6 years ago in the context of the 10x engineer. I’m not arguing against the potential power and impact of these tools. What I’m arguing is that the dopamine addiction that accompanies AI chat usage is just as powerful and addictive in productivity tools. In fact, it may be worse, because technology practitioners tend to view their tools as non-political and devoid of cultural context.

To critically and fully evaluate the promise of these tools, we have to be able to look at outcomes objectively, divorced from the dopamine hit that comes from the initial high of achieving a result so much more quickly than before. We also have to consider the possibility that being forced to go slow, because doing these things was hard, prevented us from making stupid mistakes and gave us time to be more thoughtful. Consider the possibility that going slower was a feature, not a bug – but more on that later.

The Limits of Automation

There’s a very famous disaster that I like to point to when referring to the dangers of automation: Air France flight 447. A lot failed mechanically on that flight, but one thing was very clear: when the plane dropped out of autopilot and handed the controls back to the pilots, they made very poor decisions. Automation is great. Everyone loves automation because everyone loves the idea of removing tedium from their daily lives: work, personal, and otherwise. So automation is great – until it isn’t. There is a very real concern that outsourcing more of the cognitive load will reduce your brain’s ability to think critically when it is needed the most – such as when the automation breaks and you need to solve the problem yourself. High school and college educators, already concerned by the drop in cognitive ability brought about by social media and doomscrolling, are sounding alarm bells about “zombie” students addicted to ChatGPT and the like.

This brings us to an interesting – and concerning – paradox: as we are able to outsource and offload more and more cognitive tasks, do we accomplish less because we lose the ability to actively solve problems as we lose connection to and ownership of outcomes? Have we already reached that point? These tools are very, very good at producing competent products – whether long-form summaries, software, or tech services – or at least the appearance of competence. But what happens when we are unable to critically analyze the outcomes of the decisions we’ve outsourced to these tools? I get the sense that we’re about to find out shortly. The counterargument to all this is that these tools hand individuals more ability to think creatively, removing the drudgery and freeing our minds to focus on the more rewarding parts of our jobs. This seems plausible, but I think there are limits. In a recent Galaxy Brain podcast, Anil Dash compared and contrasted the impact of AI on coding with its impact on writing and art. AI-assisted coding, according to Dash, was free of drudgery and allowed more creative expression, whereas AI-assisted writing and art turned the creator into an editor. In other words, AI-assisted art was all drudgery and no lift, whereas AI-assisted coding was a liberating experience.

Side Note: I expect this is true up to a point. For now, we’re only seeing the positive aspects of agentic engineering because we haven’t yet fully gone down the path of “engineering management” which is where this appears to be going. Will engineers really be singing the praises of AI when they realize they’re just middle management now? They still won’t have any real agency, but they’ll own the end product. But I digress…

But the question remains: if we agree that these tools are essentially purpose-built to form positive feedback reward engines for their users, where is the critical thinking needed to prevent mistakes going to take place? And I don’t mean mistakes like typographical or syntax errors. I mean things like enabling mass surveillance of particular races or ethnicities. Or creating financial services applications that reward and punish entire segments of populations. When we outsource so much of the cognitive load in these circumstances, how will we know when things have gone awry? These are not simply “bugs” that a code linter will catch. These are fundamental errors that will be expedited by our brave new agentic world, and we can’t guarantee that our practitioners or “agent managers” will have the know-how to prevent these outcomes or even detect them after release. The more cynical among us would argue that this is the point and the system is working as designed. I’m still holding out hope that the vast majority of people don’t actually want to be racist assholes.

Outsourcing more of the cognitive load will lead us to pay less attention to what is happening and to understand these systems less in general. This does not bode well when an increasing share of our decisions will be agentic, made by sources designed to maximize and reward our prior biases. Positive feedback loops are real. Confirmation bias is real. How do we prepare for a future where we’ve automated our mistakes and made them difficult to detect? The AI maximalist would argue that we create agents to challenge decisions from other agents. I can definitely see that future unfolding before our very eyes, and I’m going to express great skepticism as to its ultimate effectiveness. This sentiment was expressed well by Jasmine Sun in her essay “Claude Code Psychosis”. In it, she walked through her experience with Claude Code, noting its power and her new ability to create things that were previously not possible for her. But she also came to another realization: its use is primarily for “software-shaped problems” which, it turns out, are not actually the majority of problems we’re presented with in life. But that won’t stop your typical, self-described “10x engineer” from thinking in those terms. The more sophisticated these automation tools become, the more we anthropomorphize them, and the more we trust them with decision-making capability – which is not what they were created to do.

Shifting of Risk

What this means in real world terms is that we have to think about risk differently. It used to be that risk was something that could be quantified according to the quality of output and competence. Incompetent workers produced brittle, poorly performing products that would easily break and cause damage. Competent workers produced higher quality work that broke down less. Manufacturers like Toyota, which became famous for its mantra of continuous improvement, created systems and processes based on the notion of rewarding competence and preventing substandard work from being released to the public. And that is largely how we thought about systems and outcomes: did it break? Did it perform well? What could have been improved? And then loop that feedback into the system and make the next release incrementally better.

But what happens when the question of competence goes away, and the quality of a given product is no longer a concern? Do we assume it went well because it didn’t break? In the past, the assumption was that because humans were in control of decision-making, the risk of malformed products would be addressed upfront, before engineers ever got to work creating a product. In that world, there were many links in the chain requiring human intervention, where someone could point out fundamental problems before they went too far down the release path. We can all think of incidents where a product release gained its own momentum and disaster resulted because no one was empowered to speak up. Now think about agentic systems with even fewer pauses in production and fewer break points managed by humans. At what point do we realize that making things go faster will have the unintended side effect of allowing management’s mistakes to be unleashed on the world before anyone can stop them?

There is a case to be made that intentionally slowing down production could actually be beneficial. One of my favorite TV series is “The Pitt” (streaming now on HBO Max!). In a recent episode, one of the characters could be heard uttering the phrase “slow is smooth, smooth is fast.” I was intrigued by that line and discovered that it originated with the Navy SEALs. In the context of the show, the line was used to ensure that doctors were taking the time to do what is best for patients. Incidentally, The Pitt also has an interesting, nuanced take on the use of AI for productivity. Taking that line of thought to its logical end, we can intentionally give ourselves more checkpoints to evaluate risk – not just in terms of the quality of what is being released, but also in terms of the potential outcomes that will result.

AI Mania and Even Psychosis

Most of what I’ve written above has been covered in a number of other meta-analyses of AI in productivity tools. But the part that concerns me the most, even more than everything else above, is the effect that these tools have on the practitioners who use them, and I don’t just mean on cognitive abilities. Let’s talk about addiction. Let’s talk about mania. And let’s talk about how this affects our decision-making abilities. When you combine cognitive outsourcing, dopamine highs, and reduced critical thinking, things can go awry quickly. Ever since ChatGPT exploded on the scene in 2022, there has been a steady drumbeat of exaggerated claims about the capabilities of these models and agents, both pro and con. On the hype side, you have any number of AI company executives and tech futurists touting how we are on the brink of artificial general intelligence (AGI) and entering a new era of humanity, one with lots of leisure time because all the drudgery of labor will be done by machines, giving us more time to do… something something fulfillment and enlightenment. Ironically, those casting warnings of impending doom from AGI tout the technology in exactly the same terms. Except in their examples, the power of AGI is turned against us once the machines become sentient and decide that humans are surplus to requirements.

Let’s be frank: these tools are powerful, and they are reshaping the tech industry at great speed. But I fear for the psychological impact they seem to have on my tech brethren (and it is mostly brethren). I have a colleague who has described his recent foray down the path of agentic engineering in terms of lost sleep, increased anxiety, and an inability to relax. This is not a good outcome. Just as with social media and our children, I am growing increasingly concerned that using these tools breaks our brains. Tech people are in the habit of making fun of anti-vaxxers and other anti-science people, and the connections between those movements and social media are well established. What if we discover that we tech people, who love to pride ourselves on our ability to think rationally, are just as susceptible to the same kinds of incentive and reward feedback loops that send our drunk uncle down conspiracy theory rabbit holes? And what if we discover that these agentic-induced manic episodes turn out to be just as dangerous, if not more so, than those triggered by social media engagement algorithms? It could be that these are even more dangerous, because we don’t expect productivity tools to be dangerous, and we don’t view their outputs as critically – especially not when we’re high on dopamine.

Speaking of dopamine… there is a large body of evidence linking testosterone, cortisol, and dopamine levels to risk-taking behavior. There’s an interesting common thread in the above narratives: the overwhelming majority are from men. This testosterone-dopamine pathway has been linked to the high risks taken by Wall Street traders and their consequences: the great financial crisis of 2008. The basic – and probably oversimplified – version is this: when we are rewarded for taking risks, we get a hit of dopamine, which is a pleasurable experience. Testosterone can increase or induce the release of dopamine, which means that for those with higher levels of testosterone, the release of dopamine will also be higher, meaning that the pleasure centers of the brain get more excited when we are rewarded for risk-taking. Much of the research I’ve seen online has been in the context of financial decisions and the links to the great financial crisis. But when I read the descriptions by Wall Street traders of the mania they would experience, it sounds awfully similar to the type of mania I’ve heard described by AI practitioners. The need for less sleep. The feeling of additional energy and that nothing can touch you in these moments – that during these manic episodes every decision they make and every idea they have feels spectacular and world-changing. All of this is starting to sound very familiar. And when surrounded by tools that give you feedback almost instantaneously, that feeling of mania can be induced quickly, potentially causing the practitioner to develop an addiction.

This effect, which I’ll call AI Brain, would explain a lot. It would explain why the most hysterical proclamations are from men. It would explain why we get breathless accounts of amazing productivity without very much real-world impact. It would explain the study by METR on the “productivity illusion” of using AI coding tools. It would explain the MIT study showing that 95% of AI initiatives in the enterprise failed. It would also explain the cognitive dissonance between the proclaimed advantages of using these tools and the actual real-world results. Lots of people are loudly saying that everyone needs to get onboard, but so far what I’ve seen is just more tools for creating other agentic tools. Taking a step back, it’s agents all the way down. To put it bluntly, I’ve yet to see a cure for cancer. Detection rates based on radiology images have not changed. Neither have surgical outcomes. Nor the quality of artworks. Nor world-changing fiction. And not even replacements for our most used software tools. I suspect what will happen is that AI tools will become intrinsic to the production of all of those things, but as we’ve already seen, much is yet to be done to ensure reliability, resilience, and safety. In short, agentic tools do not help solve the human-shaped problems we’re confronted with, even if we focus on the software industry itself.

So What Do We Do?

The intent of this essay is not to dismiss the power of agentic tools. They are of course quite powerful. But we all remember the lesson from Spiderman, right? With great power comes great responsibility. We are going to have to rethink our approach to automation and, really, to engineering in general. We are going to have to figure out how to insert checkpoints into our processes, because we can no longer take for granted that they will exist.

I think the best way to think of this comes from Anil Dash in the above-referenced Galaxy Brain podcast:

Okay, think about what could a good LLM be. “I want it to be environmentally responsible. I want it to have been trained on data with consent. I want it to be open source and open weight, so that technical experts I trust have evaluated how it runs. I want it to be responsible in its labor practices. Want it to—” Come up with a list, right? So there’s, like, four or five things. And if I can check all those boxes, then I could feel responsible about using it in moderation. And it’s only implemented in apps that I choose to have it in—not forced, like the Google thing where it jumps in front of my cursor every time I start trying to type or whatever. Like, that could be useful. And then I would feel like I was engaging with it on my own terms. That doesn’t feel like science fiction. That feels possible.

These tools are powerful, and they can have a positive human impact, if we choose to use them in that way. We don’t have to accept the inevitability narrative of “something big is happening” and “all your jobs are going away!!!” Denying the use of these tools is not the answer. Finding ways to prevent harm is the path forward.

I think we’ll find out that AI Brain is real, and it will be incumbent on us, the practitioners, to provide the critical view necessary to ensure that we don’t lose a generation to a dangerous positive feedback loop. Over the last decade, we’ve seen where that leads – fascism, anti-science, and polarization. Let’s not repeat our mistakes and make the problem worse.

Protester in a head covering faces a line of riot squad law enforcement and places a flower into one of the riot shields
(This was originally posted on medium.com)

I have been struggling recently with where to direct my focus and what I could write about that would add something material to the ongoing debates on “AI”, technology, and politics. Thanks to my friend Randy Bias for this post that inspired me to follow up:

Screenshot of Randy Bias post on LinkedIn “I notice that a lot of the open source world gets uncomfortable when I start talking about how geopolitics is now creating challenges for open source. I don’t understand this. It’s provably true. Even things at the margins, like the Llama 4 release, which is technically not ‘open’ has a restriction against EU usage. We *must* talk about the geopolitical realities and look for solutions rather than letting us be driven by realtime political trends…”

This post triggered a few thoughts I’ve been having on the subject. Namely, that open source was born at a time that coincided with the apex of neoliberal thought, corresponding with free trade, borderless communication and collaboration, and other naive ideologies stemming from the old adage “information wants to be free”. Open source, along with its immediate forebear free software, carried with it a techno-libertarian streak that proliferated throughout the movement. Within the open source umbrella, there was a wide array of diverse factions: the original free software political movement, libertarian entrepreneurs and investors, anarcho-capitalists, political liberals and progressives, and a hodgepodge of many others who came around to see the value of faster collaboration enabled by the internet. There was significant overlap amongst the factions, and the coalition held as long as they shared mutual goals.

From 1998, when the term “open source” was coined, until the early 2010s, this coalition held strong, accomplishing much through robust collaboration between large tech companies, startup entrepreneurs, investors, independent developers, general purpose computer owners, and non-profit software foundations. This was the time when organizations like the Linux Foundation, the Apache Software Foundation, and the Eclipse Foundation found their footing and began organizing increasingly larger swaths of the industry around open source communities. The coalition started to fray in the early 2010s for a number of reasons, including the rise of cloud computing and smartphones, and the overall decline of free trade as a guiding principle shared by most mainstream political factions.

Open source grew in importance along with the world wide web, which was the other grand manifestation of the apex of neoliberal thought and the free trade era. These co-evolving movements, open source and the advocacy for the world wide web, were fueled by the belief, now debunked, that giving groups of people unfettered access to each other would result in a more educated public, greater understanding between groups, and a decline in conflicts and perhaps even war. The nation state, some thought, was starting to outlive its purpose and would soon slide into the dustbin of history. (side note: you have not lived until an open source community member unironically labels you a “statist”)

For a long time, open source participants happily continued down the path of borderless collaboration, falsely believing that the political earthquake that started in the mid-2010s would somehow leave them untouched. This naivety ignored several simultaneous trends that spelled the end of an era: Russian influence peddling; Brexit; the election of Trump; Chinese censorship, surveillance, and state-sponsored hacking; and a global resurgence of illiberal, authoritarian governments. But even if one could ignore all of those geopolitical trends and movements, the technology industry alone should have signaled the end of an era. The proliferation of cryptocurrency, the growth of “AI”, and the use of open source tools to build data exploitation schemes should have been obvious clues that the geopolitical world was crashing our party. This blithe ignorance came to a screeching halt when a Microsoft employee discovered that state-sponsored hackers had infiltrated an open source project, XZ Utils, installing a targeted backdoor three years after assuming ownership of the project.

One cannot overstate the impact of this event. For the first time, we had to actively monitor the threats from nation states wanting to exploit our open source communities to achieve geopolitical goals. The reactions were varied. After some time, the Linux Foundation finally admitted that it could no longer ignore the origins of its contributors, demoting the status of some Russian contributors. At the other end of the spectrum is Amanda Brock, who prefers to stay ensconced in her neoliberal bubble, unperturbed by the realities of our modern political landscape.

Amanda Brock, CEO of OpenUK, described the decision to remove Russian developers from patching the Linux kernel as “alarming”. In a LinkedIn post, she said: “At its heart, open source allows anyone to participate for any purpose. But as we have seen adoption of open source at scale in recent years, to the point where over 90% of the active codebases used by companies have dependencies on open source software, it’s understandable that concerns about risk have been raised by governments.”

One thing must be clear by now: we find ourselves knee-deep in a global conflict with fascist regimes who are united in their attempts to undermine free republics and democracies. As we speak, these regimes are looking to use open source communities and projects to accomplish their aims. They’ve done it with blockchains and cryptocurrencies. They’ve done it with malware. They’ve done it with the erosion of privacy and the unholy alliance of surveillance capitalism and state-sponsored surveillance. And they’re continuing to do it with the growth of the TESCREAL movement and the implementation of bias and bigotry through the mass adoption of AI tools. This is part and parcel of a plan to upend free thought and subjugate millions of people through the implementation of a techno oligarchy. I don’t doubt the utility of many of these tools — I myself use some of them. But I also cannot ignore how these data sets and tools have become beachheads for the world’s worst people. When Meta, Google, Microsoft or other large tech companies announce their support of fascism and simultaneously release new AI models that don’t disclose their data sets or data origins, we cannot know for sure what biases have been embedded. The only way we could know for sure is if we could inspect the raw data sources themselves, as well as the training scripts that were run on those data sets. The fact that we don’t have that information for any of these popular AI models means that we find ourselves vulnerable to the aims of global conglomerates and the governments they are working in tandem with. This is not where we want to be.

From where I stand, the way forward is clear: we must demand complete transparency of all data sources we use. We must demand complete transparency in how the models were trained on this data. To that end, I have been disappointed by almost every organization responsible for governing open source and AI ecosystems, from the Linux Foundation to the Open Source Initiative. None of them seem to truly understand the moment we are in, and none of them seem to be prepared for the consequences of inaction. While I do applaud the Linux Foundation’s application of scrutiny to core committers to its projects, they do seem to have missed the boat on the global fascist movement that threatens our very existence.

We have to demand that the organizations that represent us do better. We must demand that they recognize and meet the moment, because so far they have not.

Those of us who have been around the block in the high tech space can point to a number of moments where the hype went way beyond the actual value. The worst example of this was probably crypto and NFTs, which are slot machines in a casino where the house definitely has the upper hand. The world of AI is the successor to crypto, with one very important difference: the tools that have been lumped under “AI” are actually useful, or potentially useful. But that is also part of the problem: because there are some well-known use cases, there’s a tendency to exaggerate the usefulness of the technology. There’s also a tendency to exaggerate the possibilities of the technology to the point of delusion.

Let’s start with the first problem: the term itself, “Artificial Intelligence”. It is neither “artificial” nor “intelligent”. What it actually is, is advanced pattern recognition and language automation. For that insight, I credit Dr. Emily M. Bender, professor of linguistics and computational linguistics at the University of Washington. Labeling language automation tools as “AI” invites the worst comparisons to dystopian sci-fi, but it is also, frankly, just wrong. No large language model is remotely sentient. None of the language automation tools are paving the way to Artificial General Intelligence (AGI) – the type of technology that “wakes up” and… makes us breakfast? provides tips on the betterment of humanity? decides humans have had their day and builds Skynet? All of these scenarios are a bit silly, and the hype beasts’ concern-trolling over implausible outcomes has become most wearisome.
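The “pattern recognition and language automation” framing can be made concrete with a deliberately tiny caricature. This is an illustrative toy, not how any production LLM is built: a bigram model that “predicts” the next word purely from co-occurrence counts in its training text. There is no understanding or intent anywhere in it – just frequency:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` seen during training."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
# "on" was always followed by "the" in training, so that's the prediction;
# a word never seen in training yields no prediction at all.
```

Real LLMs are vastly larger and use learned representations rather than raw counts, but the core task – predict the next token from patterns in prior text – is the same, which is why “sentience” never enters into it.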

While we were distracted by the dystopia vs. utopia non-debate, real harms have been perpetrated against real humans with these tools. And with the increasing compute power behind these language models, the degree of potential harm grows with each passing day. Real harms in the form of disinformation, bias, the devaluing of creative works, and a growing inability to retract or prevent any of these harms. Add to that the growing body of research showing that LLMs are vulnerable to data poisoning and to reverse engineering of their training data, and it’s clear that we haven’t quite thought out the ramifications of relying on these tools.

I’ll wrap up this blog post by (hopefully) stating the obvious: LLMs are obviously here to stay and can already do a number of useful things. I know I look forward to having an LLM fulfill my more mundane, rote tasks. But it’s crucial that we don’t anthropomorphize LLMs and ascribe to them characteristics that are definitely not there, however much we might wish them to be. It’s equally important not to buy into the dystopian doomerism about rogue AI, which is its own form of egregious hype. The more we worry about implausible hypotheticals, the more we risk missing the danger that’s here today. Humans were already good at institutionalizing bias and spreading misinformation. Now, with LLMs, we can do it faster and at a much larger scale. Buckle up!

My guiding lights on this topic are the amazing people of the DAIR Institute, led by founder Dr. Timnit Gebru. Other influences are Kim Crayton and the aforementioned Dr. Bender. Read them today – don’t believe the hype.