[Image: a protester in a head covering faces a line of riot police and places a flower into one of their riot shields]
(This was originally posted on medium.com)

I have been struggling recently with where to direct my focus and what I could write about that would add something material to the ongoing debates on “AI”, technology, and politics. Thanks to my friend Randy Bias, whose post inspired me to follow up:

Screenshot of a Randy Bias post on LinkedIn: “I notice that a lot of the open source world gets uncomfortable when I start talking about how geopolitics is now creating challenges for open source. I don’t understand this. It’s provably true. Even things at the margins, like the Llama 4 release, which is technically not ‘open’ has a restriction against EU usage. We *must* talk about the geopolitical realities and look for solutions rather than letting us be driven by realtime political trends…”

This post triggered a few thoughts I’ve been having on the subject. Namely, that open source was born at a time that coincided with the apex of neoliberal thought, corresponding with free trade, borderless communication and collaboration, and other naive ideologies stemming from the old adage “information wants to be free”. Open source, along with its immediate forebear, free software, carried with it a techno-libertarian streak that proliferated throughout the movement. Within the open source umbrella there was a diverse array of factions: the original free software political movement, libertarian entrepreneurs and investors, anarcho-capitalists, political liberals and progressives, and a hodgepodge of many others who came around to see the value of faster collaboration enabled by the internet. There was significant overlap amongst the factions, and the coalition held as long as they shared mutual goals.

From 1998, when the term “open source” was coined, until the early 2010s, this coalition held strong, accomplishing much through robust collaboration between large tech companies, startup entrepreneurs, investors, independent developers, general purpose computer owners, and non-profit software foundations. This was the time when organizations like the Linux Foundation, the Apache Software Foundation, and the Eclipse Foundation found their footing and began organizing increasingly larger swaths of the industry around open source communities. The coalition started to fray in the early 2010s for a number of reasons, including the rise of cloud computing and smartphones, and the overall decline of free trade as a guiding principle shared by most mainstream political factions.

Open source grew in importance along with the world wide web, which was the other grand manifestation of the apex of neoliberal thought and the free trade era. These co-evolving movements, open source and the advocacy for the world wide web, were fueled by the belief, now debunked, that giving groups of people unfettered access to each other would result in a more educated public, greater understanding between groups, and a decline in conflicts and perhaps even war. The nation state, some thought, was starting to outlive its purpose and would soon slide into the dustbin of history. (side note: you have not lived until an open source community member unironically labels you a “statist”)

For a long time, open source participants happily continued down the path of borderless collaboration, falsely believing that the political earthquake that started in the mid-2010s would somehow leave them untouched. This naivety ignored several simultaneous trends that spelled the end of an era: Russian influence peddling; Brexit; the election of Trump; Chinese censorship, surveillance, and state-sponsored hacking; and a global resurgence of illiberal, authoritarian governments. But even if one could ignore all of those geopolitical trends and movements, the technology industry alone should have signaled the end of an era. The proliferation of cryptocurrency, the growth of “AI”, and the use of open source tools to build data exploitation schemes should have been obvious clues that the geopolitical world was crashing our party. This blithe ignorance came to a screeching halt when a Microsoft employee discovered that state-sponsored hackers had infiltrated an open source project, XZ Utils, installing a targeted backdoor three years after assuming ownership of the project.

One cannot overstate the impact of this event. For the first time, we had to actively monitor the threats from nation states wanting to exploit our open source communities to achieve geopolitical goals. The reactions were varied. After some time, the Linux Foundation finally admitted that it could no longer ignore the origins of its contributors, demoting the status of some Russian contributors. At the other end of the spectrum is Amanda Brock, who prefers to stay ensconced in her neoliberal bubble, unperturbed by the realities of our modern political landscape.

Amanda Brock, CEO of OpenUK, described the decision to remove Russian developers from patching the Linux kernel as “alarming”. In a LinkedIn post, she said: “At its heart, open source allows anyone to participate for any purpose. But as we have seen adoption of open source at scale in recent years, to the point where over 90% of the active codebases used by companies have dependencies on open source software, it’s understandable that concerns about risk have been raised by governments.”

One thing must be clear by now: we find ourselves knee-deep in a global conflict with fascist regimes that are united in their attempts to undermine free republics and democracies. As we speak, these regimes are looking to use open source communities and projects to accomplish their aims. They’ve done it with blockchains and cryptocurrencies. They’ve done it with malware. They’ve done it with the erosion of privacy and the unholy alliance of surveillance capitalism and state-sponsored surveillance. And they’re continuing to do it with the growth of the TESCREAL movement and the implementation of bias and bigotry through the mass adoption of AI tools. This is part and parcel of a plan to upend free thought and subjugate millions of people through the implementation of a techno-oligarchy.

I don’t doubt the utility of many of these tools — I myself use some of them. But I also cannot ignore how these data sets and tools have become beachheads for the world’s worst people. When Meta, Google, Microsoft, or other large tech companies announce their support of fascism and simultaneously release new AI models that don’t disclose their data sets or data origins, we cannot know for sure what biases have been embedded. The only way we could know for sure is if we could inspect the raw data sources themselves, as well as the training scripts that were run on those data sets. The fact that we don’t have that information for any of these popular AI models means that we find ourselves vulnerable to the aims of global conglomerates and the governments they work in tandem with. This is not where we want to be.

From where I stand, the way forward is clear: we must demand complete transparency of all data sources we use. We must demand complete transparency in how the models were trained on this data. To that end, I have been disappointed by almost every organization responsible for governing open source and AI ecosystems, from the Linux Foundation to the Open Source Initiative. None of them seem to truly understand the moment we are in, and none of them seem to be prepared for the consequences of inaction. While I do applaud the Linux Foundation’s application of scrutiny to core committers to its projects, they do seem to have missed the boat on the global fascist movement that threatens our very existence.

We have to demand that the organizations that represent us do better. We must demand that they recognize and meet the moment, because so far they have not.

[Illustration: a cool blue water drop calmly patting a fiery drop that looks angry]

I recently posted my thoughts on “AI Native” and automation, and how recent developments in the model and agent ecosystem were about to significantly disrupt our application lifecycle management tools, or what we have erroneously labeled “DevOps”. This entire post is based on that premise. If I turn out to be wrong, pretend like I never said it 🙂  Considering that a lot of open source time and effort goes into infrastructure tooling – Kubernetes, OpenTofu, etc – one might conclude that, because many of these tools are about to be disrupted and made obsolete, “OMG Open Source is over!!11!!1!!”. Except my conclusion is actually the opposite. I’ll explain.

Disclaimer: You cannot talk about “AI” without also mentioning that we have severe problems with inherent bias in models, along with the severe problems of misinformation, deepfakes, energy and water consumption, etc. It would be irresponsible of me to blog about “AI” without mentioning the significant downsides to the proliferation of these technologies. Please take a minute and familiarize yourself with DAIR and its founder, Dr. Timnit Gebru.

How Open Source Benefits from AI Native

As I mentioned previously, my “aha!” moment was when I realized that AI Native automation basically removes the need for tools like Terraform or OpenTofu. I concluded that the tools we have come to rely on will be replaced – slowly at first, then at an accelerating pace – by AI Native systems based on models and agents connected by endpoints and interfaces adhering to standard protocols like MCP. This new way of designing applications will become this generation’s three-tiered web architecture. As I said before, it will be models all the way down. Composite apps will almost certainly have embedded data models somewhere; the only question is to what extent.
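To make that concrete, here is a minimal sketch of what one of those agent-facing endpoints might look like, using the FastMCP helper from the MCP Python SDK. The server name and the create_bucket tool are hypothetical stand-ins; a real system would expose whatever provisioning primitives it actually supports.

```python
# Minimal, hypothetical MCP server sketch: instead of declaring
# infrastructure in Terraform/OpenTofu HCL, an agent calls tools like
# this one over the Model Context Protocol.
# Assumes the official `mcp` Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("infra-provisioner")

@mcp.tool()
def create_bucket(name: str, region: str = "us-east-1") -> str:
    """Provision an object-storage bucket (stubbed for illustration)."""
    # A real implementation would call a cloud API here and record
    # the result somewhere auditable.
    return f"created bucket {name} in {region}"

if __name__ == "__main__":
    mcp.run()  # serves the tool to any MCP-capable agent
```

The point is not this particular tool, but the shape of it: the provisioning logic lives behind a typed, discoverable interface that an agent can call, rather than in a declarative state file that a human curates.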

The reason that open source will benefit from this shift (do not say paradigm… do not say paradigm… do not say…) is the same reason that open source benefited from cloud. A long, long time ago, back in the stone age, say 2011–2016, there were people who claimed that the rise of SaaS, cloud, and serverless computing would spell the end of open source development, because “nobody needed access to source code anymore.” A lot of smart people made this claim – honest! But I’m sorry, it was dumb. If you squint really hard, you can sort of see how one might arrive at that conclusion. That is, if you made the erroneous assumption that none of these SaaS and cloud services ran on, ya know, software. To his credit, Jim Zemlin, the Linux Foundation czar, never fell for this and proclaimed that open source and cloud went together like “peanut butter and chocolate” – and he was right. Turns out, using SaaS and cloud-based apps meant you were using somebody else’s computer, which used – wait for it – software. And how did tech companies put together that software so efficiently and cheaply? That’s right – they built it all on open source software. The rise of cloud computing didn’t just continue or merely add to the development of open source, it supercharged it. One might say that without open source software, SaaS and cloud native apps could never have existed.

I know that history doesn’t repeat itself, per se, but that it rhymes. In the case of AI Native, there’s some awfully strong rhyming going on. As I have mentioned before, source code is not a particularly valuable commodity, and nothing in the development of AI Native will arrest that downward trend. In fact, it will only accelerate it. As I mentioned in my first essay on the topic of open source economics, “There is no Open Source Community”, the ubiquity of software development and the erosion of geographical borders drive the cost of producing software asymptotically toward zero, and the cost of sharing it to effectively $0. In an environment of massive heaps of software flying around the world hither and yon, there is a strong need for standard rules of engagement, hence the inherent usefulness of standard open source licensing, e.g. the Apache License or the GNU General Public License.

I have had two key insights on this subject recently. The first, from part 1 of this essay: devops tools are about to undergo severe disruption. The second is this: the emergence of bots and agents editing and writing software does not diminish the need for standard rules of engagement; it increases it. It just so happens that we now have 25+ years of standard rules of engagement for writing software in the form of the Open Source Definition and the licenses that adhere to its basic principles. Therefore, the only logical conclusion is that the production of open source code is about to accelerate. I did not say “more effective” or “higher quality” code, simply that there will be more of it.

InnerSource, Too

The same applies to InnerSource, the application of open source principles in an internal environment behind the firewall. If AI Native automation is about to take over devops tooling, then it stands to reason that the models and agents used internally will need rules of engagement for submitting pull/merge requests, fixing bugs, remediating vulnerabilities, and submitting drafts of new features. Unfortunately, whereas the external world is very familiar with open source rules of engagement, internal spaces have been playing catchup… slowly. Whereas much b2b software development has moved into open source spaces, large enterprises have instead invested millions of dollars in Agile and automation tooling while avoiding the implementation of open source collaboration for their engineers. I have a few guesses as to why that is, but regardless, companies will now have to accelerate their adoption of InnerSource rules to make up for 25 years of complacency. If they don’t, they’re in for a world of hurt, because everyone will either follow different sets of rules, or IT will clamp down and allow none of it, raising obstacles to the effectiveness of their agents. Think about an agent, interacting with MCP-based models, looking to push a new version of a YAML file into a code repo. But it can’t, because someone higher up decided that such activities were dangerous and never bothered to build a system of governance around them.
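To illustrate, here is a minimal, hypothetical sketch of what codified rules of engagement for agent contributors could look like. Nothing here is an existing InnerSource standard; the policy fields and the AgentPullRequest shape are assumptions made for the sake of the example.

```python
# Hypothetical sketch of InnerSource rules of engagement for
# agent-submitted changes. The field names are illustrative
# assumptions, not any real standard. Requires Python 3.10+.
from dataclasses import dataclass

@dataclass
class AgentPullRequest:
    author: str                      # e.g. "config-agent@infra"
    target_repo: str
    change_kind: str                 # "bugfix" | "vuln-remediation" | "feature-draft"
    human_reviewer: str | None = None

POLICY = {
    "agents_may_open_prs": True,
    "kinds_allowed": {"bugfix", "vuln-remediation", "feature-draft"},
    "require_human_review": True,    # an agent never merges its own work
}

def admit(pr: AgentPullRequest) -> bool:
    """Apply the rules of engagement before the PR enters review."""
    if not POLICY["agents_may_open_prs"]:
        return False
    if pr.change_kind not in POLICY["kinds_allowed"]:
        return False
    if POLICY["require_human_review"] and pr.human_reviewer is None:
        return False
    return True

# The YAML-pushing agent from the paragraph above gets a defined
# path in, instead of a blanket "no".
print(admit(AgentPullRequest(
    author="config-agent@infra",
    target_repo="internal/deploy-configs",
    change_kind="bugfix",
    human_reviewer="alice",
)))  # -> True
```

The design point is the same one open source licensing solved externally: contributors, human or not, know in advance what is allowed, who reviews, and how work gets in, instead of an improvised “no” from whoever happens to control the repo.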

Mark my words: the companies that make the best use of AI Native tools will be open source and InnerSource savvy.