[Illustration: a cool blue water drop calmly patting an angry, fiery drop]

I recently posted my thoughts on “AI Native” and automation, and how recent developments in the model and agent ecosystem were about to significantly disrupt our application lifecycle management tools, or what we have erroneously labeled “DevOps”. This entire post is based on that premise. If I turn out to be wrong, pretend like I never said it 🙂  Considering that a lot of open source time and effort goes into infrastructure tooling – Kubernetes, OpenTofu, etc. – one might conclude that, because many of these tools are about to be disrupted and made obsolete, “OMG Open Source is over!!11!!1!!”. Except my conclusion is actually the opposite of that. I’ll explain.

Disclaimer: You cannot talk about “AI” without also mentioning that we have severe problems with inherent bias in models, along with the severe problems of misinformation, deepfakes, energy and water consumption, etc. It would be irresponsible of me to blog about “AI” without mentioning the significant downsides to the proliferation of these technologies. Please take a minute and familiarize yourself with DAIR and its founder, Dr. Timnit Gebru.

How Open Source Benefits from AI Native

As I mentioned previously, my “aha!” moment was when I realized that AI Native automation basically removes the need for tools like Terraform or OpenTofu. I concluded that tools that we have come to rely on will be replaced – slowly at first, and then accelerating – by AI Native systems based on models and agents connected by endpoints and interfaces adhering to standard protocols like MCP. This new way of designing applications will become this generation’s 3-tiered web architecture. As I said before, it will be models all the way down. Composite apps will almost certainly have embedded data models somewhere, the only question is to what extent.
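To make “endpoints and interfaces adhering to standard protocols” a little more concrete: in MCP, a server advertises its capabilities as named tools with JSON Schema inputs, and any compliant agent can discover and call them. Here is a rough sketch of what such a tool advertisement looks like – rendered as YAML for readability, though MCP itself speaks JSON-RPC (JSON) on the wire; the tool name and its fields are hypothetical, invented for illustration:

```yaml
# Hypothetical MCP-style tool advertisement (illustrative only).
# A real MCP server returns this shape (as JSON) in response to a
# tools/list request; a compliant agent can then invoke the tool.
tools:
  - name: apply_infra_change        # hypothetical tool name
    description: >
      Propose a change to the cluster's desired state. The server
      validates the change and opens a merge request for review.
    inputSchema:                    # JSON Schema describing valid input
      type: object
      properties:
        manifest:
          type: string
          description: The proposed new manifest, as a string
        reason:
          type: string
          description: Why the agent wants this change
      required: [manifest, reason]
```

The point of the standard protocol is exactly the point of standard licensing: any agent that speaks MCP can use this interface without a bespoke integration.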

The reason that open source will benefit from this shift (do not say paradigm… do not say paradigm… do not say…) is the same reason that open source benefited from cloud. A long long time ago, back in the stone age, say 2011 – 2016, there were people who claimed that the rise of SaaS, cloud, and serverless computing would spell the end of open source development, because “nobody needed access to source code anymore.” A lot of smart people made this claim – honest! But I’m sorry, it was dumb. If you squint really hard, you can sort of see how one might arrive at that conclusion. That is, if you made the erroneous assumption that none of these SaaS and cloud services ran on, ya know, software. To his credit, Jim Zemlin, the Linux Foundation czar, never fell for this and proclaimed that open source and cloud went together like “peanut butter and chocolate” – and he was right. Turns out, using SaaS and cloud-based apps meant you were using somebody else’s computer, which ran – wait for it – software. And how did tech companies put together that software so efficiently and cheaply? That’s right – they built it all on open source software. The rise of cloud computing didn’t merely sustain the development of open source, it supercharged it. One might say that without open source software, SaaS and cloud native apps could never have existed.

I know that history doesn’t repeat itself, per se, but that it rhymes. In the case of AI Native, there’s some awfully strong rhyming going on. As I have mentioned before, source code is not a particularly valuable commodity, and nothing in the development of AI Native will arrest that downward trend. In fact, it will only accelerate it. As I mentioned in my first essay on the topic of open source economics, There is no Open Source Community, the ubiquity of software development and the erosion of geographical borders have pushed the cost of producing software asymptotically toward zero – and the cost of sharing it is already effectively $0. In an environment of massive heaps of software flying around the world hither and yon, there is a strong need for standard rules of engagement, hence the inherent usefulness of standard open source licensing, e.g. the Apache License or the GNU General Public License.

There are 2 key insights that I’ve had recently on this subject. The first was in part 1 of this essay: DevOps tools are about to undergo severe disruption. The second is this: the emergence of bots and agents editing and writing software does not diminish the need for standard rules of engagement; it increases it. It just so happens that we now have 25+ years of standard rules of engagement for writing software, in the form of the Open Source Definition and the licenses that adhere to its basic principles. Therefore, the only logical conclusion is that the production of open source code is about to accelerate. I did not say “more effective” or “higher quality” code, simply that there will be more of it.

InnerSource, Too

The same applies to InnerSource, the application of open source principles in an internal environment behind the firewall. If AI Native automation is about to take over DevOps tooling, then it stands to reason that the models and agents used internally will need rules of engagement for submitting pull/merge requests, fixing bugs, remediating vulnerabilities, and submitting drafts of new features. Unfortunately, whereas the external world is very familiar with open source rules of engagement, internal spaces have been playing catchup… slowly. Whereas much of b2b software development has occurred in open source spaces, large enterprises have instead invested millions of dollars in Agile and automation tooling while avoiding the implementation of open source collaboration for their engineers. I have a few guesses as to why that is, but regardless, companies will now have to accelerate their adoption of InnerSource rules to make up for 25 years of complacency. If they don’t, they’re in for a world of hurt, because everyone will either follow different sets of rules, or IT will clamp down and allow none of it, raising obstacles to the effectiveness of their agents. Think of an agent, interacting with MCP-based models, looking to push a new version of a YAML file into a code repo. But it can’t, because someone higher up decided that such activities were dangerous and never bothered to build a system of governance around them.
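Those rules of engagement don’t have to be heavyweight. Even a small, machine-readable policy file in each repo would tell an agent (and a human) what it may and may not do. A minimal sketch – the file name, keys, and values here are all hypothetical, invented for illustration:

```yaml
# .innersource-policy.yml — hypothetical format, for illustration only.
# A machine-readable statement of this repo's rules of engagement,
# so both human and agent contributors know what is allowed.
contributions:
  agents_may_open_merge_requests: true
  allowed_changes:
    - bugfix
    - vulnerability-remediation
    - feature-draft
  required:
    human_review: true          # no agent-authored change merges unreviewed
    linked_issue: true          # every merge request must reference an issue
    provenance_label: agent-authored
license: Apache-2.0             # the standard rules of engagement, inherited
```

Whether it lives in a file like this or in the forge’s configuration, the point stands: if the rules aren’t written down somewhere an agent can read them, the agent either guesses or gets blocked.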

Mark my words: the companies that make the best use of AI Native tools will be open source and InnerSource savvy.
