I’ve written a number of articles over the years about open source software supply chains and some of the issues confronting open source sustainability. My supply chain advocacy culminated in this article imploring users to take control of their supply chains. I naively thought that by bringing attention to supply chain issues, more companies would step up to maintain the parts that were important to them. I first started bringing attention to this matter in November 2014, when I keynoted for the first time at a Linux Foundation event. Over the next three years, I continued to evolve my view of supply chains, settling on this view of supply chain “funnels”:

Diagram of a typical open source supply chain funnel, where upstream components are pulled into a distribution, packaged for widespread consumption, and finally made into a product

So, what has happened since I last published this work? On the plus side, lots of people are talking about open source supply chains! On the downside, no one is drawing the obvious conclusion: we need companies to step up on the maintenance of said software. In truth, this has always been the missing link. Unfortunately, what has happened instead is that we now have a number of security vendors generating lots of reports that show thousands of red lights flashing “danger! danger!” to instill fear in any CISO that open source software is going to be their undoing at any given moment. Instead of creating solutions to the supply chain problem, vendors have instead stepped in to scare the living daylights out of those assigned the thankless task of protecting their IT enterprises.

Securing Open Source Supply Chains: Hopeless?

Originally, Linux distributions signed on for the role of open source maintainers, but the world has evolved toward systems that embrace language ecosystems, with their ever-changing world of libraries, runtimes, and frameworks. Providing secure, reliable distributions that also track and incorporate the changes of overlaid language-specific package management has proved to be a challenge that distribution vendors have yet to adequately meet. The uneasy solution has been for distribution vendors to provide the platform, while everyone re-invents (poorly) different parts of the wheel for package management overlays specific to different languages. In short, it’s a mess without an obvious solution. It’s especially frustrating because the only way to solve the issue in the current environment would be for a single vendor to take over the commercial open source world and enforce by fiat a single package management system. But that’s frankly way too much power to entrust to a single organization. The organizations designed to provide neutral venues for open source communities, foundations, have also not stepped in to solve the core issues of sustainability or the lack of package management standardization. There have been some noteworthy efforts that have made a positive impact, but not to the extent needed. Everyone is still wondering why certain critical components are not adequately maintained and funded, and everyone is still trying to understand how to make language-specific package ecosystems more resilient and able to withstand attacks from bad-faith users and developers. (Note: sometimes the call *is* coming from inside the house.)

But is the supply chain situation hopeless? Not at all. Despite the inability to solve the larger problems, the fact is that every milestone of progress brings us a step closer to more secure ecosystems and supply chains. Steps taken by multiple languages to institute MFA requirements for package maintainers, to use but one example, result in substantial positive impacts. These simple, relatively low-cost actions cover the basics that have long been missing in the mission to secure supply chains. But that brings us to a fundamental issue yet to be addressed: whose job is it to make supply chains more secure and resilient?

I Am Not Your Open Source Supply Chain

One of the better essays on this subject was written by Thomas Depierre, titled “I Am Not a Supplier”. While the title is a bit cheeky and “clickbait-y” (I mean, you are a supplier, whether you like it or not), he does make a very pertinent – and often overlooked – point: developers who decide to release code have absolutely no relationship with commercial users or technology vendors, especially if they offer no commercial support of that software. As Depierre notes, the software is provided “as is” with no warranty.

Which brings us back to the fundamental question: if not the maintainers, whose responsibility are open source supply chains?

The 10% Rule

I would propose the following solution: if you depend on open source software, you have an obligation to contribute to its sustainability. That means if you sell any product that uses open source software, or if your enterprise depends on the use of open source software, then you have signed on to maintain that software. This is the missing link. If you use it, you’re responsible for it. In all, I would suggest allocating 10% of your engineering spend to upstream open source maintenance, and I’ll show how it won’t break the budget. There are a number of ways to do this in a sustainable way that leads to higher productivity and better software:

  • Hire a maintainer for software you depend on. This is a brute force method, but it can be valuable for a particularly critical piece of software.
  • Fund projects dedicated to open source sustainability. There are a number of them, many run out of larger software foundations, e.g. The Linux Foundation, the ASF, Eclipse, the Python Software Foundation, and others.
  • Pay technology vendors who responsibly contribute to upstream projects. If your vendors don’t seem to support the upstream sources for their software, you may want to rethink your procurement strategies.
  • Add a sustainability clause to your Software Bill of Materials (SBOM) requirements. Similar to the bullet above, if you start requiring your vendors to disclose their SBOMs, add a requirement that they contribute to the sustainability of the projects they build into their products.
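That last clause could even be checked mechanically. As a minimal sketch in Python, assuming a CycloneDX-style JSON SBOM and a made-up `sustainability:upstream-contribution` property name that you and your vendors would have to agree on (it is not part of any SBOM standard), a procurement team could flag components with no declared upstream contribution:

```python
# Sketch: flag components in a CycloneDX-style SBOM dict that carry no
# declared upstream-sustainability property. The property name below is a
# hypothetical convention, not part of the CycloneDX or SPDX specs.
SUSTAINABILITY_PROP = "sustainability:upstream-contribution"

def components_missing_sustainability(sbom: dict) -> list[str]:
    """Return the names of components lacking the agreed-on property."""
    missing = []
    for component in sbom.get("components", []):
        properties = component.get("properties", [])
        if not any(p.get("name") == SUSTAINABILITY_PROP for p in properties):
            missing.append(component.get("name", "<unnamed>"))
    return missing
```

Run against a vendor’s disclosed SBOM, the output becomes a short procurement checklist rather than another wall of flashing red lights.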

There is, of course, still a need to coordinate and maximize the impact. Every critical piece of software infrastructure should be accounted for on a sustainability metric. Ideally, software foundations will step up as the coordinators, and I see some progress through the Alpha and Omega project. It doesn’t quite reach the scale needed, but it is a step in the right direction.

If you work for a company that uses a lot of open source software (and chances are that you do), you may want to start asking questions about whether your employers are doing their part. If you do a good job of sustaining open source software and hardening your supply chains, you can spend a lot less on “security” software and services that generate reports showing thousands of problems. By coordinating with communities and ecosystems at large, you can help solve the problem at the source and stop paying ambulance chasers that capitalize on the fear. That’s why spending 10% of your IT budget on open source sustainability will be budget neutral for the first two years and deliver cost savings beyond that. Additionally, your developers will learn how to maintain open source software and collaborate upstream, yielding qualitative benefits in the form of greater technology innovation.
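To see the shape of that budget-neutral claim, here is a back-of-the-envelope sketch with purely hypothetical, illustrative numbers (the figures are invented for the example; only the structure of the trade-off matters):

```python
# Hypothetical illustration of the 10% rule: sustainability spend offset by
# dropped security-scanner subscriptions and avoided incident costs.
# All dollar figures below are invented for illustration.
engineering_budget = 10_000_000                    # annual engineering spend
sustainability_spend = 0.10 * engineering_budget   # the proposed 10%

security_vendor_savings = 600_000   # "red-light report" tooling you can drop
avoided_incident_costs = 400_000    # fewer fires in better-maintained deps

net_cost = sustainability_spend - (security_vendor_savings + avoided_incident_costs)
print(net_cost)  # 0.0 -> budget neutral under these assumed offsets
```

If the offsets cover the spend, the 10% is budget neutral; anything recovered beyond them is the cost savings the paragraph above describes.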

Cory Doctorow published an excellent essay in Locus about the AI bubble and what will happen when (not if) it goes “bloop”, as bubbles are wont to do. Namely, the money in the AI ecosystem is only sustainable if it allows programs to replace people, and due to the prevalence of high-risk applications, that seems highly unlikely. I think he’s absolutely right – read that first.

Ok, done? Cool…

Reading Cory’s essay jogged my memory about some experiences I’ve had over my tech career. The first thought that came to mind was: haven’t we been through this before? Yes, we have. Several times. And each time we learn the same lesson the hard way: paradigm-shifting tech transformations do not, in fact, result in large reductions of workers. Sometimes there may be displacement and reallocation, but never reductions. No, large reductions happen when businesses decide it’s time to trim across the board or exit certain businesses altogether.

One particular moment from my career came to mind. I was a product manager at a large storage vendor. We had assembled a small group of large-company CTOs and were telling them about our latest roadmap for storage management automation. We had launched an automation product three years prior, and we wanted to assure them that we were committed to continuing our investment (spoiler alert: we were not, in fact, committed to that). So we went through the song and dance about all the great new things we were bringing to the product suite, about how it would solve problems and help our customers be more productive.

I’ll never forget one exchange with a particular CTO. He began by carefully choosing his words, mindful of their impact, but he finally said what was really on his mind, and likely what was on the minds of the rest of the group as well: “Will this let me fire some guys?” I was unprepared for this question. We had just spent the last two hours talking about increased productivity and efficiency from automation, so he drew what seemed to him a very logical conclusion: if the product was as efficient and productive as we claimed, then surely he would be able to reduce staff. We hemmed and hawed and finally admitted that, no, we could not guarantee that it would, in his words, let him “fire some guys.” It was as if the air completely left the room. Whatever we said after that didn’t really matter, because it wouldn’t be the magic bullet that let everyone fire a bunch of staff.

This is a lesson that we keep learning and unlearning, over and over again. Remember cloud? Remember how that spelled the end of sysadmins and half of IT staff? Yeah, they’re still here, but their job titles have changed. Just because you moved things to the cloud doesn’t mean you can be hands-off – you still need people to manage things. Remember Uber? None of these gazillion-dollar-swallowing enterprises or sub-industries of tech have generated anywhere near their original perceived value. And don’t even get me started on crypto, which never had any actual value.

Cory’s point is the same: do you really think hospitals are going to fire their radiologists and put all patient screening and lab results in the hands of a machine learning (ahem: advanced pattern recognition) bot? Of course not. And so, a hospital administrator will ask, what’s the point? Do you really believe that hospitals are going to add tens or even hundreds of thousands of dollars to their annual budget to have both bots AND people? Don’t be absurd. They’ll be happy to make use of some free database provided by bots, but the humans in the loop will remain. Cory’s other example was self-driving cars. Do you think taxi or other transportation companies are going to pay both drivers (remote or otherwise) and bots for transit services? Be serious. And yet, that’s the only logical outcome, because there is no universe where humans will be taken out of this very high-risk loop.

The problem is that none of this justifies the billions of dollars being invested in this space. End user companies will happily make use of free tools, keep their humans, and spend as little as possible on tech. That part will not change. So who, then, is going to justify the scope of current investments? No one. That’s why it’s a bubble. Cory’s right. The only thing that remains to be seen is who gets harmed in the aftermath, and how badly.

The intended buyers of this technology are going to ask the same question as that CTO from years ago: will it let me fire some guys? The answer is no. It is always no.

Those of us who have been around the block in the high tech space can point to a number of moments when the hype went way beyond the actual value. The worst example was probably crypto and NFTs: slot machines built on a casino floor where the house definitely has the upper hand. The world of AI is the successor to crypto, with one very important difference: the tools that have been lumped under “AI” are actually useful, or potentially useful. But that is also part of the problem: because there are some well-known use cases, there’s a tendency to exaggerate the usefulness of the technology, and to inflate its possibilities to the point of delusion.

Let’s start with the first problem: the term itself, “Artificial Intelligence”. It is neither “artificial” nor “intelligent”. What it actually is, is advanced pattern recognition and language automation. For that insight, I credit Dr. Emily M. Bender, professor of linguistics and computational linguistics at the University of Washington. Labeling language automation tools as “AI” invites the worst comparisons to dystopian sci-fi, but it is also, frankly, just wrong. No large language model is remotely sentient. None of the language automation tools are paving the way to Artificial General Intelligence (AGI) – the type of technology that “wakes up” and… makes us breakfast? provides tips on the betterment of humanity? decides humans have had their day and builds Skynet? All of these scenarios are a bit silly, and the hype beasts’ concern-trolling over implausible outcomes has become most wearisome.

While we were distracted by the dystopia vs. utopia non-debate, real harms have been perpetrated against real humans with these tools. And with the increasing compute power behind these language models, the degree of potential harm grows with each passing day. Real harms in the form of disinformation, bias, the devaluing of creative works, and a growing inability to retract or prevent any of them. Add to that the growing body of research showing that LLMs are vulnerable to data poisoning and to reverse engineering of their training data, and it’s clear that we haven’t quite thought out the ramifications of relying on these tools.

I’ll wrap up this blog post by (hopefully) stating the obvious: LLMs are obviously here to stay and can already do a number of useful things. I know I look forward to having an LLM fulfill my more mundane, rote tasks. But it’s crucial that we don’t anthropomorphize LLMs and ascribe to them characteristics that are definitely not there, however much we might wish them to be. It’s equally important not to buy into the dystopian doomerism about rogue AI, which is its own form of egregious hype. The more we worry about implausible hypotheticals, the more we risk missing the danger that’s here today. Humans were already good at institutionalizing bias and spreading misinformation. Now, with LLMs, we can do it faster and at a much larger scale. Buckle up!

My guiding lights on this topic are the amazing people of the DAIR Institute, led by founder Dr. Timnit Gebru. Other influences are Kim Crayton and the aforementioned Dr. Bender. Read them today – don’t believe the hype.