Those of us who have been around the block in the high tech space can point to a number of moments where the hype went way beyond the actual value. The worst example was probably crypto and NFTs: slot machines installed in a casino where the house definitely has the upper hand. The world of AI is the successor to crypto, with one very important difference: the tools that have been lumped under “AI” are actually useful, or at least potentially useful. But that is also part of the problem: because there are some well-known use cases, there’s a tendency to exaggerate the technology’s usefulness, and to inflate its possibilities to the point of delusion.
Let’s start with the first problem: the term itself, “Artificial Intelligence”. It is neither “artificial” nor “intelligent”. What it actually is: advanced pattern recognition and language automation. For that insight, I credit Dr. Emily M. Bender, professor of linguistics and computational linguistics at the University of Washington. Labeling language automation tools as “AI” invites the worst comparisons to dystopian sci-fi, and it is also, frankly, just wrong. No large language model is remotely sentient. None of the language automation tools are paving the way to Artificial General Intelligence (AGI) – the type of technology that “wakes up” and… makes us breakfast? provides tips on the betterment of humanity? decides humans have had their day and builds Skynet? All of these scenarios are a bit silly, and the hype beasts’ concern-trolling over implausible outcomes has become most wearisome.
While we were distracted by the dystopia-vs-utopia non-debate, real harms have been perpetrated against real humans with these tools, and with the increasing compute power behind these language models, the degree of potential harm grows with each passing day: disinformation, bias, the devaluing of creative works, and a growing inability to retract or prevent any of it. Add to that the growing body of research showing that LLMs are vulnerable to data poisoning and to reverse engineering of their training data, and it’s clear we haven’t fully thought through the ramifications of relying on these tools.
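To make that extraction risk concrete, here’s a toy sketch in Python. This is emphatically not a real LLM, and the “secret” string is invented for illustration, but it shows the underlying mechanism: a model that has memorized a unique string from its training corpus will happily regurgitate it to anyone who guesses the prefix.

```python
# Toy illustration (NOT a real LLM): a character-level model memorizes a
# unique string from its "training data", and an attacker who knows only
# a short prefix can extract the rest verbatim. The secret is made up.
from collections import defaultdict

training_data = (
    "the cat sat on the mat. "
    "api_key=sk-EXAMPLE-SECRET-1234. "  # hypothetical secret in the corpus
    "the dog sat on the log. "
)

# "Train": record which character follows each 8-character context.
context_len = 8
model = defaultdict(list)
for i in range(len(training_data) - context_len):
    ctx = training_data[i : i + context_len]
    model[ctx].append(training_data[i + context_len])

def generate(prefix: str, n: int = 40) -> str:
    """Continue the prefix one character at a time using stored contexts."""
    out = prefix
    for _ in range(n):
        nxt = model.get(out[-context_len:])
        if not nxt:
            break
        out += nxt[0]  # contexts seen only once are reproduced exactly
    return out

# The "attacker" prompts with a plausible prefix and recovers the secret.
print(generate("api_key="))  # -> api_key=sk-EXAMPLE-SECRET-1234. the dog ...
```

Real extraction attacks on LLMs are far more sophisticated than this, but the failure mode is the same: rare sequences in the training data get memorized rather than generalized, and generation can surface them.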
I’ll wrap up this blog post by (hopefully) stating the obvious: LLMs are here to stay and can already do a number of useful things. I know I look forward to having an LLM handle my more mundane, rote tasks. But it’s crucial that we don’t anthropomorphize LLMs and ascribe to them characteristics that are simply not there, however much we might wish them to be. It’s equally important not to buy into the dystopian doomerism about rogue AI, which is its own form of egregious hype. The more we worry about implausible hypotheticals, the more we risk missing the danger that’s here today. Humans were already good at institutionalizing bias and spreading misinformation; with LLMs, we can now do it faster and at a much larger scale. Buckle up!
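For the curious, here’s the flavor of rote task I have in mind, as a minimal sketch using the official `openai` Python client (v1+). The model name, prompt, and `summarize_ticket` helper are all illustrative choices of mine, not a recommendation; any provider’s API would work similarly.

```python
# A minimal sketch of offloading a rote task to an LLM.
# Requires: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    """Boil a rambling support ticket down to a one-line summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarize the ticket in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_ticket("Hi, so I clicked the thing and then the page went blank..."))
```

Note that even here, every caveat above applies: the output is pattern-matched text, not understanding, so it still needs a human check before it goes anywhere that matters.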
My guiding lights on this topic are the amazing people of the DAIR Institute, led by founder Dr. Timnit Gebru. Other influences are Kim Crayton and the aforementioned Dr. Bender. Read them today – don’t believe the hype.