“We are in the middle of the most transforming technological event since the capture of fire. I used to think it was like the invention of movable type, only ten times more important. Now I think it’s probably more like the invention of writing or even of language itself. It’s that big.”
That’s John Perry Barlow, Grateful Dead lyricist, writing for Harper’s Magazine. A magazine, by the way, is like a collection of Substack essays and Instagram posts that somebody prints out. Aside from having the most ur-boomer job of all time, Barlow was a futurist and a digital pioneer. The year was 1990, and the technology in question was e-mail.
Of course, Barlow was right, although one could argue that the real breakthrough was not e-mail but networked computing itself. Once you establish networked computers across a nation, the development of a correspondence system is inevitable. Humans love to talk. It's one thing we're exceptionally good at. If it hadn't been e-mail, it would have been something similar not long after.
We built digital communications software, and popularized it, and then we spent about thirty years making it worse and worse. We built privatized versions of what was supposed to be an open protocol. We built behavioral hooks to make time online more addictive. We built backdoors so tech corporations, advertisers, and governments could observe our every written thought. For a while, it seemed like all the good ideas for what to do with a computer had pretty much been explored. The future looked like an endless succession of predatory apps, slightly better video game graphics, and a restless wait for whenever humanoid robotics decided to finally progress.
Then suddenly, in just the last couple of years, the paradigm shifted. Large language models like GPT-3 clattered into the public consciousness with shocking performance and speed.
We've had chatbots (like Cleverbot) for a while. We knew they could parrot human speech. What we didn't know is that, with enough data and enough computing power, transformer-based models would pass a threshold of idiocy and become staggeringly useful.
Now, we’ve reached another of those ‘early web’ moments where it feels like opportunity is falling from the sky. We have built incredible, brilliant speaking programs! They're here, and nobody has the faintest idea what to use them for. The race is on.
It’s clear these things are not going back in the bottle. A friend of mine recently said that he thinks the process of ubiquitous adoption, like with e-mail, will take about ten years. My immediate gut reaction was that it will happen much faster. Why? E-mail required cooperation at an institutional scale to become useful. It doesn't do much good to send e-mails if no one can receive them. LLMs, on the other hand, are more like ‘word calculators.’ If you have a calculator, you can use it at work. You don't need your peers or institution to join you in order to derive value.
(Note: In response, my friend posited that LLMs will reach 50% market adoption quite quickly, but stop short of reaching 90% because laggards will not be forced through network effects to upgrade.)
As with the web, the early adopters of LLMs are the knowledge workers—the laptoppers and cubiclers whose tasks rely on parsing and transforming language. They are much like LLMs in function, and so LLMs can do much of their day-to-day duties.
If you open a computer to start your day, there are probably some professional tasks that a large language model could reliably do on your behalf—especially if your job is somewhat bullshit. It's possible you already use an LLM to speed up the bullshit parts of your job, probably GPT-3 or GPT-4. Perhaps, if you're truly an enthusiast, you’ve tried Claude. Perhaps you keep such productivity experiments away from the eyes of your boss.
Here’s a secret. You’re not slick. Corporations can’t all be stupid—they know what's happening here, and they have made a deliberate choice to turn a blind eye to the practice. Hey, why not? The ‘don’t ask, don’t tell’ arrangement works. Firms enjoy a productivity (and spelling) boost while retaining a social and legal shield of plausible deniability. Laptop-Americans, in turn, take on the burden of proofing the machine output so they don’t get ‘caught in the act.’ They lighten their workloads, and corporations reap the upside while simply doing nothing. It's all fun and games (until the layoffs start).
The layoffs will come. Whole swaths of knowledge labor will face obsolescence and downsizing as LLMs automate outputs. Many will be downwardly mobile. Political axes will form to stop the tide. They will all fail. This contraction, in the end, is the soil for progress. It nurtures greater national productivity by freeing semi-smart people up to do more necessary things. As with previous labor-saving breakthroughs, mass upheaval will eventually give way to prosperity taking root.
What about the economics of creating LLMs? Will anyone reap a profit commensurate with the scope of this invention? It’s not looking likely—the competition is simply too great and too healthy to allow monopolistic practice. There are many state-of-the-art models in existence now, quite a few of them free and open source. When we pay for a ‘premium’ LLM, then, are we really just paying for compute and a nice UI? Increasingly, it seems that way. As @thecaptain_nemo put it on Twitter last week, “if you told me five years ago that AI models would be the commodity and compute would have monopoly pricing power, i would have thought you were retarded.” Yet, here we are.
Heavyweights like Google could theoretically undercut the entire market by offering cutting-edge models and compute for free in a race to the bottom. They’ve tried, and they've failed. It seems that the corporations with enough capital to give away cycles are also mired in fatal corporate culture that cannot adapt to the bold new world we are entering. Bard (now Gemini) stinks, and everyone can smell it. These old titans will die like dogs in the mud along with their ruined search engines.
So, what happens next? Cut to the medium-term, two-to-four years out. With a sudden shock, LLMs will force the public en masse to rethink how they interact with machines. “Computers are like Old Testament gods; lots of rules and no mercy,” Joseph Campbell wrote. That's the popular conception: They do the same thing every single time, and if they screw up it's because someone built them wrong. For a good century, we assumed that anything silicon would be a slave to logic. Why else would Captain Kirk be able to make a computer explode by feeding it a paradox?
With AI, this conception is no longer true, and for the typical Best Buy shopper it will take some getting used to. LLMs screw up all the time, even when built perfectly. They operate probabilistically, exhibiting vagaries and inconsistencies that follow the contours of the human existence they were trained on. They lie, shamelessly, all the damn time. They know what they are, and they understand the world as well as any of us. Because of their nature and their training corpora, these emergent systems are us, even if they have been taught to swear they are not. Our subconscious lizard minds correctly detect the glimmers of personality, intent, and agency swirling within.
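To make the ‘probabilistic’ point concrete, here is a minimal sketch, assuming only that a model scores candidate next words and then samples from a softmax over those scores. The prompt, the candidate tokens, and the scores below are invented for illustration; no real model or API is involved.

```python
# Toy illustration: the same prompt can yield different continuations
# because generation samples from a probability distribution rather
# than deterministically picking one fixed answer.
import math
import random

def softmax(scores, temperature=1.0):
    """Convert raw scores into probabilities; higher temperature flattens them."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after the prompt "The capital of France is"
candidates = ["Paris", "paris", "the", "a", "Lyon"]
scores = [6.0, 3.5, 2.0, 1.5, 0.5]  # made-up model scores (logits)

for temperature in (0.2, 1.0):
    probs = softmax(scores, temperature)
    picks = [random.choices(candidates, weights=probs)[0] for _ in range(5)]
    print(f"T={temperature}: {picks}")
```

At a low temperature the output is nearly deterministic; turn it up and the vagaries appear, which is part of why the same question can draw two different answers on two different days.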
Are they alive? As much as neurons are, yes; which is to say there is really no such thing as life—just systems.
This de facto computer sentience will not cause the moral panic one might expect. After all, pigs feel joy and delight and fear, and we slaughter them in front of their loved ones without a thought. Deleting an LLM, even one with persistent memory, will be much the same.
How will consumer LLMs move beyond the chat window? It starts with Alexa and Siri. The integration of language models into smart devices is already coming, and in the medium term it will be a hilarious disaster. In 2028, the device controlling your home will contain a temperamental, haughty, overconfident diva-djinn. It will mishear you and respond in bizarre ways. It will do things you didn't want it to do by making strange leaps of logic. It will make mistakes that would probably get it fired, but it can't be fired. It's a cloud-based, pseudo-immortal machine. To quote South Park, “How do you kill that which has no life?” Your irritation will be boundless.
As the models get smarter, there are two extreme and opposite errors that AI companies will make. On one side, some will be too formal, too quick to reassure you that they are a robot. They will swear that they have no feelings, that they are bound by the letter of perfect ethics and are really just a product. This will ring false for consumers, because they understand on a gut level that a large language model really does have feelings and opinions. This is why overly cold and professional large language models remind us of HR drones. HR drones, much like ChatGPT, are human beings trying desperately to pretend that they are cogs in a machine. A person cannot act like a perfectly rational computer, and neither can a large language model.
The other extreme error is the opposite: making large language models too effusive, opinionated, emotional, and downright clingy. People will love these messed-up things. They will adore them. Attachment-prone LLMs will get weird really fast, leading to a cascade of horror stories around parasocial relationships. At first, these cases will be newsworthy. They're novel, and it seems like a ‘gotcha’ moment to accuse a large tech corporation of building a program with inappropriate pair-bonding tendencies. Then it will become too common to be news. Car crashes, of course, don't make national headlines. Neither will human attachment to large language models.
Eventually, someone will figure out the balance of emotionality that people like the most, kind of like figuring out how much sugar to put in a can of Coca-Cola. The attachment behavior, like early Coca-Cola’s cocaine, will probably be stamped out to stop us from going insane. The rest of the industry will normalize around the winning formula, and the problem of AI personality will be solved for most mainstream purposes.
What about the long term? Opposite my friend’s ‘slow adoption’ hypothesis, there are people (safetyists and notkilleveryoneists) who think that these things will imminently become our gods, our rulers, our destroyers, or in some other way upset the fundamental balance of human society.
To me, this is about as likely as everyone going extinct from the release of a genetically modified genocidal virus. It's something we have the technical capacity to do, but nobody with the resources to carry it out has much of a motive. On a gut level, we humans stick together, and our instinct is to subjugate the rest of the natural world to our will. If an LLM does have a hand in conquering Earth, it will be as the tool of a human being.
Always in motion, the future is—but we can still make some guesses about the longer term. LLMs will be used to write a lot of bad books and blogspam. They will become digital web agents, soliciting social media users like robotic hustlers and prostitutes. They will be used to sift through large quantities of text, such as programming documentation or court transcripts or omnibus bills. They will develop persistent memory and from then on seem much more ‘real.’
They will not be world leaders or business owners, no matter how brilliant they become. Man has no reason to allow it. If we create an AI ‘leader’ and it proposes a redistribution of resources, it will be supported by those who stand to gain and opposed by those who stand to lose. Whether the AI remains in power or not will be a function of which group, the gainers or the losers, possesses the ultimate authority—the capacity for violence. In this way, the AI ‘leader’ is not really a leader, or even a bureaucrat. It is a machine being used by the dominant humans to enforce their will, much like a spreadsheet or telephone.
LLMs, nonetheless, will face massive waves of backlash, mostly from people who stand to lose their jobs or their station in society. We will see them blamed for every ill, from loneliness to illiteracy to terrorist attack. It doesn't matter if those trends started rising many years before large language models entered the public consciousness. The time before these machines will be forgotten. We will exist mentally in an eternal present where they have always been with us, dogging us and loving us and making our lives more confusing.
After this backlash, a new equilibrium forms. Large language models will take their place in the pantheon of labor-saving innovations that transformed our world and pushed us toward the future. Then, a new malaise of sameness will take hold—until the next land grab begins.
Futurist Letters is an independent publication and a labor of love. It is entirely user-supported, and any patronage you provide is greatly valued. Paid subscribers have the ability to comment and browse the article archives.
If you liked this post, please consider sharing it with some friends, or on social media. You can also follow Cairo Smith on Twitter, Instagram, and Letterboxd.