Governing by apocalypse
The sudden wave of news stories, NGOs, petitions, and social media posts insisting on the "existential threat" of AI raises many questions. Here are some answers.
I’ve written a long essay explaining why AI is a gift and not a threat. But you can't explain in a few pages how a large language model is changing the way humanity interacts with its own knowledge. You have to lay the groundwork - define terms, look at historical development, present the inner workings of its various parts. To present new ideas, you also need to explain them in detail, and that is inevitably long. But I can explain in a much shorter text why the "existential" AI scare about machines wiping out the human race is just a scare tactic designed to allow the techno-political elites to seize control of the field.
Cries that scientific evolution will end the human race are as old as science itself. Science has always been perceived as "demonic" (and there is some truth in that, but it is not relevant to our topic here). No more than ten years ago, there was an intense (and supposedly serious) discussion in the media about how the collisions at the LHC at CERN in Switzerland would create a black hole and end the world. The fact is that apocalypses still sell news. They're inevitably a popular subject for endless fantasies, mainly because they involve the mother of all fears - the fear of death.
In a masterstroke, the establishment turned this on its head during COVID. Suddenly, nature was the killer and science the savior. But the potential apocalypse sold really well. Especially vaccines. From a public health perspective, it was a disaster that we have yet to see in full. But from a control perspective, it was amazing. It was proof that proper control of the media, using a serious scare tactic, can easily turn billions of people into scared sheep, with no will to do anything but follow their governments.
It helped that a successful scare campaign to turn a political opponent, Trump, into a scarecrow had just ended. It doesn't matter that they used lies; the "Russia collusion" scare was effective. They valued effectiveness more than truth1.
But now the sudden threat of AI as a tool to empower individuals and not governments has caught them off guard. They were slow to react because they didn't see the implications when OpenAI released its first versions of chatGPT. My guess is that no one saw them, and very few see them today. But when they did, they really flipped out.
What followed was an endless, sudden stream of articles about AI ending the human race popping up everywhere2, along with petitions calling for everybody to stop3. None of them clearly explains how AI will accomplish this, or why. All of them disingenuously conflate AI and AGI (Artificial General Intelligence), the latter of which is non-existent, useless to humanity, and most likely impossible to build. And even though some of the first ones were proven to have manipulated signatures, NGOs keep popping up with petitions signed by thousands of "luminaries" urging us to "stop until it is too late". One of these "petitions" is just a single sentence that looks like it was taken straight out of a government "scare tactics" manual, mashing up AI, COVID and Ukraine:
Incredible as it may seem, this is signed by none other than the CEO of OpenAI, the company that brought us chatGPT, Sam Altman! The same Sam Altman who, earlier on, in his Senate testimony on AI, called for government regulation of the field, making him the first private-company executive to ask the government to restrict his own field! And yet the company is called "OpenAI"!
Anyone who has even skimmed the Twitter Files articles can see that this reeks of government manipulation. It's the same old mechanism of running a scare campaign, like the ones of the last five years: rolling made-up news across the media, creating the illusion that everyone is behind it.
To understand what scares them so much that they want to scare the s**t out of us, a simple example will help. Suppose tomorrow an AI company releases an open-source LLM with the following capabilities:
It can run on any desktop computer with a powerful graphics card (under $1000 in total)
It has a dedicated interface that allows you to define what subject or field you want it to "ingest" from the Internet.
It has a simple interface for training, which makes it easy to guide it in the initial phase, and even suggests a mechanism for collaborative training (where similar interests are grouped together, like on Quora).
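Purely as an illustration, the "define a subject, then ingest" part of the workflow above might look like the sketch below. Every name in it is invented for this example; the web crawl is replaced by a small in-memory document set so the snippet stays self-contained, and a real ingestion pipeline would use embeddings rather than keyword counts.

```python
# Hypothetical sketch of the "pick a subject, ingest matching texts" idea
# from the essay. No real local-LLM product is assumed; the crawl is
# simulated with an in-memory list of documents.

TOPIC_KEYWORDS = {"genetics": {"gene", "genome", "dna", "allele", "mutation"}}

def relevant(doc: str, topic: str, min_hits: int = 2) -> bool:
    """Crude relevance filter: keep a document if it mentions enough
    topic keywords. A real pipeline would use embedding similarity."""
    words = set(doc.lower().split())
    return len(words & TOPIC_KEYWORDS[topic]) >= min_hits

def build_corpus(docs: list[str], topic: str) -> list[str]:
    """Select the documents worth feeding to a local fine-tuning run."""
    return [d for d in docs if relevant(d, topic)]

if __name__ == "__main__":
    crawled = [
        "The genome encodes each gene as a stretch of DNA.",
        "Stock prices fell sharply on Tuesday.",
        "A mutation in one allele can alter the gene product.",
    ]
    corpus = build_corpus(crawled, "genetics")
    print(len(corpus))  # the two genetics documents survive the filter
```

The point of the sketch is only that topic-scoped ingestion is a simple loop over whatever the crawler returns; the hard part, as the essay notes, is the training phase that follows.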
Say you're interested in genetics: you buy one of these machines4 and ask it to ingest all the genetics texts it can find on the web. If the training phase is effective (this is the hardest part), you'll end up with a machine that can answer almost anything about genetics.
It's like having chatGPT trained on almost the entire field of genetics, but in your house, with no one able to control what's in it or what you learn from it, and with no "safety" mechanisms behind it.
Keep in mind that such a machine crawling the Internet will have no problem crawling the Onion Network and even the Deep Web, ingesting practically all scientific texts out there. The amount of power such a machine will give you is unbelievable5. In fact, it is to be expected that theoretical scientific research outside of universities and research centers will explode.
And here are just two consequences:
Copyright may disappear. There are no citations, no specific author's text that the machine returns in answer to your questions. Everything it ingests is mashed up, and knowledge is extracted from it. It's like looking for grapes in a glass of wine.
We won't need Google or any other search engine except for mundane tasks. In fact, you can have your own personal search engine, and all the information about you that Google sells for good money today may vanish.
Most importantly, governments are in danger of completely losing control of the information and manipulation institutions they have painstakingly built over the past decade. There is no "narrative control" possible in a world full of personal LLMs that anyone can train on anything, because governments can only control truth by hiding it in a deluge of false information, while an LLM will become really good at finding the knowledge behind mountains of information.
Suppose you ask such a machine to ingest all information everywhere about everything related to the Biden administration. And then you ask the machine to look for lies and contradictions on a given topic. No matter how good the propaganda machine is at forcibly erasing the truth, the lies are there. And your machine can simply point you to them. For an LLM ingesting all the Twitter files, all the painful work Taibbi, Shellenberger and the others have been doing for months will be a piece of cake. Most likely, an LLM will be able to point out connections between people and issues that no one has seen before in the respective files.
This machine has no will, no conscience, and it doesn't understand anything. (For the explanation of why this is true, you must read the long essay.) Because of that, it's not a threat to anyone except governments and techno-elites. They are the only ones who risk losing something: power and money.
Knowledge is power, and what LLMs (and, to a certain extent, any generative AI application) offer is fast, effective access to the whole of human knowledge, immensely increasing our individual capacity to see the full picture of knowledge and not just its pieces. And for the first time in history, this power is truly available to the individual.
The main problem governments have is that they have become really good at controlling information, but they have no way to control knowledge. An LLM is a machine for extracting knowledge from any set of information, giving this power to the ordinary individual and taking it out of the hands of the elites. And the elites are freaking out, creating a false "existential" threat while they try to figure out how to prevent this from happening.
To some extent, the war in Ukraine is an opportunity to perfect this. There is only one argument for escalating the war to be found in all the official propaganda: if Russia wins, civilization in Europe will somehow end. And there is only one view you can see in an amazing number of Western media outlets: this is a fight for freedom, and if Ukraine loses, our freedom ends! That the propaganda machine is not as efficient as it was years ago is explained by the fact that people are not sheep; thanks to Musk and others, they have started to see the blatant lies behind all the scare tactics. But the mechanism is still effective: they remain in control, doing what they want, with no real public opposition.
https://www.google.com/search?client=safari&rls=en&q=dangers+of+AI&ie=UTF-8&oe=UTF-8
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
These are already popping up out there as open-source applications; in fact, you may not need to spend any money except on good hardware.
While it is true that, for now, a really large language model needs memory and computing power beyond what a desktop computer offers, this will no longer be the case in a couple of years. Few realize that all Apple chips are built with a lot of neural computing power, or that the servers OpenAI uses are commercial grade, accessible to any average company. Given the normal progression of computing power, in a couple of years your phone may do what OpenAI's servers are doing today.
I'm not so sure. LLMs don't have logical reasoning as we understand it, and they have no grounding in the external world. As such, LLMs are well known to make up plausible nonsense (a.k.a. bullshit) at the drop of a hat, and if you feed them sufficient amounts of nonsense, they will generate nonsense back at you.