The Great Artificial Fear
The recent launch of several AI tools that can generate impressive images and surprisingly realistic dialogue has fired the imagination of many people. Even smart, educated individuals are amazed by the average-at-best answers these tools provide, and Elon Musk has even expressed concern that machines may soon take over:
However, many people seem to have forgotten what they are actually talking about. One notable example is Chris Anderson, the head of TED, who became ecstatic over a mediocre, corporate-BS-style answer:
But wait, what is “intelligence”?
According to Merriam-Webster, "intelligence" is defined as:
(1) the ability to learn or understand or to deal with new or trying situations, or the skilled use of reason;
(2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests).
What we currently refer to as "Artificial Intelligence" covers only a small part of this broader definition of intelligence. The software programs used for AI are very good at pattern recognition (patterns exist not only in images, but in virtually any kind of language), and what they do is apply a set of initial "learning" rules to large amounts of "training material" provided by researchers, using feedback from those researchers to improve their performance. As ChatGPT itself explains:
(In fact, there is a specific term, Artificial General Intelligence or AGI, for what a machine would need to display in order to exhibit true human-like intelligence.)
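The "learn patterns, then improve via feedback" loop described above can be sketched in a hypothetical, vastly simplified toy form. Everything here (the function names, the counting approach, the feedback signal) is my own illustration, not how any real system is built; the point is only that "learning" means strengthening statistical patterns, not understanding them:

```python
from collections import defaultdict

def train(corpus):
    """'Training': just count which word follows which in the material."""
    counts = defaultdict(lambda: defaultdict(float))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1.0
    return counts

def predict(counts, word):
    """Reply with the most frequently seen continuation."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def feedback(counts, word, reply, reward):
    """Researchers' feedback: reinforce (or penalize) a specific pattern."""
    counts[word][reply] += reward

# The 'model' knows nothing about cats; it only knows the counts.
model = train("the cat sat on the mat and the cat slept")
print(predict(model, "cat"))        # most common continuation so far
feedback(model, "cat", "slept", 2.0)
print(predict(model, "cat"))        # feedback has shifted the pattern
```

A real system replaces the counts with billions of neural-network weights, but the feedback step shifting which answer comes out is the same idea.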
Imitating Dostoevsky
What the machines excel at is analyzing this training text very quickly, which is their only superpower. They can process millions of pages of text in a few days, looking for patterns, while a human could not do so in a lifetime. If a chatbot like ChatGPT were trained on all of Dostoevsky's books, it would be able to extract a number of patterns regarding word choice, phrase structure, and tone of voice. This would allow it to generate replies in Dostoevsky's style and manner. It would also know the characters, their flaws and gifts, and their mannerisms, so it could respond in their voice. However, this does not mean it could write an entire novel that most people would mistake for Dostoevsky's own work.
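This kind of style extraction can be illustrated with a toy Markov-chain sketch (my own hypothetical example, with a one-line stand-in snippet instead of the actual novels): record which word follows which, then generate new text that obeys the same word-pair patterns. The output "sounds like" the source without any understanding of it:

```python
import random
from collections import defaultdict

# Stand-in snippet; a real system would ingest the complete works.
STAND_IN_TEXT = (
    "pain and suffering are always inevitable "
    "for a large intelligence and a deep heart"
)

def build_chain(text):
    """Extract word-pair patterns: who follows whom."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def imitate(chain, start, length=8, seed=0):
    """Generate text that only ever uses patterns seen in the source."""
    random.seed(seed)  # fixed seed so the demo is repeatable
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(imitate(build_chain(STAND_IN_TEXT), "a"))
```

Every word pair in the output already existed in the source, which is exactly why imitation of form is easy and creation of substance is not.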
How to crash a Tesla
This is because this type of learning is based on imitation, the first, unconscious level of intelligence that we see in animals and babies. It does not involve actual thinking. In fact, if Tesla's autonomous driving software had been trained only in the UK, it would likely crash immediately upon encountering the different rules and patterns of the road in the US (assuming the engineers did not account for this in the code, just as human brains do not come pre-loaded with this type of information). The amazing thing is that humans do not crash, because they are able to think and adapt to radically new situations, which is a key aspect of true intelligence.
The difference between form and content
Therefore, the real limitation of AI is that learning based on pattern recognition is not the same as thinking. It may look like thinking and sound like thinking, but it is not. This is before even considering the ability to create something new, which cannot be learned by imitation or example.
Nassim Taleb put it this way:
"Verbalism" may be (Taleb is a bit vague on this) another term for "empty words," or form without substance, which is the main flaw of many current AI applications when they are asked about subjects they have no real knowledge of. To be clear, the potential of what AI can do today is vast, and many repetitive tasks can and will be taken over by machines: driving, debugging code, formatting text. In fact, a properly trained AI could potentially write a script for a three-season television series based on The Brothers Karamazov that is better than one produced by any human writer alive. But AI remains merely a tool that can be shaped to fit a variety of fields of knowledge; it is not capable of anything more than that.
The hidden dangers
As for the dangers of AI, one of the biggest concerns is who selects the training sets. For example, imagine if the next Google AI that answers questions on the internet (it will come!) were trained only on the texts of the New York Times, Wall Street Journal, Washington Post, and other left-leaning outlets from the past decade. In this case, the AI would only be able to provide answers that reflect the biases and perspectives of those sources. Similarly, if OpenAI were taken over by a team of undercover political operatives (as Twitter seems to have been), the "machine oracle" capable of answering any question could become a powerful tool for spreading propaganda. The problem has always been, and will always be, the people operating the machines, not the machines themselves. The fear that AI will take everything over from us is as unfounded today as it was 100 years ago. Fear the evil people behind the machines, not the machines :)
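The training-set problem can be made concrete with a deliberately crude toy sketch (entirely my own hypothetical example): a pattern-based "model" asked for a verdict simply returns whatever verdict dominates the corpus it was fed. Change the corpus, change the "truth":

```python
from collections import Counter

# Two hypothetical training sets with opposite dominant opinions.
CORPUS_A = ["policy X is good", "policy X is good", "policy X is bad"]
CORPUS_B = ["policy X is bad", "policy X is bad", "policy X is good"]

def train_opinion(corpus):
    """'Training' here is just counting which verdict appears most often."""
    return Counter(line.split()[-1] for line in corpus)

def answer(model):
    """The 'oracle' echoes the dominant pattern of its training set."""
    return model.most_common(1)[0][0]

print(answer(train_opinion(CORPUS_A)))  # whoever picked corpus A decided this
print(answer(train_opinion(CORPUS_B)))  # and whoever picked corpus B, this
```

The model has no opinion of its own; the people who chose the corpus do.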
PS: my English is average at best, mainly because I rarely write in English. So I used ChatGPT to rewrite my texts in proper English. It sounds a bit dull to my ears, but it took less than 20 seconds, and this is a great example of the real value of AI. In fact, one of the spectacular things AI will do is demolish *any* language barrier.