There is a form of blindness at work here, or even an obsession.

The blindness imposed by new technologies is formidable. It may be hard to escape, but it is important to be aware of it. I would go so far as to speak of brainwashing.

Between Big Data, artificial intelligence, deep learning, and now LLMs, we believe we are living through an epic era, even a revolution. I do not think so. The unbridled communication around artificial intelligence blinds us. It forces us into a predetermined framework and makes us reason from false premises.

Machine learning blinds us.

The framework that has been imposed on the business world is that of machine learning. It postulates that it is enough to write programs capable of working out solutions to a given problem on their own, through complex statistical calculations whose details remain entirely opaque to the user. This framework is relevant in a few cases that I would call pathological or atypical: computer vision, for instance, when millions of annotated examples are available, or the classification of high-dimensional objects.

Machine learning algorithms have existed since the earliest work on artificial intelligence, in the 1960s. The major digital players (GAFAM, BATX, etc.) have been using them for about twenty years to efficiently process the massive amounts of data they store, a trend that took off with Big Data in the early 2010s.

Machine learning blinds us. These algorithms exploit an interesting and useful capability: the ability to identify patterns and regularities automatically and systematically, without anyone having to specify them. The slide follows at once: is human intelligence no longer necessary to solve problems involving massive amounts of data? The business world has rushed enthusiastically down that slope. It is a mistake.
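To make that capability concrete, here is a minimal sketch of pattern identification by a statistical model; the synthetic dataset, the scikit-learn library, and the choice of logistic regression are my own illustrative assumptions, not anything prescribed above. The point is only the mechanism: the decision rule is never written by the programmer, it is estimated from labeled examples.

```python
# Minimal sketch (illustrative assumptions: synthetic data, scikit-learn,
# logistic regression). The classification rule is never spelled out by a
# human; it is inferred statistically from labeled examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: 5,000 examples, 20 numeric features, two classes whose
# separation exists only as statistical structure in the data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a model: the "solution" is a set of estimated coefficients, opaque to
# most users, exactly the kind of automatically derived rule discussed above.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Regularities are extracted automatically, and the user ends up with a fitted object rather than an explicit rule.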

Large language models blind us even more.

Deep learning is part of the machine learning toolkit. This family of algorithms is extremely efficient. Neural networks are an old idea, but recent advances, coupled with efficient implementations that exploit the computing power of GPUs, have produced remarkable performance. Applied to language processing, neural networks have enabled a new technology: large language models (LLMs). This recent technology (2020s) is useful for processing massive text corpora, another category of pathological cases.
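As a concrete illustration of what such a model does, here is a minimal, hedged sketch of text generation; the Hugging Face transformers library and the small gpt2 checkpoint are my own illustrative choices, not something named in the text. The model simply continues a prompt by predicting statistically likely next tokens.

```python
# Minimal sketch (illustrative assumptions: the transformers library and the
# small "gpt2" checkpoint; any causal language model would behave similarly).
from transformers import pipeline

# Load a pretrained model behind a simple text-generation interface.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt token by token according to learned statistics,
# with no built-in notion of whether the continuation is true or understood.
result = generator("Artificial intelligence will", max_new_tokens=30)
print(result[0]["generated_text"])
```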

OpenAI developed a conversational agent that cleverly leverages the power of LLMs and made it available to the public in 2022. ChatGPT stunned the business world. The "wow" effect of LLMs blinds us a second time. The genie of generative artificial intelligence (genAI) is out of the bottle.

The sirens' song.

Language models manipulate text with ease and efficiency. They give the impression of understanding what they generate. The shift in meaning that machine learning began now intensifies: LLMs suggest that human intelligence is no longer necessary to perform an ever-growing number of traditional intellectual tasks. This is another mistake.

One might quickly conclude that there is a paradigm shift, even a scientific revolution. I do not believe so.

"Any sufficiently advanced technology is indistinguishable from magic." Arthur C. Clarke (Science, 1968)

It seems to take a strong critical mind not to succumb to the sirens' song and follow them blindly.

The backlash will be cruel.


Thomas