ChatGPT: how to think after the hapax
Artificial intelligence (AI) in 2025 has been designed for a world where energy is abundant and inexpensive. In a context of resource scarcity and the necessary reduction of energy consumption, such technologies are no longer sustainable. Major scientific advances are required. It is time to explore new pathways in mathematics, algorithms, and cognitive science.
A hapax is a word that occurs only once in a given corpus. By analogy, it can also refer to an event that happens only once in an individual’s lifetime.
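In code, the notion is easy to demonstrate. A minimal Python sketch, where the toy corpus and the naive whitespace tokenisation are illustrative assumptions:

    from collections import Counter

    def hapaxes(corpus: str) -> list[str]:
        # Naive whitespace tokenisation; a real corpus would need proper tokenisation.
        counts = Counter(corpus.lower().split())
        return [word for word, n in counts.items() if n == 1]

    # "and" and "ephemeral" each occur exactly once: they are the hapaxes.
    print(hapaxes("the data feeds the model and the model feeds the data ephemeral"))
    # -> ['and', 'ephemeral']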
The disruption caused by ChatGPT and OpenAI has been extraordinary. This technological hapax has profoundly shaken digital innovation and the use of AI. Funding has poured in as rarely before to more or less overt clones of OpenAI. Now that this period of intense excitement has passed, we must think about what comes next.
Envisioning the Future: Reasonable and Unreasonable Expectations
It is reasonable to anticipate an intensification in the use of content-generation tools (text and image) across the economy, industry, and public or private organisations. The popularity of the generative tools offered by OpenAI, Mistral, and their competitors will not diminish. There will be no turning back.
It is not reasonable to expect a significant improvement in the performance of generative tools. These tools rely on large language models (LLMs) trained on massive text corpora. LLMs use deep learning, a well-established yet complex machine learning technology. The performance of such systems is almost linearly tied to the available computing power and the size of the training data.
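This dependence can be made concrete with published empirical scaling laws. A minimal sketch in Python, assuming the Chinchilla-style form L(N, D) = E + A/N^alpha + B/D^beta of Hoffmann et al. (2022); the constants are their reported fits, quoted here for illustration only:

    def chinchilla_loss(n_params: float, n_tokens: float) -> float:
        # Fitted constants reported by Hoffmann et al. (2022), for illustration.
        E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
        return E + A / n_params**alpha + B / n_tokens**beta

    # Doubling both parameters and training tokens (roughly 4x the compute)
    # shaves only a couple of percent off the loss: diminishing returns.
    print(chinchilla_loss(70e9, 1.4e12))   # ~1.94
    print(chinchilla_loss(140e9, 2.8e12))  # ~1.89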
It is also unreasonable to expect significant advancements in computing power or a proliferation of available data. Moore’s Law might be hitting a wall, with experts in microprocessor development questioning its short-term validity. No groundbreaking technological advances in microprocessors are expected.
It is unreasonable to anticipate the sudden arrival of quantum computing. Although promising in theory, quantum computers remain largely experimental, with real-world applications still limited, particularly outside cryptography. It is worth remembering that just because a technology is poorly understood does not necessarily mean it will be useful or disruptive.
Given these constraints, it is reasonable to expect an increase in the energy consumption of generative systems. Computing power remains the primary lever to ensure performance improvements, echoing the bitter lesson of AI progress over the past 70 years. As a result, the consumption of raw materials for processors and other electronic devices will continue.
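A back-of-envelope estimate shows why compute is also an energy lever. The sketch below is hypothetical arithmetic, not a measurement: the ~6*N*D FLOPs rule of thumb for training is standard, but the sustained throughput and power figures are assumptions:

    # All figures are illustrative assumptions, not measurements.
    def training_energy_kwh(n_params: float, n_tokens: float,
                            flops_per_sec: float = 300e12,  # assumed sustained rate per GPU
                            watts_per_gpu: float = 700.0) -> float:
        flops = 6 * n_params * n_tokens             # standard ~6*N*D training estimate
        gpu_seconds = flops / flops_per_sec
        return gpu_seconds * watts_per_gpu / 3.6e6  # joules -> kWh

    # A 70B-parameter model on 1.4T tokens: hundreds of MWh for the GPUs alone,
    # before cooling and data-centre overheads.
    print(f"{training_energy_kwh(70e9, 1.4e12):,.0f} kWh")  # ~381,000 kWh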
Furthermore, it is reasonable to predict that companies and organisations will increase their spending on generative systems. Rising computational demands will drive up costs until economic viability limits are reached, pushing the refinement and adaptation of generative systems towards niche applications with higher returns on investment.
Ten Years of Deep Learning
The bubble began around 2016 with AlphaGo's victory over Lee Sedol. Interested French readers should take the time to read this history of AI. The only recent major technological breakthrough concerns the ability of neural networks to make efficient use of massive amounts of data by exploiting the computing power at their disposal. This has driven the gradual improvement in their performance. Image processing, the identification of patterns in large volumes of data, and content generation are among the most conspicuous uses of this technology.
Some argue that mechanisms like attention or certain types of neural networks have been revolutionary. While technically impressive, these advancements are engineering innovations rather than fundamental theoretical breakthroughs. The core methodology remains unchanged: accumulate data, feed it into neural networks, and tweak their architecture slightly. The foundational theoretical building blocks have existed since the early 2000s, if not earlier.
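To make the point concrete: the attention mechanism at the heart of these models is a few lines of linear algebra, not a new theory. A minimal NumPy sketch of scaled dot-product attention (Vaswani et al., 2017), with random matrices standing in for learned projections:

    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (4, 8): one mixed value vector per query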
These damned neural networks remain a mystery
The AI of 2025 is designed for a world where energy is abundant and cheap. Every process involved in artificial intelligence is now almost linearly dependent on energy and matter. This is historically counterintuitive.
Given current funding levels and the unchallenged domination of the connectionist approach, it is unreasonable to expect decisive progress in machine learning theory and the conceptualisation of knowledge. This is a classic situation in science. Sometimes theories cannibalise all the activity in a field, like string theory in fundamental physics. This fad effect stifles creativity; it's harmful. It is even at work today in artificial intelligence: deep learning, large language models and the associated artificial intelligence systems are cannibalising thinking, expertise and budgets.
Toward 2040: A New Paradigm
Ecological imperatives are clear. Future AI systems must meet two conditions: maintain or improve their performance while reducing energy and raw material consumption. Current technology, including neural networks, is insufficient for this challenge.
Understanding learning mechanisms, even at a basic level, is urgent. The black boxes must be opened.
Major theoretical advances are needed, potentially drawing from breakthroughs in knowledge representation, cognitive science, and mathematics. Progress depends on exploring alternative theoretical pathways beyond the dominant connectionist paradigm. Building for tomorrow means exploring theoretical byways.
Breaking the black box of learning will have staggering consequences
This is not a business plan. We're not the kind of people who can convince investors with a few images, tables of numbers and an optimism that nothing can shake. We are technicians and our fuel is doubt, criticism and constant questioning. Those looking for simple answers or oracles will not be happy in our company.
One point is clear, however: any learning technology that does not operate as a black box and that can be scaled up must be very seriously studied, investigated, and therefore financed. The returns on investment are bound to be substantial.
Text translated from French, then proofread by the author.