AIM event: Marseille, first in generative systems?

I knew I wasn't going to like this thing. But the poster was nice, and as Jacques Attali and Thierry Breton were coming, I was going to make an effort. I'm not sure we'll make many friends.

« À jamais les premiers ! » ("Forever first!") Olympique de Marseille (1993)

Pun intended

I attended the Artificial Intelligence Marseille (AIM) event, which took place on 24 November 2023 in Marseille, France. We are French, actually - I am quite sure you spotted the lovely accent. La Tribune organised this raoult, without Didier (French joke). La Provence gives you a summary.

There's a double pun here, with IAM (the Marseille rap group) and the verb to aim. I had spotted the former but hadn't caught the latter. I liked the font they used on the website, which is one of the reasons I decided to go, along with the presence of Jacques Attali and Thierry Breton - they are famous guys over here.

AIM was actually offering three shows for the price of one. Which one you saw was a matter of personal taste.

1. A show dedicated to the great days of football and artificial intelligence.

2. A show at the Velodrome, the temple of French football, where the vast field of artificial intelligence was violently summed up in a catch-all term: generative AI.

3. A show where a motley collection of speakers all pretended to have mastered all the subjects related to a very recent technology, poorly understood and with as yet unknown consequences.

Is it all the same?
Not at all.

Am I overdoing the football?

Not at all. Compared with the introductory speech, I'm being restrained. The President of the Region (Renaud Muselier) leaned on the football-and-AI metaphor. Did he have anything else to say? Probably not. Anyway, why not. It's Marseille, baby. At least they spared us Jul and his organised gang, which is quite an achievement.

I feel a bit sorry for the speakers

And there were a lot of them, from large, often multinational, companies. A few start-ups were lost in the shuffle, including one that dared to talk about the symbolic approach to AI and Prolog - well done, David Cortés. Seven of us got the local reference (Prolog was born in Marseille).

The list of speakers is on the website. The watchword was very clear: everyone had mastered generative artificial intelligence, from concepts to uses. Summoned to sing the praises of this technology and the eternal glory of their employers, what other choice did they have? Probably none. AI is first and foremost a communications issue for companies. To prove they weren't born yesterday, they often reminded us that AI didn't start in November 2022 with ChatGPT. AI has a much longer history, dating back to the arrival of deep learning 10 years ago. It's well known. Who still doesn't know?

You missed the point

The modern history of AI begins in 1956.

While deep learning did indeed emerge from the secrecy of specialised laboratories around 2015 with AlphaGo, it is quite wrong to sum up AI in terms of this type of machine learning algorithm.

Two references for the curious reader: AlexNet (2012) and the deep learning review by LeCun, Bengio and Hinton in Nature (2015). The general public discovered the term at the end of 2015, with the victory of DeepMind's AlphaGo programme - a London gem that Google had recently acquired.

I'm well aware that these considerations are purely incidental. They're only of interest to nerds or a few naysayers.

The new Golden Calf

Because the revolution, my sweet friend, is obviously generative AI. It will change our relationship with technology. To education. To work. To knowledge. To everything. The silver bullet, at last! It's here. We love it already.

Admittedly, this generative AI is also a little unsettling. It seems to be attacking the intellectual professions. And that's precisely what a large part of the audience does for a living - over 200 people, the majority of whom don't exactly look like developers or data scientists. Of course, clothes don't make the man, so let's not generalise.

So, not even afraid of generative AI?

On the face of it, no. Instead, we should be exploiting its advantages to 'do more' and 'do better'.

But deep down, of course we are.
We're all scared.

Especially as no one understands a single word about these content-generating AI systems and LLMs. And that's normal, because you don't have all day to keep up with innovative (or not), revolutionary (rarely) and complex (always) new technologies. I simply refer you to the Guide de Survie en Milieu Hostile that we recently wrote - only in French, sorry about that, guys.

Skippy

Éric Sadin has spoken. I didn't know him. Ever since I was subjected to Laurent Alexandre's book, I rarely listen to media chatterboxes, and I read them even less. I hear he's well regarded. Strange. Beneath his prophetic airs and alarmist rhetoric, this Cassandra with his very personal look made me smile. Let's leave him to his very personal dark world.

What's it for?

A classic use case for these famous generative AIs is to query internal databases. At first I was amused, then intrigued. I'll paraphrase what an executive at La Poste had to say: "our internal knowledge bases are really massive and we use these systems internally to give everyone access to the information". AltGR has received this kind of request several times, so I can confirm that the subject is in the air.

The idea is to use this kind of automatic system to make information retrieval easier. It amounts to creating an electronic document management system (EDMS). The idea is appealing as long as you only scratch the surface; as soon as you dig a little deeper, the enthusiasm fades.

Current generative systems work by composing responses from the analysis of a pre-existing knowledge base. Composing responses actually amounts to stringing words together, without any reflection or understanding of concepts, on the assumption that a probability calculation will suffice to create meaning. Presented that way, the idea seems questionable. The nature of these systems is to cleverly invent outputs, without worrying about staying perfectly in line with the training data. These inventions are called hallucinations. Who seriously wants solid answers on serious topics only 80% of the time? And that's an optimistic estimate.
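For readers who want to see the shape of the thing, here is a minimal sketch of the pattern these "query your internal knowledge base" systems typically follow, often called retrieval-augmented generation. Everything in it is illustrative: the documents, the crude word-overlap scoring and the stubbed-out generation step are made-up placeholders, not what La Poste or anyone else actually runs.

```python
# Illustrative sketch of "query your internal knowledge base with a generative model"
# (retrieval-augmented generation). All names and data below are made up.

from collections import Counter

# A toy internal knowledge base: in reality, thousands of documents.
KNOWLEDGE_BASE = {
    "hr-001": "Employees may carry over five days of unused leave to the next year.",
    "it-014": "Password resets are handled by the internal portal, not by email.",
    "fin-202": "Travel expenses must be submitted within thirty days of the trip.",
}

def score(query: str, document: str) -> int:
    """Crude relevance score: count the words shared by the query and the document."""
    query_words = Counter(query.lower().split())
    doc_words = Counter(document.lower().split())
    return sum((query_words & doc_words).values())

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k most 'relevant' documents for the query."""
    ranked = sorted(KNOWLEDGE_BASE.values(), key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Assemble the prompt that would be handed to a generative model.

    The model then strings together a statistically plausible answer from this
    context - which is exactly where hallucinations creep in when the retrieved
    passages don't actually contain the answer.
    """
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the passages below.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

if __name__ == "__main__":
    print(build_prompt("How many leave days can I carry over?"))
    # The final step, calling an actual LLM with this prompt, is deliberately
    # left out: that is the part nobody in the room claimed to fully understand.
```

The retrieval part is boring, well-understood engineering; all the uncertainty discussed above lives in the last, omitted step.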

To be honest, some recent Turing Award winners, including Geoffrey Hinton, believe that these systems truly understand what they are writing. Yann LeCun (Meta) is less definitive but still confident (xweet):

« LLMs obviously have some understanding of what they read and generate ». Yann LeCun

Delusion or wisdom?

I haven't devoted my life to studying deep learning algorithms, nor have I spent years on Google's or Meta's payroll. Some of the subtleties of intelligence modelling escape me.

So, delusion or wisdom?
Time will tell.

Be that as it may, if we have to get humans to proofread programme output to detect hallucinations, and if the horizon for young graduates with 5 years' higher education is to confine themselves to these exciting tasks, I wish employers all the best.

And the others?

Jacques Attali was good, a bit sombre - tired? I liked his speech. He preferred to talk about artificial predictions instead of artificial intelligence. That reduces the scope, but it frames the debate. Thierry Breton was in good form, driven by the public interest. Watch out, GAFAM and other lobbies!

Artificial intelligence is the future - and will remain so for another 25 years, our European Commissioner even wagered. He's not wrong: AI is by definition about the future, since it sits on the frontier between what we know how to do (there's an algorithm whose workings are clear) and what we don't really know how to do yet (everything else).

In any case, it was well organised. There were little croissants, which was nice.

Frankly, it's very disturbing to hear all this talk. I'm not making fun of people, who are doing what they can with these difficult tools and technologies, but I do like to make fun of organisations and social games. We're right in the middle of it. Who has the best AI? Who has the biggest database? The playground is never far away.

Somehow it leaves me dumbfounded.


Thomas