Writing in a World Flooded by Generation
Programs, blog posts, essays: I write texts. Programs write texts too. They're faster than me, a mere human, but they don't understand what they generate. It's a mix of awe and concern: while they can't reason, they've already polluted the web. Will the same happen with programming?
Content generation programs (GenAI) can produce both traditional writing and code. Their performance is impressive, and denying that these tools can be helpful would be foolish. But they come with hidden costs for users and pose risks to society: AI slop, loss of autonomy, stagnation of skills, and perhaps worse.
Writers and Developers Stand Together
This post is deeply personal.
Writing is both my job and my hobby.
I write data processing programs and design data science algorithms. I also write about these topics, what I enjoy and what frustrates me, somewhere between opinion and technical explanation. Like many professionals who work with words, both sides of my work have been heavily affected by generative AI tools. These programs have stormed into intellectual fields, giving casual observers and decision-makers the illusion that human intelligence can be outsourced. Is it any coincidence that communications and marketing teams now talk about “AI programs” or “AI agents”?
At a certain point in life, you realize that denying reality is usually a bad idea. Whether they write for humans or machines, writers today must reckon with automated generation - and maybe even find a way to work with it.
A Note on Semantics
I’ll use the word texts to refer to traditional writing (articles, fiction, general literature) and source code for programming.
The intended audience for texts has become unclear. Originally written for people, they are now read and used by search engines, then scraped and compiled into databases that are often used to train machine learning models. A shift has taken place. We have to ask ourselves: who are we writing for and why?
AI slop
Various studies suggest that most of the text found online today is no longer written by humans. It’s now generated by software (here and there).
The content might be original, if that word even makes sense in the context of AI generation (spoiler: not really), or it might be machine-translated into other languages. People call it AI slop. Some even go so far as to speak of a Zombie Internet (404), describing the flood of auto-generated text and images overwhelming the web.
So yes, we’re letting these programs rot the internet. Correction: we’re letting people and organizations rot the internet with these programs. What a waste, turning our information ecosystem into digital soup.
Gustave Flaubert Doesn’t Like ChatGPT
Websites bore me. Reports from reputable institutions are becoming unreadable, drowned in the empty phrasing typical of generated content. Even personal blogs are starting to lose their edge. Generated texts are consistently hollow, dull, and flavorless. There's nothing to chew on. I'll borrow the words of the brilliant author of Madame Bovary:
Charles’s conversation was as flat as a sidewalk, and everyone’s ideas paraded through it in their everyday clothes, without sparking emotion, laughter, or dreams. Gustave Flaubert
When the style is bland, the substance is often wrong too. Lately, I’ve caught myself becoming suspicious whenever I read something on a screen. Teachers face the very real challenges of dealing with these tools: are they grading the student, the program, or a bland mash-up of both? Paranoia is in the air.
This Is Not a Bug, This Is a Feature
Are generated texts accurate? Can we distinguish relevant information from half-true or outright false hallucinations? Is it possible to treat an AI agent like an oracle? Can we trust what these systems tell us?
No.
Right now, it’s impossible to tell hallucinations apart from valid information. LLMs hallucinate all the time, and sometimes they just happen to produce something true. It’s a harsh reality, but that’s how these models work. Some people are now trying to constrain these systems more tightly to make their outputs more consistent and limit falsehoods. I wish them lots of courage, patience and drugs.
Donald Knuth reached a similar conclusion after experimenting with ChatGPT:
It’s astonishing how the confident tone [of ChatGPT] lends credibility to all that nonsense. For someone unfamiliar with the topic, it’s nearly impossible to tell that the supposedly factual information isn’t actually relevant or well-researched. Donald Knuth (2022), Turing Award 1974
Still Worth Writing?
My grandmother used to say, "Try too hard to fit the mold, and you end up a tart". She also believed you shouldn’t fight battles that are lost before they begin. Now I find myself saying the same kind of things to my cat, while my daughter is too busy building forts for her tadpoles.
Resisting the flood of generated content means stepping into spaces that programs haven’t (yet) overrun. Yes, there are still places of relative quiet: reasoning, deep thought, precision. Anything rooted in direct experience of the world. Life, in other words.
Programs don’t have a point of view. Programs don’t come home at night with stories to tell their families or friends. Programs don’t hold opinions. Programs have no subjectivity.
Programs are just f*cking objects.
We, as human writers, can have a perspective. We can form opinions. We can offer a human lens in the face of the AI slop. Let’s step away from the crowd of lukewarm consensus chasers and tame commentators. Let’s treasure original ideas.
That’s one of the reasons I keep this site alive: to tell stories, bits and pieces of things. The other reasons probably fall somewhere between egotism, DIY therapy, and casual self-analysis.
(fun fact: in 2025, conversational agents are regularly used as virtual therapists.)
An Old-School vs. New-School Debate?
Source code is a form of text like any other—with its own vocabulary, grammar, and style.
Programs must be written for people to read, and only incidentally for machines to execute. Abelson & Sussman (1985)
The usefulness of code generation divides developers. Some worship it. Some tolerate it. Others resist or ignore it. It’s far from clear-cut.
Personally, I feel that generated code, even when it works, is often low-quality or disposable... and usually thrown away. I’m still waiting for serious, systematic studies on the actual benefits of AI code generation, especially regarding long-term codebase quality and maintainability. Private conversations with peers suggest that the real-world gains fall well short of the hype.
Code quality matters, a fact most non-developers overlook. Here’s what another heavyweight has to say:
Real quality means making sure that people are proud of the code they write, that they're involved and taking it personally. Linus Torvalds (2008)
Post-Coital Blues
Let me explain my view on code generation more clearly.
Like everyone else, I was impressed when ChatGPT and other LLMs first appeared. They are, without a doubt, remarkable technical feats. I’ve talked about them here and in previous posts. But once the novelty wore off, I quickly grew tired of their consistent errors. Let’s remember: generative models build phrases that sound right without understanding them. And in my line of work, meaning matters. Not understanding what you write is a deal-breaker.
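To make that point concrete, here is a deliberately naive sketch (a toy bigram model of my own, nothing like a real LLM and not from any actual system): it strings words together purely from observed word-to-word frequencies, so its output can mimic the training text while the program understands nothing of what it says.

```python
import random

# Toy training text: the model only ever sees word pairs, never meaning.
corpus = (
    "the model writes code the model writes prose "
    "the human reads code the human writes prose"
).split()

# Count word -> next-word transitions (a bigram table).
transitions = {}
for current, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a plausible-looking word sequence from the bigram table."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: no observed continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

Every word the sketch emits is statistically licensed by the corpus, yet nothing in the program knows what "model" or "human" refers to. Scaled up by many orders of magnitude, that is still the core mechanism: plausibility, not understanding.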
The widespread and systematic use of generative tools across all intellectual professions has pushed me toward a form of resistance, partly for ethical reasons, partly for personal ones. I simply can’t accept the idea that my work could be reduced to the random generation of symbols.
An Insult and a Mistake
Reducing the work of a developer to the random generation of symbols is both an insult and a mistake. Claiming that a program can be written without thinking is basically treating developers like idiots. And it’s a mistake to believe you can assemble a functional program by stringing together tokens probabilistically, without an overarching plan or any real understanding. The source code that results from that process just won’t be good enough.
The techbros and hype-chasers will probably say I sound like a grumpy old man. So be it.
I don’t feel in competition with these programs. I’ve spent the last twenty years in the tough field of data science, and I seriously doubt a model could replace me effectively. I watch with curiosity as others dive into these tools, often falling for whatever trend is flashing brightest (vibecoding, hybrid systems).
But I sincerely feel for the young developers entering the field today, having to face unfair competition. Let me reassure them: writing code is only part of what it means to build a program. You also have to think about design and task structure. Fred Brooks touched on this challenge in his famous essay No Silver Bullet, and I still stand by his almost 40-year-old words:
I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems. If this is true, building software will always be hard. There is inherently no silver bullet. Frederick Brooks (1986)
Programs Are Just Programs
Generative tools can help developers. These agents suggest code snippets and autocomplete functions; they’re basically glorified search engines or fancy frontends to Stack Overflow. Some people use them as an upgraded rubber duck. Is it a revolution? Not really.
Donald Knuth, writing about ChatGPT, put it like this:
I find it fascinating that novelists galore have written for decades about scenarios that might occur after a ‘singularity’ in which superintelligent machines exist. But as far as I know, not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say), and in which millions of real people would be able to interact with them freely at essentially no cost. I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same. Donald Knuth (2022)
While sci-fi authors have long imagined the world after the singularity, none of them foresaw the phase we’re in now, when extremely unintelligent machines interact with us on a massive scale.
By now, you’ve probably guessed where I stand: in the camp of reproducibility, understanding, and reliability. Text generation, as it currently works, produces plausible output fairly well, but that’s not intelligence. It’s no longer a topic that interests me deeply. I believe we should use generative tools where they truly help and avoid them where they don’t. Everyone must draw their own line. Most of the experienced developers I talk to seem to feel the same.
Generate what you like. Use whatever tools work for you. Make your own choices. But remember: if your job boils down to querying ChatGPT (or any other agent), you may find yourself replaceable in the next model update. That risk seems very real to me.
Producing is good.
Learning is better.
On The Importance of Choosing Good Titles
This punchline, spotted on Reddit, serves as a good segue into a form of conclusion:
We’ve outsourced our critical thinking to graphics cards running LLMs in the cloud. Is anybody else absolutely SICK of seeing AI slop everywhere?
The current spring of artificial intelligence is not just a passing phase. Winter will come, in one form or another, but the tools will remain - and they will be used. There’s no hope for any real unlearning of these new technologies.
The tech industry has invented powerful new tools, and they are now being democratized. Large-scale information-processing systems are effective, and generation systems, while imperfect, perform well. They will certainly have a major societal impact. Our societies must come to grips with automatic content generation and, more broadly, find some smart use for these programs.
Mass-generation for the many?
Original content for the privileged?
Sounds nice.
We are still not machines. Our texts are thought out and written without any content generation programs. All financial support is welcome.