From MeasureCamp Paris, June 14, 2025
So I walked into a packed room for my second session on AI. Apparently, “reason #3 will surprise you!” worked as a subtle clickbait hook on my session card. No pressure!
For once, I didn't build a slide deck, so I'm sharing a write-up of the session below.
For what it’s worth, this article was written entirely by yours truly without any AI assistance.
Idiocracy Was A Documentary

For some general context, we're seeing a widespread slowdown in IQ growth (IQ scores are normed so that the population average sits at 100). So it's not so much that we're becoming dumber as a species; rather, humanity no longer faces predators or immediate dangers, except for the problems it creates for itself (overpopulation, wars, environmental damage, etc.).
Speaking of overpopulation, for those of you who watched Idiocracy, we observe that birth rates are higher among couples with lower IQs compared to educated and intelligent couples. Where intelligence was once a desirable trait in a partner, we’re realizing that physical attributes are taking over as seduction criteria.
Public perception of intelligence is now skewed, starting in school playgrounds, where the word “nerd” is thrown around as an insult. It’s sad but, unfortunately, it’s also a sign of these troubled times.
Humanity no longer “needs” to develop its IQ; it now relies on cognitive crutches such as social media and, more recently, “artificial intelligence.”
Is it AI or not?
As mentioned in my opening keynote on artificial intelligence at Superweek 2024, and to put things in context, we can define “artificial intelligence” as the automation of tasks that would otherwise require human intelligence: a digital version of automotive production lines, or <insert your industrial automation here>.

We all talk about AI, but in the vast majority of cases we're dealing with machine learning or deep learning, in which an algorithmic model is fed and trained on varied and diverse data.
Through massive proximity calculations between elements (k-nearest-neighbors-style similarity), we are able to “predict” which element will come next. By repeating the process, we end up reproducing text, sound, or images. This is what we're dealing with when we talk about generative AI, or gen-AI.
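To make the “predict which element comes next” idea concrete, here is a deliberately naive sketch: a frequency-based next-word predictor trained on a ten-word corpus. This is an illustrative assumption, not how real generative models work (they use learned neural networks, not raw lookup tables), but it shows the core mechanism of predicting a continuation from patterns in training data.

```python
from collections import Counter

# Tiny "training corpus" (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
following = {}
for current, nxt in zip(corpus, corpus[1:]):
    following.setdefault(current, Counter())[nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # → 'cat' (seen twice, vs 'mat' and 'fish' once each)
print(predict_next("fish"))  # → None ('fish' never has a successor in the corpus)
```

The point of the caricature: the model can only ever emit continuations it has already seen, which is exactly the limitation the next paragraphs discuss.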
The use of generative AI therefore relies on a massive amount of data to guesstimate a version of what a prompt is asking for. This means that, besides the problems related to the ownership, exploitation, and cleanliness of the data used for model training, the relevance of the results is conditioned by the relevance, completeness, and recency of the training data. It also means that platforms like ChatGPT will eventually have to stop siphoning content from other sites or from other AI platforms.
In this brave new world where most content is just “evergreen” recycling or AI-assisted generation, new content created by ChatGPT and its ilk can read like well-formatted gibberish: incoherent, erroneous, or obsolete.
Stagnation or innovation?
This is the moment in my discussion where I draw a parallel with Cixin Liu's novel The Three-Body Problem, in which extraterrestrials, in order to conquer Earth, block the scientific and technological development of earthlings, who find themselves devising a defense strategy against an interplanetary race while stuck at a 2020 level of technology and research.
When we bring this back to an AI that can only learn from an existing corpus of text, images, or sound, it only reinforces my previous point about data relevance and freshness: innovation cannot emerge from past data alone.
This raises real questions about how humans will evolve as a species if our innovation and creativity are outsourced to an algorithm through intellectual laziness disguised as convenience.
Kittens on skis, with hot chocolate
We get to the part of the debate that deals with generating photorealistic images, using “kittens on skis, with hot chocolate” (a gross and deliberate exaggeration) as an example.

We’ve seen similar images everywhere: on television, on social media, and in pretty much all media.
More recently, political AI-generated material has cropped up on Facebook, Twitter, and other outlets. We have lost count of the number of conspiracy and disinformation posts that feature political leaders in compromising situations, in obviously photoshopped or generated photos.
This problem is crucial: the phenomenon contributes to eroding our democracies by flooding mainstream media and secondary information channels (such as social media) with massive amounts of disinformation.
We are also now receiving robocalls from AI chatbots selling us something while pretending to be human.
Generative AI creates a distortion of reality and influences the perception of people who don’t know how to tell the difference between a real image and a generated image.
Can One Even Face This Tsunami?
The use of this image generation technique discredits historically trusted information sources and references: mainstream media, traditional information channels, as well as governmental entities.

Thanks to Brandolini's law (aka the bullshit asymmetry principle), we know that the energy and effort needed to debunk a falsehood is an order of magnitude greater than the energy needed to produce it. And thanks to AI, fake-news creation can now be industrialized.
Clearly, we've moved beyond the point where individual responsibility is enough. That's why we need more regulation of how AI is used, even though in Europe we've already taken the first steps with the AI Act.
We also need an education policy that raises awareness from an early age, teaching children to moderate their access to social media despite current habits and to distinguish an artificial image from a real one. On this last point, our older fellow citizens should also benefit from this kind of training.
Finally, we need to teach everyone to know how to verify information by searching for contradictory or confirmatory sources.
AI Is Not Sustainable
This MeasureCamp session was very interactive, and questions soon turned to the sustainability of AI practice. If we're concerned about carbon footprint, we can start by looking at the CO2 emissions of cloud services. During the “Ghibli style” and “starter pack” trends, estimates began circulating of the impact (in terms of water consumption) of a single AI prompt used just to create a blister-packed figurine illustration.
Generative AI can be useful and fun, but it clearly takes a toll on our resources.
Not the Time to Get Depressed
Let’s not kid ourselves: the conclusions of this debate were quite negative about our prospects as a species, especially because of:
- overpopulation,
- declining IQ,
- innovation stagnation,
- resource overuse,
- digital divide,
- environmental impact.
Just like any other tool, AI must be used sparingly and with clear judgment. That calls for dedicated support and education on the ethical, legal, and sustainability dimensions of AI.
Obviously, a 30-minute debate was not going to save the world, but I like to think that participants felt invested with a mission of evangelization on the impact of AI.
The discussions continued over drinks but need not stop there. If you’re reading this, I urge you to take an interest in the subject and get involved in education and evangelization efforts.
See you soon and see you next year for MeasureCamp, in Paris or elsewhere! Let me know your comments here!