Hi everyone,
These days there is no escaping generative AI and talk of its transformative power. Techno-optimists believe generative AI has the potential to save the world. With a lot of magical thinking, they bet on AI to solve the world’s most pressing problems, particularly global warming. Techno-pessimists, on the other hand, believe it will destroy our jobs and livelihoods… or even the social fabric of our human societies. Utopia vs dystopia.
But in fact doomsayers and evangelists have a lot in common. Both use preachy, prophetic tones to make grand predictions about the future. Both are arrogant. Both feel a sense of urgency, express intense (often manic) determination, maintain unwavering eye contact with their audience and want to command total attention. Both agree on the immense, exponential, transformative power of generative AI. Both focus on the future rather than the present.
When it comes to the future of work, it’s not yet clear who will win the battle between doomsayers and evangelists. According to my friend Samuel Durand, the author of a docuseries titled AI at Work: Who Runs the Office? that’s just been released (you can watch it online), we should speak about it in the present tense and focus on present questions like the distribution of possible productivity gains, satisfaction and fun at work, ethics…
Generative AI has already impacted the service market (code development, copywriting and translation, for example). It’s changing the composition of tasks in many jobs. It is benefiting less-qualified people, who are thus empowered to do more. It could also be eliminating entry-level tasks and the lower rungs of a profession’s ladder. In short, a lot has already happened, but generative AI is neither inherently a force for good nor a force for harm: it’s a little of both, and it’s messy.
To be honest I am wary of prophets, whether on the techno-utopian or techno-dystopian side, and I’m not sure the impact on the future of work will be as powerful as they say it will be. What if it was underwhelming? Not negligible but rather disappointing? So far no explosion of productivity has been observed. To paraphrase Robert Solow’s productivity paradox, “you can see generative AI everywhere but in the productivity statistics”.
Do you remember the Web3 prophets and crypto gurus? What happened to their grand predictions? And what about those future of work experts who prophesied that in the future we wouldn’t need drivers anymore and millions of truck drivers would be out of their jobs? A decade later the world suffered a severe shortage of human drivers that slowed down the world’s supply chains.
What if the use of generative AI became a lot more expensive and less accessible in the future? Let’s all calm down and keep a cool head. There’s a chance that all of this isn’t as exponential as it seems. Here are four limits (or walls) that could seriously slow down the evolution of generative AI. 💡👇
#1 Generative AI will run out of energy (and water)
Generative AI doesn’t just require (Nvidia) GPUs (Graphics Processing Units). It also needs a lot of electricity and water. If the growth of generative AI queries followed an exponential curve, at some point we simply wouldn’t have enough energy to power them or enough water to cool down the new data centres. Even with gains in energy and water efficiency, the resources required to power this continued growth may become scarce. The price of using generative AI will rise.
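To make the exponential argument concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (current demand, doubling time, available supply, growth rate) is an illustrative assumption I made up, not a real estimate; the point is simply that a doubling curve eventually overtakes any supply that grows slowly.

```python
# Back-of-envelope: exponentially growing demand vs. slowly growing supply.
# Every number below is an illustrative assumption, not a real-world figure.

initial_demand_twh = 10.0    # assumed annual energy use of generative AI (TWh)
doubling_time_years = 2.0    # assumed: demand doubles every two years
supply_twh = 100.0           # assumed energy budget for data centres (TWh)
supply_growth = 0.03         # assumed: supply grows 3% per year

year, demand, supply = 0, initial_demand_twh, supply_twh
while demand < supply:
    year += 1
    demand *= 2 ** (1 / doubling_time_years)  # exponential demand curve
    supply *= 1 + supply_growth               # slow, steady supply growth

print(f"Under these assumptions, demand overtakes supply in year {year}.")
```

Change the assumptions however you like: as long as demand keeps doubling and supply doesn’t, the two curves always cross.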
Today, estimates of AI’s energy usage vary, but they are considered incomplete and unreliable because machine learning models and their workloads vary so much. Companies like Meta, Microsoft, and OpenAI, which could provide more accurate data, have not shared it. They have become dangerously secretive about their training processes, making it increasingly difficult to obtain up-to-date estimates of energy usage.
But we do know that training AI models, such as GPT-3, is extremely energy-intensive, consuming significantly more electricity than traditional data centre activities. It’s all the more challenging to gauge the energy consumption of current systems because AI models continue to grow in size.
It’s common knowledge that machine learning consumes a lot of energy. All those AI models powering email summaries, regicidal chatbots, and videos of Homer Simpson singing nu-metal are racking up a hefty server bill measured in megawatts per hour. But no one, it seems — not even the companies behind the tech — can say exactly what the cost is. (The Verge)
If energy use isn’t an issue in the near future, water use is likely to be. It takes millions of litres of water to cool the equipment at data centres. In regions where water availability is already limited, I don’t see how that wouldn’t be a limit to endless growth.
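For orders of magnitude, researchers at UC Riverside have estimated that a GPT-3-class model may consume roughly 500 ml of water for every 10 to 50 queries, depending on where and when the model is served. The sketch below applies that range to an assumed, entirely hypothetical volume of one billion queries a day.

```python
# Rough water arithmetic, using a widely reported research estimate that a
# GPT-3-class model may consume ~500 ml of water per 10-50 queries.
# The daily query volume is a made-up assumption.

queries_per_day = 1e9                  # assumed global daily query volume
for queries_per_bottle in (50, 10):    # optimistic and pessimistic ends
    ml_per_query = 500 / queries_per_bottle
    litres_per_day = queries_per_day * ml_per_query / 1000
    print(f"{ml_per_query:.0f} ml/query -> {litres_per_day / 1e6:.0f} million litres/day")
```

Ten to fifty million litres a day, every day, is exactly the kind of figure that collides with water-stressed regions.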
#2 Generative AI will run out of quality data
Run out of data? Really? Well, to train large language models, AI firms used high-quality data: Wikipedia pages, novels, essays, research studies and the like, the work of human artists, experts and scientists. But this type of quality data does not grow exponentially. With time (in fact, we may have already reached that point), generative AIs will have nothing left to feed on but content produced by other generative AIs, which will lead to lower and lower quality of output. If you dilute and distort good content, you end up with shittier and shittier content.
Worse yet, as scientists, experts and artists use generative AI, their own output may decline in quality. Some research papers are already filled with GPT-generated paragraphs produced using data from dubious sources. Many media articles published online are now also copy-pasted from AI apps. There’s undoubtedly more and more content, but not more quality. And the more content there is, the more we’ll ask AIs to digest it for us, which creates a vicious cycle. Piles of shit upon piles of shit on the internet…
Data contamination and data poisoning seem unstoppable. Not only will the so-called hallucinations not disappear, they may even spread! I’ve recently come across the phrase “AI inbreeding” to describe the phenomenon:
The term refers to the way in which generative AI systems are trained. The earliest large language models (LLMs) were trained on massive quantities of text, visual and audio content, typically scraped from the internet. We’re talking about books, articles, artworks, and other content available online – content that was, by and large, created by humans.
Now, however, we have a plethora of generative AI tools flooding the internet with AI-generated content – from blog posts and news articles, to AI artwork. This means that future AI tools will be trained on datasets that contain more and more AI-generated content. Content that isn’t created by humans, but simulates human output. And as new systems learn from this simulated content, and create their own content based on it, the risk is that content will become progressively worse. Like taking a photocopy of a photocopy of a photocopy. It’s not dissimilar to human or livestock inbreeding, then. The “gene pool” – in this case, the content used to train generative AI systems – becomes less diverse. Less interesting. More distorted. Less representative of actual human content. (Forbes)
A Cornell University study explains that inbreeding will lead to less effective generative AI because “without enough fresh real data in each generation … future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.”
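The mechanism is easy to simulate. The toy sketch below is my own illustration, not the study’s methodology: each generation’s “model” is a simple Gaussian fitted to a small sample drawn from the previous generation’s model, with no fresh human data added. The fitted spread, a crude stand-in for diversity, tends to drift downward over the generations.

```python
import random
import statistics

# Toy illustration of "AI inbreeding": each generation's "model" is a
# Gaussian fitted to a finite sample drawn from the previous generation's
# model, with no fresh real data added. The fitted spread (a crude proxy
# for diversity) tends to shrink over generations. Parameters are arbitrary.

random.seed(42)
mean, stdev = 0.0, 1.0    # generation 0: the "real" human data distribution
sample_size = 20          # assumed: each generation trains on only 20 samples

for generation in range(1, 31):
    sample = [random.gauss(mean, stdev) for _ in range(sample_size)]
    mean = statistics.fmean(sample)      # refit on purely synthetic data
    stdev = statistics.pstdev(sample)    # estimate of spread, biased slightly low
    if generation % 5 == 0:
        print(f"generation {generation:2d}: stdev = {stdev:.3f}")
```

It’s a caricature, of course (real LLM training is nothing like fitting a Gaussian), but it captures the study’s point: without fresh real data, diversity is the first thing to go.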
#3 AI firms will face more and more copyright lawsuits and legal costs
To train their models, AI firms have used a lot of books, art, studies and ideas made by humans without asking for permission. And now the people whose work was swallowed by LLMs aren’t happy about it. The New Republic titled one of its articles “Silicon Valley’s Big A.I. Dreams Are Headed for a Copyright Crash”. The AI industry will face more and more legal disputes in the years ahead.
The proliferation of AI models, led by major players like Microsoft-backed OpenAI and Meta Platforms, has sparked concerns among writers, artists, and copyright holders who argue that AI’s progress relies heavily on their original works. Prominent figures such as bestselling authors John Grisham and George R.R. Martin, comedians like Sarah Silverman, and institutions like Getty Images and the New York Times have initiated lawsuits over the unauthorized use of their content in AI training.
Some lawsuits have already been partially dismissed. But the legal battles have only just begun. There will be more. They will keep a lot of lawyers very busy for many years. And they will cost hundreds of millions of dollars. Tech companies have already mobilised formidable legal teams to defend their practices. They argue that AI training qualifies as "fair use" under copyright law and they liken it to human learning processes, but ongoing cases could set crucial precedents for AI copyright litigation and shape the industry's future trajectory.
The fundamental goal of A.I. is to reap the benefits of creative or intellectual labor without having to pay a human being—writers, artists, musicians, lawyers, journalists, architects, and so on—to perform it. A.I. developers, in other words, seek to create something from nothing. But that is not how the laws of thermodynamics work. And unless the courts and federal regulators suddenly embrace the tech industry’s novel new theory of fair use, it will not be how the laws of copyright work either. (The New Republic)
#4 AI still faces a diversity crisis
Artificial intelligence (AI) is facing a diversity crisis. If it isn't addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. (Nature)
The lack of diversity in AI, particularly the gender imbalance, is a major problem. It affects the development of future AI technologies and their impact on society. So many perspectives from different backgrounds, experiences, and genders are still missing from the process! This leads to biased algorithms which discriminate against women or perpetuate harmful stereotypes. To give AI systems a more promising future, diverse teams will have to consider broader ranges of use cases.
If AI systems continue to be developed primarily by homogeneous teams, they will exacerbate existing inequalities and biases in society. If non-diverse AI systems increasingly shape various aspects of our lives, from healthcare and finance to education and employment, it will create a lot of opposition and resentment.
AI’s relevance and efficiency are already distributed unevenly. For example, healthcare AI systems trained on data that predominantly represents one demographic group produce inaccurate diagnoses or treatments for other groups. Another example: biased AI algorithms used in hiring processes perpetuate gender or racial disparities in employment opportunities. Last but not least, as AI becomes more integrated into public systems (even justice), the consequences of biased or flawed AI decisions could lead to more safety hazards or miscarriages of justice.
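To make the hiring example concrete, here is a minimal, hypothetical sketch of the kind of check a fairness audit usually starts with: comparing selection rates across groups, the so-called demographic parity gap. The data is synthetic and the metric deliberately simplistic; real audits use richer data and several complementary metrics.

```python
# Minimal sketch of a first-pass hiring-bias check: compare the model's
# selection rate across two groups. Synthetic, illustrative data only.

candidates = [
    # (group, model_recommends_hire)
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", False),
    ("B", False), ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(group: str) -> float:
    decisions = [hired for g, hired in candidates if g == group]
    return sum(decisions) / len(decisions)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"selection rate A: {rate_a:.0%}, B: {rate_b:.0%}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.0%}")
```

A gap like this doesn’t prove discrimination on its own, but it’s exactly the kind of red flag a homogeneous team may never think to look for.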
In short, not only is generative AI unlikely to continue to progress exponentially, but it could have major, costly externalities that make its use less and less popular, more and more expensive and less and less relevant. So let’s keep a cool head, speak about generative AI in the present tense, beware of gurus, sort out present-day questions, and make sure we make it inclusive and sustainable.
🚀 📣 Caroline Taconet, Katerina Zekopoulos and I have released 2 new episodes of our podcast Vieilles en puissance, at the intersection of three themes: age, money, and women (in French).
The third episode, with Héloïse Bolle, is about economic violence 🎧
The fourth episode, with Sarah Zitouni, is titled “Comment donner du pouvoir à son moi futur quand on est salariée” (“How to empower your future self when you’re an employee”)👇
👉 SUBSCRIBE NOW TO THE VIEILLES EN PUISSANCE NEWSLETTER!
💡Check out the latest articles I wrote for Welcome to the Jungle: How women over 50 are reinventing their careers and the future of work, Ego depletion: The more decisions you make, the worse they become! (in English), JO : 5 leçons de grands champions français applicables au monde de l'entreprise, Santé des femmes au travail : différencier n’est pas discriminer !, Pourquoi les femmes seniors disparaissent des entreprises (et pourquoi l'éviter) (in French).
📹 Welcome to the Jungle released a new CORTEX video about homophily at work (in 🇫🇷)
Miscellaneous
😡 How did customer service get so bad?, Claer Barrett, Financial Times, April 2024: “If firms want to cut costs by using chatbots or other technology to filter out simpler queries, the staff they do employ need to be capable of solving increasingly complex issues. After wasting time exhausting a chatbot’s doom loop of questions, the sight of three flashing dots usually signifies that your problem has been shunted up the chain to an actual person. Yet how often do you get through to this person only to find that they stick rigidly to a script?”
🗣️ I stopped apologising for my poor German, and something wonderful happened, Ying Reinhardt, The Guardian, April 2024: “Learning and speaking German was anything but funny. It wasn’t funny when I started learning the language from scratch and it still wasn’t funny when I finished C1, a level that allows me to study at a German university if I want to. When I was learning Italian or French, the words would somehow roll off my tongue, but in German the convoluted grammar made me choke. Even if I could technically write academic essays in German, the thought of calling a clinic to make an appointment would still induce debilitating anxiety. I would stammer during small talk with a mother I had never met before (…) hide if I saw my neighbour take out the trash; or get my husband to call the ophthalmologist for an appointment.”
Beware of gurus and prophets!