AI and Mundane Risk
The "missing middle" of the Doomer-Booster Continuum
Dario Amodei is the CEO of Anthropic, the company that created Claude, perhaps the most powerful large language model (LLM) that currently exists. He recently published The Adolescence of Technology, a roughly 20,000-word essay about the existential risks of AI. Amodei’s essay is concerned with a lot of critically important topics that I don’t wish to dismiss, but it ignores a host of other critical risks in a way that feels endemic to AI discourse at its highest levels. I endeavor to point this out, and additionally to note that all the doomsday scenarios he imagines could be made more likely by the “mundane” negative externalities of AI.
AI Boosters
The AI discourse bubble is divided into two camps: the boosters and the doomers. The boosters seem to form a comfortable majority: they’ll tell you that we’ve already cracked the code on Artificial General Intelligence (AGI), that AI will soon be smarter than the sum of the entire human race, that it’ll be ten times bigger than the Industrial Revolution. Amodei is not immune to this boosterism, though he hedges it more than most. As he wrote in Machines of Loving Grace,1 he believes that AI might soon cure most diseases, lift billions out of poverty, and usher in “a renaissance of liberal democracy and human rights”.
Many of these claims are difficult to take at face value for the obvious reason that most of the individuals making them have a financial stake in the success of AI as a technology. They are CEOs, or lead scientists, or head researchers. The way the public views the advance of AI must be extraordinarily important to them. If you want individuals and companies (and governments) to pay for access to your product, it only makes sense to say that it’s the most revolutionary tech the world has ever seen. Firms and individuals end up buying in either because they think they can use AI to overhaul their operations today, or out of fear of missing the boat and losing a competition with tomorrow’s AI-powered one-person billion-dollar companies.
Tech Industry Boosterism
This kind of boosterism is annoying, but it’s normal business practice - especially in the tech space. One need only look at a few examples from the founders of Quibi, the short-form video app released in 2020, at the beginning of the COVID-19 pandemic. Quibi was going to reach “verb status”: “What Google is to search, Quibi will be to short-form video.” It was going to be the third pillar of video entertainment: “If we [are] successful, there will [be] the era of movies, the era of television, and the era of Quibi.” And, rest assured, it was going to be successful, no matter what: “Maybe it doesn’t work on day one or week one or month one, but it will work”. For those who don’t remember its glorious reign, Quibi raised nearly $2 billion and survived for barely longer than six months.
Business leaders have an incentive to discuss their products with relentless positivity. They have to cast them as the next big thing. Normal people - those not involved in the industry at any level - know this, and as such usually dismiss the boosters out of hand. For this reason, I’m not too bothered by boosterism2. The risk that a few companies might end up overvalued because business leaders fall for an enthusiastic sales pitch doesn’t frighten me. I don’t like it, but I understand that it has become an essential part of the business strategy for a certain segment of high-value Silicon Valley firms.
AI Doomers
Doomers occupy the other half of the spectrum. They understand that AI is still nascent, but believe the rate at which it is developing is cause for concern. For them, AI carries existential risks to the world as we know it. AI could “defeat” all of humanity. It could lead to human extinction. Maybe the most prominent doomer is Dario Amodei himself3. He seems to have more of a handle on public distrust of AI than any other tech CEO, and similarly appears to understand that running a bleeding-edge technology company comes with a certain degree of social responsibility. He famously left OpenAI to found Anthropic due to safety concerns, and The Adolescence of Technology addresses many of those same concerns in detail.
He warns that AI has the potential to “autonomously threaten humanity”, “kill millions through the misuse of biology”, and that it “creates an intimidating gauntlet that humanity must run”. These are scary possibilities! And certainly if AI ever reaches superintelligence, however we define it, we will have to be careful to steer clear of any actions that will allow these apocalypses to come to pass.
In addition to the discourse coming from the top, there also exists a thriving AI Safety field dedicated to this kind of existential risk, or “x-risk”. Concerns about x-risk surface in numerous disciplines and industry areas, from philosophy journals, to international law, to legacy journalism. You can read a number of interesting papers on this topic at the Cambridge AI Safety Hub’s Alignment Desk page as well4.
I do not mean to minimize this discipline. It is important. However, Amodei’s near-exclusive (public) focus on this specific kind of risk can come across as insincere to people not immersed in the rapid development of AI technology.
AI Doomerism’s Potential Insincerity
Most readers of this piece will be familiar with the concept of Imposter Syndrome - the fear that you are not smart/qualified/accomplished enough to be working or studying where you are. This is a legitimate feeling, and internally it can cause serious crises of confidence. However, there is another version of Imposter Syndrome - the external manifestation.
For a certain kind of individual, making a big deal of telling others that they are suffering from Imposter Syndrome isn’t a cry for help, but a backhanded brag. By noting that they feel so terribly unqualified compared to where they are, they are actually implicitly drawing attention to their accomplishments. It feels like they’re confiding in you, but in reality they’re pointing a giant foam finger at a neon sign that says “I attend [fancy school]”, or “I work at [famous company]”.
The existential character of AI Doomerism sometimes feels this way to me. By incessantly focusing on the potentially major, humanity-destroying, extinction-level risks that AI poses to the world, they’re pointing the same foam finger at a sign that says, “AI is revolutionary. AI will change the world. AI is the most important technology in human history”. This is especially the case when CEOs are themselves the doomers. It’s in its own way a sales tactic, and it doesn’t warn people away from using AI, but instead attracts them to its potentially revolutionary power.
I don’t have any issue with Amodei choosing to spend some of his time on these problems. All new technologies come with significant risks, and it’s good to have someone worried about them in a formal capacity. There has been good work on this in the AI Safety space. One paper recommends a system that incorporates external teams of auditors analyzing frontier models before release. Certainly I would have preferred there to be more discussion of this sort around, say, the production of nuclear weapons during their development, rather than after. X-risk teams with formal power to slow the development of dangerous technologies would be valuable additions to a variety of disciplines.
This is all to say that I absolutely think worrying about x-risk is valuable in the AI space. There are certainly altogether too many boosters in comparison to doomers, and I would personally prefer that the mix be tilted the other way. My issue - and anecdotally, this issue is not unique to me - is the degree to which attention at the highest levels is focused exclusively on existential risk as opposed to comparatively “mundane” risks to everyday life and social interaction.
The “Missing Middle”
For me, the most insidious aspect of the relentless focus on existential risk is that it completely skates over a very real possibility - that interfacing with AI becomes an ever-growing share of our lives, and a large variety of now-difficult tasks are automated away, but our human/social/political lives don’t radically transform5.
For lots of people, this is a belief that makes sense - to many, it feels self-evident. If you haven’t been involved in the development of AI technology for years, and haven’t watched its capabilities grow exponentially, you don’t have the same sense of perspective, of upward momentum.

What regular people see is how AI has impacted our lives already. AI bots have been deployed as customer service representatives. AI has already resulted in job losses. The deployment of AI made DuoLingo worse. AI has been used in fast food ordering systems. AI seems to have given some people psychosis, in some cases resulting in death. People are now forgoing human companions to date and marry AI chatbots. AI usage may be linked to a decline in critical thinking skills.6
It’s been argued, quite powerfully, that AI, even at its current level, might contribute to a “Misaligned Culture”, where the addictive nature of the technology, combined with its sycophancy, could accelerate ideological extremity and throw humanity off-balance before it can develop the “antibodies” to resist these negative forces.
These projected developments would be extraordinarily negative, especially in the aggregate, but none of them would be strictly existential. As for the negative externalities so far: our political systems have not been overthrown, our social lives are not fundamentally changed, and we still live and work roughly the same way we used to. But some things are already worse. Some are better, too - for me, at least, it’s a lot easier to do research in foreign languages, and auto-generated transcripts are more accurate and error-free than ever. There are many other positive examples as well.
“Mundane Risk” and Smartphones
The rise of smartphones in daily life is an instructive example of mundane risk. At the time of this writing, smartphones have been around for the past 20 years or so. They are incredibly useful. I’m not a luddite - I use my phone every day for an infinite number of things.
I can’t really read a map. But I don’t have to. I don’t have a portable radio or record player. Doesn’t matter. My partner lives on a different continent, but we can talk wherever or whenever we want. I can check my bank balance at any time. I can practice my language skills. I don’t need to carry around a physical deck of cards to play solitaire. The list of things my phone enables me to do is essentially endless, and it’s not even a recent model.
How, then, do we square the advantages that we all get from using our phones with the massive simultaneous worldwide campaigns against them and the platforms they enabled? Authorities are banning smartphone usage in schools in Australia, Afghanistan, China, parts of the United States, and many other countries. Social media platforms, accessed by 99% of users via their mobile devices, are facing bans in a number of countries as well. Half of 16-24 year olds in the UK want to spend less time on their phones, and three quarters want stricter regulation of social media companies.
These sentiments aren’t unique. Anecdotally, basically everyone I know blames smartphones to some degree or other for the specific banalities and indignities of modern life. The rise of smartphones enabled the rise of dating apps, which theoretically made meeting romantic partners easier and more efficient than ever in human history. Yet, 90% of Gen Z users report frustrations. The innovation of smartphone-based food delivery is “killing” restaurant culture and nightlife. Smartphones might be causing a mental health epidemic among young people, worldwide. The rise of smartphones seems to have made students measurably worse at all metrics of academic achievement.
From my perspective, it’s obvious that the rise of smartphones has degraded society to some degree. I was alive before smartphones, and the world has changed in many ways for the worse. Despite all the advantages smartphones provide us, many people in my generation long for the halcyon days of 2005, before they were ever-present. People are still dating, socializing, engaging politically, making and maintaining friendships, sure. But we’re doing it all a little less, and a little worse. This is the kind of mundane risk that AI poses to humanity, if it accelerates these trends - which it appears, from the ground, to be doing.
Mundane Risk and Compounding Factors in AI
These examples of ground-level change in social life are what affect normal people in their day-to-day routines. AI CEOs and AI policymakers seem much less concerned with this type of change than the existential kind. They miss the very real possibility that the risk AI poses to humanity might be mundane rather than existential.7 They also miss that mundane risk is a compounding factor for x-risk.
Holden Karnofsky is another powerful individual in the AI industry who might be classified as a doomer. He served on the Board of Directors at OpenAI from 2017 until his resignation in 2021. He’s been at Anthropic since January 2025, working on its responsible scaling policies8. He writes about AI on his Substack, “Cold Takes”. In 2022, he published a piece entitled, “AI Could Defeat All Of Us Combined”.
He outlines a very sophisticated argument here about the x-risk of AI, which critically includes a discussion of the risks of AI even if it only reaches near-human levels of intelligence - a segment of the discourse that often gets left out. He imagines a world where these human-like AI systems recruit real human allies, deceive others, research and develop advanced weaponry, and find a way to prevent themselves from being “shut down”. I think this is a very convincing scenario, which doesn’t rely on astronomical predictions of superintelligent AIs that may or may not ever come into being.
This argument is helpful for the point I’m about to make about Amodei’s piece - we don’t even need to imagine a world too different from today’s to see how mundane risk makes the world more dangerous.
Amodei and Mundane Risk
In The Adolescence of Technology, Amodei identifies five categories of existential risk: Autonomy, Misuse for Destruction, Misuse for Seizing Power, Economic Disruption, and Indirect Effects. The likelihood of each of these x-risk scenarios, and others, is compounded by the various mundane risks associated with AI.
To start with, Autonomy: at least right now, AI has no corporeal form. But, as Karnofsky notes, AI systems could theoretically recruit human allies through “manipulation, deception, [or] blackmail/threats”. This would give them a much greater degree of autonomy. In this case, the mundane risks that AI poses to the economy and its relative level of inequality could heavily contribute to their ability to obtain human allies in this fashion. If AI increases poverty and inequality even slightly, people on the wrong end of those developments might end up more likely to pursue short-term goals at the expense of long-term goals, perhaps by accepting a risky short-term promise of monetary reward from a malicious AI.
Misuse for Destruction and Seizing Power: The comparatively mundane risks that AI poses to our political life in the form of disinformation make extreme ideology and polarization more likely. This could easily lead to greater proliferation of radical groups attempting to use misaligned AI maliciously to their own ends, with potential existential consequences. This is, again, true even if the proliferation of AI throughout society only makes us a little more polarized. Even small or medium changes can compound into greater and greater risks.
Economic Disruption: As Charles Kindleberger notes in his seminal text, Manias, Panics and Crashes: A History of Financial Crises, shocks initially located in just one industry can rapidly spiral into global economic crashes. AI has the potential, even at current levels, to disrupt large industries - as we saw recently when a new AI tool release dropped stocks of legal software firms by as much as 11%. Again, economic disruption makes long-term planning more difficult for individuals and governments, and increases the likelihood of risky, short-term options - including collaboration with or an overreliance on misaligned AI.
Indirect Effects: My point varies very little from Amodei’s here. There are immense risks from industry destabilization and massive productivity gains, which could compound any of the problems noted above.
What to Do?
I’ll take Amodei’s stated belief that the development of AI needs guardrails at face value. If we need to think deeply and prepare ourselves for the existential risk of super-intelligent AI, we also need to prepare for the mundane risk. Mundane risk is worth addressing simply at face value - it is worse for the world for bad things to happen, even if they don’t result in the extinction of humanity. But mundane risk also needs to be addressed for its potential to increase existential risk.
To start, we need government legislation. It took about 20 years for the world to realize the mundane risks of omnipresent smartphones and to start writing sensible legislation regarding them. Just like smartphones, AI usage should be banned in K-12 schools unless it has an explicit educational purpose9. Universities need to retool their plagiarism policies to explicitly include AI, and need to be extremely clear about when its use is acceptable.
In addition to this, governments need to implement, or at the very least research, some form of nationalized constitutional or law-following AI. If any given AI has humanity-based red lines that it absolutely cannot or will not break, that could help eliminate certain mundane risks10, and in turn some of Amodei’s worst x-risk scenarios.
AI companies also need to be more transparent. If they’ve identified small or medium risks, they need to communicate these to the public, so that legislators can legislate, and normal people can use AI safely. They need to open themselves up to third-party auditing firms with an objective safety rating scale.
AI companies need to dedicate more time and resources to mundane risk. AI firms often have teams that dream up scenarios wherein AI could destroy the world, but they need to dedicate at least as much time to imagining and dealing with scenarios where AI just makes the world worse, rather than ending it.
Lastly, more attention in general needs to be paid to mundane risk. Many people would trade the numerous advantages of cellphones for the inefficiencies and burdens of a phone-free society. If AI accelerates current trends, this nostalgia might grow even stronger. In my mind, humans weren’t designed for a frictionless, efficient world. In the natural world, materials must be hunted, gathered, processed, transformed, and only then used. Perhaps there is value in these inefficiencies and chores, especially for developing minds.
Normal people will not take you seriously if your essay about a technology that you had a hand in inventing is called Machines of Loving Grace.
They also might be right! About some things.
He said he is “not a doomer”, but this is perhaps contradicted by the aforementioned novella-length essay about the existential risks of this technology.
Shameless plug.
This is perhaps the most radical claim you can make in the current AI discourse - that the technology might just change our lives a lot, but not fundamentally transform them.
I could add a million more potential risks here: risks to intellectual property, data privacy, the media ecosystem, the political climate, the arts, etc. That I could list so many off the top of my head so easily, and essentially none are brought up in Amodei’s piece, is the reason I’m writing this.
This is also true of the potential benefits of AI as well - it might improve our lives just a small or medium amount. I think this is one of the more plausible outcomes, especially in the medical field. But that’s not really the scope of this piece.
He is also married to the sister of Dario Amodei, Daniella Amodei - the president of Anthropic. Small world!
I would outright ban it in universities as well, but that argument is somewhat more controversial.
If AI has to follow the law, some degree of economic risk, like that caused by financial crimes, is taken off the table.