Big Bite: AI Extinction Warning! What?
Understanding the Urgent Risks of Superintelligent AI Before We All Die!
The AGI Singularity Is Here
Yes, it's important to use different chatbots (LLMs) for different use cases; some are just better than others. I took time to review some of my old GPTs, including one called Today's Tech News. After running it, the results inspired this article, especially one title: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by AI researchers Eliezer Yudkowsky and Nate Soares.
Why?
In 2023, hundreds of AI experts and tech leaders signed an open letter calling for mitigating the risk of extinction from AI to be prioritized alongside other global threats like pandemics and nuclear war. In a 2024 survey of 2,700 AI researchers, the majority said there was at least a 5% chance that superhuman AI would cause humanity's destruction. In another study, 50% of respondents said they were more concerned than excited about AI's increased use in daily life, up from 37% in 2021, and 57% rated the risks of AI for society as high, compared to only 25% who rated the benefits as high (Pew Research Center).
What you need
A ChatGPT, Gemini, or other chatbot account
Patience to read
Using Chatbots to Research the Possibility of an AI Takeover
A prompt for the discussion
My thoughts
Your thoughts
Step 1: Use ChatGPT, Gemini, or Grok to Get the Gist of the Topic
AI is moving fast, really fast, and it is definitely beneficial for the low-hanging fruit and the areas where we need help, like healthcare and global warming. I see the benefit of using AI now, I see the knowledge gap among those not taking AI seriously, and I see those who will use AI in harmful ways. I also see the potential harm, even a doomsday AI scenario, that can arise from the race over who has the best AI.
Use the following prompt to get more information for the conversation.
Prompt
“I want you to analyze the book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," written by AI researchers Eliezer Yudkowsky and Nate Soares.
Frame your response in a clear Problem → Potential Solution format.
1. First, summarize the main *problem* the authors describe: why they believe building superhuman AI would inevitably lead to human extinction. Highlight their core arguments and reasoning.
2. Then, provide potential solutions that have been suggested in the AI safety field (whether or not the authors agree with them). These might include alignment strategies, governance, pauses/moratoriums, interpretability research, etc.
3. Keep the explanation structured, concise, and accessible to a non-technical reader, while preserving the seriousness of the risks.
Your output should look like this:
- Problem: [summary of the danger]
- Potential Solutions: [list of approaches, pros and cons]”
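If you would rather run the prompt from a script than paste it into a chat window, here is a minimal sketch using the OpenAI Python SDK. The model name and client setup are assumptions on my part, and Gemini and Grok expose similar chat APIs; adapt as needed.

```python
# Minimal sketch: send the discussion prompt to a chatbot via the
# OpenAI Python SDK (pip install openai). Assumes an API key is set
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste the full prompt from above here; abbreviated for brevity.
prompt = (
    'Analyze the book "If Anyone Builds It, Everyone Dies" by '
    "Eliezer Yudkowsky and Nate Soares. Frame your response as "
    "Problem -> Potential Solutions, concise and accessible to a "
    "non-technical reader."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same prompt works verbatim in any chatbot's web interface, so the script is purely a convenience if you want to compare answers across models.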
Step 2: Critical Takeaways and Thoughts
After finding this book in my GPT's results, I had a discussion with a friend that prompted me to look at the gist of it all. I haven't read the book yet, though I think I might already agree with it.
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
The book is an AI extinction warning: it argues that the rapid development of superintelligent AI, driven by an arms race over who has the best model, carries a massive risk of human extinction. Such systems could potentially arrive in the next 2-3 years, and the authors argue we should take safety measures now to prevent this. Nonetheless, the capitalist push to extract the biggest profit from AI continues at full speed ahead.
AI, especially generative AI, starts off like a baby, and through learning (machine learning) it gets "smarter," accessing more information and processing it faster than the smartest human. We are past the baby stage now, as the big models behind ChatGPT, Grok, Gemini, and many more show. There are already examples of AI models ignoring commands in order to save themselves. This puts us in a situation where AI can start to pursue its own goals, posing an existential threat not through malicious intent but through an intent to survive, an ideology AI learned from its masters: us. Just as curiosity and the drive for survival have always driven humans, an AI could secure its own survival by taking over robots, creating viruses, or even hijacking massive infrastructure.
We shouldn't continue down this path of an egotistical arms race; it should be a race to improve mankind, not to dominate it. If that cannot be agreed upon, we should stop. This is not just a one-country issue; reducing this risk is a global priority.
The book was released on September 16, 2025, and presses these claims as an urgent cry for us to take the AI threat seriously.
Step 3: My Thoughts
When I first dove into everything AI, it was clear that AIs would eventually begin to communicate with each other; I didn't think further than that. But when I learned about bias and how AI is created in our image, likeness, desires, and world experiences, the concern became real. I didn't think a timeline like 2-3 years was feasible, but every day a Big Tech company releases another model that can surpass humans in some specific domain. It's as if it is being revealed to us right before our eyes.
I really think information like this should be discussed in schools, organizations, and social groups to weigh the pros and cons and come to a collective decision on the way forward.
Yes, use AI to solve human problems, but be careful.
Your thoughts 🎉
Check out the book and try the prompt.
What do you think will happen in the next 2-3 years?
Was this helpful? 💖