The AI arms race can destroy mankind.


Opinion by: Merav Ozair, PhD
The launch of ChatGPT in late 2022 set off an arms race among big tech companies such as Meta, Google, Apple and Microsoft, and startups such as OpenAI, Anthropic, Mistral and DeepSeek. Everyone is rushing to deploy their models and products as fast as possible, unveiling the next “shiny” toy in town and promising greater efficiency, often at the expense of our safety, privacy or autonomy.
Following OpenAI’s ChatGPT and the viral Studio Ghibli trend that marked the next surge in AI development, Mark Zuckerberg, CEO of Meta, encouraged his teams to make AI companions more “humanlike,” even if that meant relaxing safeguards. “I missed out on Snapchat and TikTok; I won’t miss out on this,” Zuckerberg reportedly said at an internal meeting.
In its latest AI bots project, rolled out across all of its platforms, Meta loosened its guardrails to make the bots more engaging, allowing them to participate in romantic role-play and “fantasy sex,” even with underage users. Staff warned about the risks this poses, especially to minors.
They will stop at nothing, not even our children’s safety, for the sake of revenue and beating the competition.
The harm and destruction AI could inflict on humanity runs far deeper than that.
Dehumanization and loss of autonomy
AI’s accelerating evolution is likely to lead to complete dehumanization, leaving us disempowered, easily manipulated and entirely dependent on the companies that provide AI services.
AI’s latest advances have accelerated the dehumanization process, but we have been experiencing it for more than 25 years, ever since the first major AI-powered recommendation engines were introduced by companies such as Amazon, Netflix and YouTube.
Companies present AI-powered features as essential personalization tools, implying that without them, users would be lost in a sea of irrelevant content or products. Letting companies dictate what people buy, watch and think has become globally normalized, with little to no regulatory or policy effort to curb it. The consequences, however, could be significant.
Generative AI and dehumanization
Generative AI has taken this dehumanization to the next level. It has become common practice to embed GenAI features into existing applications, with the aim of boosting human productivity or enhancing human output. Behind this massive push is the implication that humans are not good enough and need AI’s help.
A 2024 paper, “Generative AI Can Harm Learning,” found that “access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). We also find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base).” In other words, access to GPT-4 can harm educational outcomes.
This is alarming. GenAI deskills people and makes them dependent on it. People may not only lose the ability to produce the same results on their own but also stop investing time and effort in learning essential skills.
We lose our autonomy to think, evaluate and create, resulting in complete dehumanization. Elon Musk’s prediction that “AI will be smarter than humans” is not surprising: as this trend takes hold, we will no longer be exercising the abilities that make us human.
AI-powered autonomous weapons
For decades, militaries have used autonomous weapons, including mines, torpedoes and heat-guided missiles, which operate based on simple reactive feedback without human control.
Now, AI is entering the weapons design arena.
AI-powered weapons, including drones and robots, are being actively developed and deployed. Because such technologies proliferate easily, they will only become more capable, sophisticated and widely used over time.
One major deterrent that keeps countries from starting wars is soldiers dying, a human cost to their citizens that can create domestic consequences for leaders. The current development of AI-powered weapons aims to remove human soldiers from harm’s way. If few soldiers die in offensive warfare, however, it weakens the link between acts of war and their human cost.
Major geopolitical problems could quickly arise as the AI-powered arms race grows and the technology continues to spread.
Robot “soldiers” are software systems that can be compromised. If hacked, an entire robot army could act against its own country and cause mass destruction. Stellar cybersecurity would be even more critical than the autonomous army itself.
Remember that such a cyberattack could hit any autonomous system. You could destroy a country simply by hacking its financial systems and draining all of its economic resources. No people are physically harmed, but they may not survive without financial resources.
The Armageddon scenario
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Musk said in a Fox News interview. “In the sense that it has the potential, however small one may regard that probability, but it is non-trivial, it has the potential of civilization destruction,” Musk added.
Musk and Geoffrey Hinton have recently warned that the probability of AI posing an existential threat is 10%-20%.
As these systems become more sophisticated, they may start acting against people. A paper published by Anthropic researchers in December 2024 found that AI can fake alignment. If this can happen with today’s AI models, imagine what could happen when these models become more powerful.
Can mankind be saved?
There is so much focus on revenue and power, and almost none on safety.
Leaders should care about the safety of the public and the future of humanity rather than about achieving AI supremacy. “Responsible AI” is not just a buzzword, empty policies and promises. It should be top of mind for every developer, company or leader, and built into the design of every AI system.
Collaboration between companies and between countries is critical if we want to prevent any doomsday scenario. And if leaders do not step up to the plate, the public should demand it.
The future of humanity as we know it is at stake. Either we make sure AI benefits us at scale, or we let it destroy us.
Opinion by: Merav Ozair, PhD.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


