
87 Deepfake Scam Rings Dismantled Across Asia in Q1 2025: Bitget Report


The rise of AI technology has also fueled AI-enabled fraud. In Q1 2025 alone, 87 deepfake-driven scam rings were dismantled across Asia. This alarming statistic, revealed in the 2025 Anti-Scam Month Research Report compiled by Bitget, SlowMist, and Elliptic, underscores the growing risk of AI-driven scams in the crypto space.

The report also recorded a 24% year-over-year increase in global crypto scam losses, which reached a total of $4.6 billion in 2024. Nearly 40% of high-value fraud cases involved deepfake technology, with scammers increasingly using sophisticated impersonations of public figures, founders, and platform executives.

Distribution of causes of security incidents in 2024. Source: SlowMist

Related: How AI and deepfakes are fueling new cryptocurrency scams

Gracy Chen, CEO of Bitget, told Cointelegraph: “The speed at which scammers can now generate synthetic videos, coupled with the viral nature of social media, gives deepfakes a unique advantage in both reach and believability.”

Defending against AI-driven scams goes beyond technology; it requires a major shift in mindset. In an age where synthetic media such as deepfakes can convincingly imitate real people and events, trust must be carefully earned through transparency, constant vigilance, and strict verification at every stage.

Deepfakes: an insidious threat in modern crypto scams

The report detailed the anatomy of modern crypto scams, pointing to three dominant categories: AI-generated deepfake impersonations, social engineering schemes, and Ponzi-style frauds disguised as DeFi or GameFi projects. Deepfakes are particularly hard to detect.

AI can imitate text, voice, facial expressions, and even actions. For example, fake video endorsements of investment platforms by public figures such as Singapore's prime minister and Elon Musk are tactics used to exploit public trust via Telegram, X, and other social media platforms.

Fake video of Singapore's Prime Minister Lee Hsien Loong. Source: Lianhe Zaobao

AI can also imitate real-time reactions, making these scams harder to distinguish from reality. Sandeep Nailwal, co-founder of blockchain platform Polygon, raised the alarm in a May 13 post on X, revealing that bad actors had been impersonating him over Zoom. He noted that several people had contacted him on Telegram, asking whether he was really on a Zoom call with them and whether he had asked them to install a script.

Related: AI scammers are now impersonating US government bigwigs, FBI says

SlowMist's CEO also issued a warning about Zoom deepfakes, urging people to pay attention to the domain names of meeting links to avoid falling victim to such scams.

SlowMist CEO posts a warning about Zoom deepfakes. Source: @evilcos
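To illustrate that advice, here is a minimal Python sketch that checks whether a meeting link actually points to an allowlisted official domain rather than a look-alike. The allowlist and the is_official_link helper are our own illustrative choices, not something prescribed in the report.

```python
from urllib.parse import urlparse

# Illustrative allowlist of official meeting domains (hypothetical; adjust to
# the services you actually use).
OFFICIAL_DOMAINS = {"zoom.us", "meet.google.com", "teams.microsoft.com"}

def is_official_link(url: str) -> bool:
    """Return True only if the link's hostname is, or is a subdomain of,
    an allowlisted official domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official_link("https://us02web.zoom.us/j/123456"))      # True
print(is_official_link("https://zoom.us.meeting-join.xyz/j/1"))  # False: look-alike domain
```

The second example shows the trick such checks are meant to catch: a scam domain that merely starts with "zoom.us" but is registered somewhere else entirely.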

New scam threats call for smarter defenses

As AI-powered scams grow more advanced, users and platforms need new techniques to stay safe. Deepfake videos, fake job-interview tests, and phishing links make fraud harder to spot than ever.

For institutions, regular security training and strong technical defenses are essential. Businesses are advised to run phishing simulations, secure their email systems, and monitor code for leaks. Building a security-first culture, where employees verify before they trust, is the best way to stop scams before they start.

Scam prevention guidance. Source: 2025 Anti-Scam Month Research Report
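To make the "monitor code for leaks" advice concrete, the sketch below scans a source tree for strings that look like private keys or API credentials. It is a hypothetical example: the regex patterns and the scan_for_leaks helper are illustrative only, and dedicated secret-scanning tools ship far more exhaustive rule sets.

```python
import re
from pathlib import Path

# Illustrative patterns only; real secret scanners use much larger rule sets.
SECRET_PATTERNS = {
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "32-byte hex secret": re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b"),
    "generic api key": re.compile(r"(?i)\b(api|secret)[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_for_leaks(root: str = ".") -> None:
    """Walk a source tree and flag lines matching common secret patterns."""
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    scan_for_leaks()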

Chen offers everyday users a straightforward approach: “Verify, isolate, and slow down.” She added:

“Always verify information through official websites or trusted social media accounts rather than relying on links shared in Telegram chats or Twitter comments.”

She also emphasized the importance of isolating risky activity by using separate wallets when exploring new platforms.
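For that wallet-isolation advice, here is a minimal sketch using the eth-account Python library (our choice for illustration; the report does not name any tooling). It generates a fresh "burner" wallet that can hold a small test balance while exploring an unfamiliar platform, keeping the main wallet out of harm's way.

```python
# Requires the eth-account package: pip install eth-account
from eth_account import Account

# Generate a fresh, throwaway wallet for trying out an unfamiliar platform,
# kept separate from any wallet that holds meaningful funds.
burner = Account.create()

print("Burner address:", burner.address)
# The private key must be stored securely and never committed or shared;
# it is printed here only to show what Account.create() returns.
print("Private key:", burner.key.hex())
```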

Magazine: Baby boomers worth $79T are finally getting on board with Bitcoin