
Criminals are "vibe hacking" with AI to demand ransoms at scale: Anthropic


Despite "sophisticated" guardrails, AI firm Anthropic says cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks.

In a Threat Intelligence report released on Wednesday, members of Anthropic's threat intelligence team, including Alex Moix, Ken Lebedev and Jacob Klein, shared several cases in which criminals had misused the Claude chatbot, with some attacks demanding ransoms of more than $500,000.

They found the chatbot was used not only to give criminals technical advice, but also to execute hacks directly on their behalf through "vibe hacking," allowing them to mount attacks with only a basic knowledge of coding and encryption.

In February, blockchain security firm Chainalysis forecast that crypto scams could have their biggest year yet in 2025, as generative AI has made attacks more scalable and affordable.

Anthropic found one hacker who had been "vibe hacking" with Claude to steal sensitive data from at least 17 organizations, including healthcare providers, emergency services, government agencies and religious institutions, with ransom demands ranging from $75,000 to $500,000 in Bitcoin.

A simulated ransom note shows how cybercriminals leverage Claude to make threats. Source: Anthropic

The hacker trained Claude to assess stolen financial records, calculate appropriate ransom amounts and write customized ransom notes to maximize psychological pressure.

While Anthropic eventually banned the attacker, the incident reflects how AI is making it easier for even the most basic-level coders to carry out cybercrime to an "unprecedented degree."

"Actors who cannot independently implement basic encryption or understand syscall mechanics are now successfully creating ransomware with evasion capabilities [and] implementing anti-analysis techniques."

North Korean IT workers also used Anthropic's Claude

Anthropic also found that North Korean IT workers used Claude to forge convincing identities, pass technical coding tests and even secure remote roles at US Fortune 500 tech companies. They also used Claude to prepare interview responses for those roles.

Claude was also used to perform the technical work once they were hired, Anthropic said, noting the employment schemes were designed to funnel revenue to the North Korean regime despite international sanctions.

A breakdown of the Claude-powered activities used by the IT workers. Source: Anthropic

Earlier this month, a North Korean IT worker was counter-hacked, revealing a team of six who shared at least 31 fake identities, obtaining everything from government IDs and phone numbers to purchased LinkedIn and Upwork accounts to mask their true identities and land crypto jobs.

Related: Telegram founder Pavel Durov says his case is going nowhere, slams French government

One of the workers reportedly interviewed for a full-stack engineer position at Polygon Labs, while other evidence showed scripted interview responses in which they claimed to have experience at NFT marketplace OpenSea and blockchain oracle provider Chainlink.

Anthropic said its new report aims to publicly discuss incidents of misuse to assist the broader AI safety and security community and to strengthen the industry's defenses against AI abusers.

It said that despite implementing "sophisticated safety and security measures" to prevent the misuse of Claude, malicious actors continue to find ways around them.

Magazine: 3 people who unexpectedly became crypto millionaires… and one who didn't