Microsoft IDs Developers Behind Generative AI Hacking-for-Hire Scheme
REDMOND, WA — Microsoft has unmasked several individuals accused of operating an international hacking-for-hire scheme involving generative AI tools. According to an amended complaint filed in Virginia, the company identified actors from Iran, Hong Kong, Vietnam, and the UK as key players in what it described as a “global cybercrime network” selling access to jailbroken AI accounts.
The Allegations
In its original complaint, Microsoft identified 10 unnamed individuals only by the websites and tools they used. The amended filing names Arian Yadegarnia (aka “Fiz”) of Iran, Ricky Yuen (aka “cg-dot”) of Hong Kong, Phát Phùng Tấn (aka “Asakuri”) of Vietnam, and Alan Krysiak (aka “Drago”) of the UK as members of the group Microsoft tracks as Storm-2139. Two additional suspects, in Illinois and Florida, were also linked to the group but not named publicly to avoid disrupting ongoing criminal investigations.
The accused allegedly hijacked API keys belonging to Microsoft customers and sold access to Azure OpenAI accounts capable of bypassing built-in safeguards. These accounts were resold to buyers overseas who used them to generate harmful and prohibited content, including non-consensual intimate imagery of celebrities and other malicious material.
A Global Hack-for-Hire Network
Microsoft described the defendants as being “at the center of” a hacker-for-hire-style cybercrime network that blended stolen credentials, illicit access, and AI manipulation. Some members allegedly charged more than $100 per account, advertising their services through Discord servers, GitHub repositories, and other online platforms. Internal chat logs released by Microsoft showed group members panicking after the lawsuit became public, blaming one another and leaking personal details of Microsoft lawyers involved in the case.
Methods and Impact
According to court filings, the group exploited exposed customer credentials, altered generative AI models, and resold accounts while providing instructions for producing harmful or illicit content. In some cases, they allegedly offered tutorials on crafting malware or generating explicit deepfakes. Microsoft’s Digital Crimes Unit said the defendants operated much like traditional hackers for hire, selling technical access and expertise to buyers worldwide.
Broader Concerns
The case comes as regulators and civil society groups pressure AI companies to prevent their tools from being misused in disinformation campaigns or harmful applications. While U.S. intelligence officials have suggested foreign actors face barriers to accessing high-quality AI for election interference, cases like Storm-2139 highlight how criminal groups attempt to bypass restrictions through hack-for-hire tactics.
“These charges show that hacking-for-hire now extends into AI,” Microsoft’s general counsel noted, stressing that the company will continue referring evidence to U.S. and foreign law enforcement.
Conclusion
The rise of AI-focused hackers for hire underscores the convergence of two powerful forces: generative AI and the underground hacker-for-hire market. While Microsoft’s legal action has disrupted the Storm-2139 network, the case signals a growing trend in which AI capabilities are commodified through the same channels once used for malware, ransomware, and DDoS-for-hire schemes.