OpenAI took action on June 9, 2025, banning ChatGPT accounts linked to state-sponsored hackers from Russia, China, Iran, and North Korea following systematic exploitation of the AI platform for malicious purposes. The banned accounts were used to develop malware, run influence campaigns, and submit fraudulent applications for remote jobs worldwide, demonstrating sophisticated abuse of artificial intelligence capabilities.
State-sponsored hackers from four nations systematically exploited ChatGPT for malware development and global influence operations before OpenAI’s decisive intervention.
The exploitation patterns revealed alarming sophistication. Hackers used ChatGPT for open-source intelligence gathering on specific entities, drawing on AI assistance to troubleshoot system configurations and modify scripts. China-linked clusters developed brute-force scripts targeting FTP servers and researched ways to use large language models to automate penetration-testing activities. North Korean IT worker networks employed the platform for deceptive employment campaigns, submitting fraudulent job applications across global markets.
The technical exploitation methods showed how versatile AI-assisted cybercrime has become. Threat actors engaged ChatGPT to develop software packages for offline deployment, configure firewalls and name servers for malicious infrastructure, and build both web and Android applications. The platform also assisted in developing malicious code designed to control Android devices for social media manipulation. These abuses exploited the model's inherent flexibility, making it difficult for detection systems to distinguish malicious inputs from legitimate prompts.
Separate security concerns emerged around a medium-severity vulnerability, CVE-2024-27564, affecting a third-party ChatGPT tool. The flaw permits attackers to inject malicious URLs into an input parameter, potentially granting unauthorized access to sensitive information. Over 10,000 attack attempts from a single IP address targeted government and financial sectors within one week, and roughly one-third of affected organizations remained vulnerable because of security misconfigurations. The vulnerability's Exploit Prediction Scoring System (EPSS) score jumped from 1.68% to 55.36% as of March 20, 2025. Investigations traced the flaw to the pictureproxy.php file of the tool's codebase at commit f9f4bbc.
Integration security risks compound these threats: improperly configured settings create openings when ChatGPT is incorporated into existing systems. Server-side request forgery (SSRF) lets an application make unintended requests without authentication, which attackers can leverage to redirect users to malicious URLs from within the AI chatbot.
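The SSRF pattern described above typically arises when a proxy-style endpoint fetches whatever URL a client supplies. A minimal defensive sketch in Python illustrates one common mitigation; the allowlist, host names, and helper function here are illustrative assumptions, not drawn from any of the affected codebases:

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative allowlist of hosts the proxy may fetch from.
ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}

def is_safe_proxy_url(url: str) -> bool:
    """Reject URLs that could drive a server-side request forgery:
    non-HTTP(S) schemes, literal IP addresses (the classic route to
    loopback or cloud-metadata services), and hosts off the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # A literal IP bypasses name-based allowlisting, so refuse it.
        ipaddress.ip_address(host)
        return False
    except ValueError:
        pass  # Not a literal IP; fall through to the allowlist check.
    return host in ALLOWED_HOSTS
```

With this check, `is_safe_proxy_url("http://169.254.169.254/latest/meta-data")` and `is_safe_proxy_url("file:///etc/passwd")` are both rejected. A production implementation would go further, resolving the hostname and validating the resulting IP, since DNS rebinding can point an allowed-looking name at an internal address.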
These exploitation vectors can result in unauthorized access to consumer data, regulatory penalties, reputational damage, and significant business disruption for organizations utilizing generative AI technologies.