AI Weaponization by Cybercriminals

Cybercriminals are rapidly weaponizing artificial intelligence and large language models (LLMs) through a combination of account hijacking, malicious model development, and automated attack capabilities. Threat actors have built sophisticated operations to harvest and resell access to ChatGPT and OpenAI API accounts via credential stuffing attacks and malware-based theft, and that stolen access lets buyers generate phishing lures and malicious scripts at scale while sidestepping built-in safety controls. Cybersecurity professionals report that generative AI has contributed to an alarming 85% increase in cyberattacks.
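Since working API keys are the commodity traded in these underground shops, one practical countermeasure is routine secret scanning, so leaked keys can be rotated before they are harvested. The following is a minimal, illustrative sketch rather than a technique described above: it walks a directory tree and flags strings matching the typical "sk-" prefix of OpenAI API keys, with the regex and reporting format chosen purely for demonstration.

```python
# Illustrative secret-scanning sketch (assumption: OpenAI-style keys begin with "sk-").
# Walks a directory tree and flags files containing strings shaped like API keys,
# so leaked credentials can be rotated before they are harvested and resold.
import re
from pathlib import Path

KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")  # loose, illustrative pattern

def scan_for_keys(root: str) -> list[tuple[str, str]]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in KEY_PATTERN.finditer(text):
            # Truncate the match so the report itself does not leak the key.
            findings.append((str(path), match.group()[:8] + "..."))
    return findings

if __name__ == "__main__":
    for file_path, key_prefix in scan_for_keys("."):
        print(f"Possible API key in {file_path}: {key_prefix}")
```

In practice, teams typically run purpose-built secret scanners in their CI pipelines rather than an ad hoc script like this, but the underlying idea is the same: find exposed credentials before attackers do.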

The emergence of specialized Dark LLMs in 2025, including HackerGPT Lite, WormGPT, GhostGPT, and FraudGPT, represents a significant evolution in criminal AI deployment. These maliciously modified models, created by jailbreaking mainstream safety-aligned AI systems or repurposing open-weight models such as DeepSeek, are deliberately engineered to operate without content restrictions or moral constraints, and access to them is marketed through subscription-based services on dark web forums. Social engineering techniques now appear in 98% of all cyberattacks, underscoring how widely AI-powered manipulation strategies have been adopted.
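Part of what makes open-weight models attractive targets for this kind of repurposing is that they run entirely on the operator's own hardware, so there is no hosted moderation layer sitting between the user and the model. The sketch below is a benign illustration of that point using the Hugging Face transformers library; the model ID is an assumed example of a publicly released open-weight chat model, not a reference to any criminal tooling.

```python
# Minimal local-inference sketch with an open-weight chat model (illustrative model ID).
# Everything runs on the operator's own hardware: there is no hosted moderation layer,
# only whatever safety behaviour is baked into the downloaded weights themselves.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/deepseek-llm-7b-chat"  # assumed example of an open-weight model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Summarize common signs of a phishing email."  # benign demonstration prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```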

Criminal organizations have integrated AI across the entire cyberattack lifecycle, from code generation to campaign optimization. Notable examples include FunkSec's AI-generated DDoS module and ransomware groups' deployment of custom ChatGPT-style chatbots, demonstrating how artificial intelligence enables faster development of sophisticated malware with fewer coding errors.

The financial impact of AI-enhanced cybercrime has reached unprecedented levels, with worldwide costs projected to hit $10.5 trillion annually by 2025. Automated AI systems now rapidly process massive logs of stolen credentials, with services like “Gabbers Shop” offering AI-enhanced validation of stolen data for more precise targeting in future attacks.
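Defenders can turn the same kind of automation against breach data: for instance, checking whether a password already circulates in leaked credential corpora before allowing it to be set. The sketch below is a generic defensive illustration, not something described in this article; it queries the public Have I Been Pwned "Pwned Passwords" range API, which uses k-anonymity so that only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
# Defensive illustration: check a password against the public Pwned Passwords corpus.
# Uses the k-anonymity range API: only the first 5 hex chars of the SHA-1 hash are sent.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print("Seen in breaches:", pwned_count("correct horse battery staple"))
```

Rejecting or force-rotating any credential with a non-zero count blunts the value of the stolen credential logs that AI-assisted validation services are built to exploit.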

Nation-state actors have emerged as particularly concerning developers of malicious AI capabilities, leveraging substantial resources to create proprietary models without ethical safeguards. Unlike criminal groups that typically modify existing systems, state-sponsored APT groups can develop original AI models, operating without the constraints of commercial platforms.

This development, coupled with increasing geopolitical tensions in 2025, has accelerated the integration of offensive AI capabilities into nation-state cyber operations, presenting a growing threat to global digital security.
