
North Korean state hackers are now using artificial intelligence to forge military IDs, an escalation in cyberattacks that threatens national security systems and exposes the dangers of unrestrained technology to American allies.
Story Snapshot
- North Korean hackers used ChatGPT to create fake South Korean military IDs for spear-phishing campaigns targeting high-value individuals.
- This marks a dangerous escalation in AI-driven cyber-espionage, bypassing traditional digital security measures.
- Victims include researchers, journalists, and human rights activists; the campaigns are ongoing and their full impact has not yet been determined.
North Korean Hackers Exploit Generative AI to Forge Military IDs
In a chilling demonstration of technological abuse, the North Korean Kimsuky group has leveraged advanced generative AI tools—including ChatGPT—to manufacture convincing forgeries of South Korean military identification cards. These deepfake IDs were attached to phishing emails impersonating defense institutions, luring recipients into downloading malware designed to steal sensitive data and allow remote system control. This calculated misuse of AI reveals how hostile actors are weaponizing Western technology to undermine U.S. allies and destabilize the global security order.
Cybersecurity specialists first detected this campaign on July 17, 2025, when Genians Security Center identified the AI-generated forgeries. Full details were published in September, confirming that the hackers had circumvented standard AI safety protocols using prompt engineering—crafting queries that manipulate AI tools into bypassing safeguards and producing illicit content. These phishing attacks represent a significant escalation in cyber-espionage, capitalizing on vulnerabilities introduced by the global proliferation of AI and underscoring the urgency of updating digital defenses in both the U.S. and allied nations.
Targeted Attacks on Researchers, Journalists, and Human Rights Advocates
The victims of these sophisticated phishing campaigns included prominent researchers, journalists, and human rights activists focused on North Korea. By harnessing AI to forge realistic credentials, the hackers increased the credibility of their social engineering attacks, making it more difficult for even experienced professionals to detect fraudulent communications. The campaign remains ongoing, and the extent of the compromise is still unknown, raising red flags about the safety of sensitive information and the potential for further disruption in academic and advocacy circles committed to exposing totalitarian regimes.
Historically, North Korean groups like Kimsuky and Lazarus have specialized in espionage and financial theft, but the adoption of AI for document forgery marks a new and alarming evolution in their operational toolkit. Past incidents have seen North Korean hackers use AI-generated résumés to infiltrate U.S. tech firms, highlighting a pattern of adapting cutting-edge Western technology for hostile purposes. With each step, the risks to both national and allied security grow—underscoring the failure of previous globalist and “open tech” policies to anticipate or contain such threats.
Weak AI Safeguards, Prompt Engineering, and the Call for Stronger Defenses
Industry leaders including OpenAI and Anthropic have acknowledged that their platforms have been repeatedly abused despite built-in restrictions. The hackers' use of prompt engineering shows the inadequacy of current safeguards and the ease with which determined adversaries can exploit even the most advanced systems. This vulnerability not only undermines the promises of responsible AI development but also exposes the West's reliance on digital credentials and remote verification, both of which are now easily subverted by AI-driven forgery.
The same technology that powers innovation and economic growth must not be left unguarded for the world’s most dangerous regimes to weaponize. For American conservatives, this is a clear call to reject naïve globalist assumptions about technology and insist on robust, constitutionally grounded cybersecurity policies that put national defense, personal liberty, and the safety of our allies first.
Global and Domestic Security Implications
The impact of North Korea’s AI-enabled cyber-espionage is profound. In the short term, there is a heightened risk of compromised research, stolen data, and the erosion of trust in digital credentials among allies. Over the longer term, the unchecked use of generative AI by enemy regimes will foster broader instability, sap confidence in technology, and embolden adversaries eager to exploit any weakness in Western systems. The U.S. must prioritize safeguarding critical infrastructure and defending against both digital and ideological threats that undermine the Constitution and our way of life.
As the Trump administration restores common-sense national security priorities, this episode is a powerful reminder of why America must lead in both cybersecurity innovation and the responsible governance of emerging technologies. Only by facing these threats head-on—and rejecting the failed policies of the past—can we ensure the future safety and prosperity of our nation and its allies.
Watch the report: ChatGPT forged military IDs for North Korean hackers – YouTube
Sources:
AI-Military IDs North Korea – Infosecurity Magazine
North Korea, China Hackers Infiltrate Companies with AI Resumes, Military ID – Business Insider
North Korean Hackers Use AI to Forge Military IDs – Fox News
North Korea’s New AI-Generated Espionage Tool – Claims Journal
This is North Korea’s New AI-Generated Espionage Tool – Israel Hayom