The U.S. Department of Defense’s Joint Special Operations Command (JSOC) is reportedly exploring the use of AI to generate sophisticated fake online personas.
A JSOC procurement document outlines a desire for technologies capable of creating realistic digital identities, complete with images, video, and audio, for use on social media and other internet platforms.

The move is intended to aid intelligence gathering by making these AI-generated personas indistinguishable from real human users.
The initiative raises ethical questions and carries potentially far-reaching global ramifications.
While the Pentagon has previously employed fake social media users in its operations, the current focus on AI marks a significant evolution in digital intelligence tactics.
Special Operations Forces (SOF) are aiming to utilize this technology to extract information from public online forums more effectively.
This pursuit of AI-generated personas by the Pentagon comes despite ongoing warnings from U.S. government agencies like the NSA and CISA about the dangers of deepfake technology.
Both agencies have repeatedly cautioned that synthetic media can fuel misinformation and contribute to global instability.
The Pentagon’s effort to harness AI for digital deception could set a troubling precedent, encouraging other nations to adopt similar strategies.
Heidy Khlaaf, Chief AI Scientist at the AI Now Institute, expressed concerns about the potential consequences of these actions.
“This will only embolden other militaries or adversaries to do the same,” she noted, highlighting the risk of further blurring the lines between fiction and reality in geopolitical affairs.
The Pentagon has a history of interest in deepfake technology, having previously explored its use for influence operations and misinformation campaigns.
Recent reports reveal that SOCOM, the U.S. Special Operations Command that oversees JSOC, is actively seeking advanced methods of digital deception, including technologies like StyleGAN to create convincing fake personas.
This development has raised eyebrows given the U.S. government’s public stance against deepfake use by foreign adversaries.
The contradiction between condemning state-backed deepfakes abroad and pursuing the same capability at home could damage U.S. credibility.
Daniel Byman, a security studies professor at Georgetown University, stressed the potential for these actions to undermine domestic trust in government information.
Moreover, this strategic move could prompt other nations to follow suit, legitimizing the tactical deployment of deepfakes for international manipulation.
Both Russia and China have been detected using similar techniques for propaganda, prompting the U.S. to propose a “Framework to Counter Foreign State Information Manipulation” as a defense measure.
As the Pentagon continues to explore AI-driven deepfakes for clandestine online operations, the international community watches closely.
The implications for truth, trust, and global stability hang in the balance as technology blurs the line between digital personas and reality.