New Delhi: China-linked influence operations are increasingly turning to generative AI tools to refine disinformation strategies, spread covert state propaganda, smear critics, and create fake online personas — a trend that is raising serious concerns worldwide.
According to a report in The Diplomat, Beijing’s use of AI enables content to be tailored to local languages and cultural contexts, with a particular focus on youth audiences. By exploiting social media’s popularity among younger generations in developing regions, these campaigns can deceptively build trust in pro-China narratives and potentially shape the attitudes of future leaders.
AI in Influence and Espionage
The report cites an essay published in August by two Vanderbilt University professors, who examined leaked documents tied to the Chinese private firm GoLaxy. These sources revealed that AI was being used not only to produce misleading narratives for audiences in Hong Kong and Taiwan, but also to collect information on US lawmakers — creating detailed profiles potentially useful for espionage or future influence operations.
Fake News Networks and AI Personas
Disclosures from OpenAI, Meta, and Graphika point to a surge in China-linked AI-driven disinformation campaigns. While earlier operations focused on generating fake personas and deepfakes, recent activity shows a systematic effort to build entire fake news websites disseminating Beijing-aligned content in multiple languages.
Graphika’s “Falsos Amigos” report, published last month, identified a network of 11 fake websites created between December 2024 and March 2025, using AI-generated images and logos to appear credible.
Similarly, OpenAI’s June threat report highlighted now-banned ChatGPT accounts that generated names, logos, and profile pictures for fake news outlets and social media personas — including fabricated US veterans critical of the Trump administration in a campaign dubbed “Uncle Spam.” The operation sought to fuel political polarisation in the US, using AI-crafted visuals to mimic authenticity.
Simulated Engagement and Smear Campaigns
Another tactic involves coordinated engagement simulations, where a “main” account posts content followed by replies from AI-generated personas to mimic real discussions. The Uncle Spam operation, for instance, created comments from supposed American users debating tariffs.
AI has also been deployed in smear campaigns. The report highlights the case of Pakistani activist Mahrang Baloch, a critic of China’s investments in Balochistan. A TikTok account and Facebook page circulated a fake video falsely linking her to pornography, amplified by hundreds of AI-generated comments in English and Urdu to simulate organic reactions.
Growing Calls for Action
The findings underscore the urgency for social media platforms, AI developers, and democratic governments to counter China-linked disinformation networks. The increasingly sophisticated use of AI in propaganda poses risks not only to political stability but also to individual reputations and democratic discourse worldwide.
With inputs from IANS