
New Delhi – As global security agencies confront terror threats on multiple fronts, a disturbing new trend is emerging: terrorist groups like the Islamic State (IS) are leveraging Artificial Intelligence (AI) to enhance their operations.
The Islamic State began encouraging the use of AI technologies among its operatives as early as 2023. Today, AI is widely used by the group’s media wings to mass-produce and manipulate propaganda content. From a single video or image, IS can generate multiple distorted versions, spreading disinformation faster and at far greater scale.
One major application has been multilingual translation. AI tools help them instantly convert propaganda into various languages, dramatically reducing the time and manpower previously required. According to a report by Tech Against Terrorism, AI is also being used to craft personalized messages aimed at recruitment, particularly through encrypted chat groups.
These AI-generated materials—including speeches, images, and immersive environments—are tailored to appeal to diverse audiences. The ability to customize messaging enables IS to expand its reach. In addition, AI is being used to recycle and repackage old propaganda, giving it new life and helping maintain a constant flow of extremist content.
In May 2025, the Islamic State significantly escalated its AI integration by launching AI-generated news bulletins. These bulletins featured synthetic presenters—avatars that mimic mainstream broadcasters—delivering the group’s propaganda. The broadcasts were produced using text-to-speech AI, which converts written scripts into lifelike synthetic narration.
These bulletins have been shared on encrypted platforms like Rocket.Chat, further insulating IS communications from detection and takedown efforts.
A United Nations Interregional Crime and Justice Research Institute (UNICRI) report acknowledged that while AI can be a force for good, it also presents serious risks when exploited by bad actors. The report urged the global community to treat these developments as a wake-up call and ensure such powerful tools do not fall into the wrong hands.
Alarmingly, IS has started using AI for virtual recruitment and operational planning, targeting potential lone-wolf attackers through AI-powered chatbots that simulate personal interaction. Security officials are especially concerned about the possibility of IS using AI not just for influence, but to plan and execute attacks.
There are even indications that the Islamic State is exploring AI-driven autonomous vehicles for use in attacks, and seeking to exploit weaknesses in traffic management systems. If successful, such tactics could enable the group to hack traffic signals or redirect vehicles, potentially causing mass casualties.
In response, counterterrorism agencies are ramping up efforts to use AI defensively. These include tools that can analyze live video feeds to detect threats in real time, as well as AI systems that can automatically flag and remove extremist content online. Such tools can also help identify and monitor at-risk individuals based on behavioral patterns and online activity.
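One common building block behind automated flagging of extremist content is hash-matching, where uploaded files are compared against a shared database of digests of previously identified material. The sketch below illustrates the idea only; the function names and the hash list are hypothetical, not any real platform's API or dataset.

```python
import hashlib

# Illustrative stand-in for a shared database of hashes of known
# extremist media. In practice such sets are populated from industry
# hash-sharing programs, not hard-coded.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-known-propaganda-bytes").hexdigest(),
}

def flag_if_known(content: bytes) -> bool:
    """Return True if the content's SHA-256 digest matches a known-bad entry."""
    digest = hashlib.sha256(content).hexdigest()
    return digest in KNOWN_BAD_HASHES

# A re-uploaded copy of known material matches; novel content does not.
print(flag_if_known(b"example-known-propaganda-bytes"))  # True
print(flag_if_known(b"some-benign-upload-bytes"))        # False
```

Exact hashing like this only catches byte-identical re-uploads; real systems layer on perceptual hashing and machine-learning classifiers to catch the kind of AI-distorted variants described above.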
Security experts emphasize the urgent need for collaboration among governments, tech companies, law enforcement, and civil society to prevent the misuse of generative AI by terrorist organizations.
With inputs from IANS