In April of this year, Mike Burgess, director general of the Australian Security Intelligence Organisation (ASIO), warned that artificial intelligence is likely to make radicalization easier and faster. This statement was not alarmist. It was grounded in growing global evidence that AI is being exploited by terrorist and violent extremist actors.
Today’s terrorism landscape is rapidly evolving. AI is becoming a powerful tool in that transformation. While AI promises benefits for education, productivity and innovation, it also presents serious threats that require urgent and coordinated responses.
AI has already started to reshape how extremist groups conduct information operations. Generative AI tools are being used to create persuasive propaganda, including deepfake images and videos that are difficult to distinguish from authentic media. This synthetic content is circulated across social media platforms to spread disinformation, glorify violence and influence public opinion.
A United Nations-supported group, Tech Against Terrorism, reported that extremist actors are using free AI tools to generate credible-looking visuals that evade the detection systems embedded in regulated platforms. These materials are often produced in two stages: first, a neutral image is created, then ideological messages are added, allowing the content to bypass automated filters.
A February 2024 study examined 286 pieces of AI-generated or AI-enhanced content distributed by pro-Islamic State (IS) accounts. These materials commonly featured IS flags, weapons and militant figures. Many of them escaped detection on social media platforms by subtly altering the visuals, such as by blurring symbols or covering them with digital stickers.
According to a policy paper on the weaponization of AI, such content creation techniques reflect a broader trend in which threat actors use low-cost, freely available AI systems to produce high-volume propaganda at scale. The paper also highlights how AI reduces the operational costs and technical thresholds that previously limited access to such capabilities.
AI’s role in recruitment is equally concerning. Terrorist groups can use AI to identify individuals who show signs of vulnerability to radical messaging. These may be people who engage frequently with violent content or media that portrays alienated and angry antiheroes. With the help of chatbots or fabricated online personas, AI can generate personalized messages and simulate human interaction to influence and radicalize these individuals.