Islamic State supporters use AI to boost their online support

Days after the deadly Islamic State (IS) attack on a Russian concert hall in March, a man in military uniform and helmet appeared in an online video celebrating the attack that killed more than 140 people.

“The Islamic State has dealt a severe blow to Russia with a bloody attack, the most violent in years,” the man said in Arabic, according to the SITE Intelligence Group, an organization that tracks and analyzes such online content.

But the man in the video, which the Thomson Reuters Foundation was unable to view for itself, was not real: he was created using artificial intelligence, according to SITE and other online researchers.

Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an IS supporter active in the group’s digital ecosystem.

This person combined statements, bulletins and data from IS’s official news channel to create the video using artificial intelligence, Borgonovo explained.

Borgonovo said that although ISIS has been using artificial intelligence for some time, the video is an “exception to the rule” because its production quality is high, even if its content is not as violent as other online posts.

“For an AI product, it’s pretty good. But in terms of the violence and the propaganda itself, it’s average,” he said, noting that the video shows how IS supporters and allies can increase the production of sympathetic content online.

Digital experts say groups like ISIS and far-right movements are increasingly using AI online and testing the limits of security controls on social media platforms.

According to a study published in January by the Combating Terrorism Center at West Point, AI could be used to generate and spread propaganda, recruit people using AI-powered chatbots, carry out attacks using drones or other autonomous vehicles, and launch cyberattacks.

“Many assessments of the risks of AI, and even specifically the risks of generative AI, only superficially consider this particular problem,” said Stephane Baele, professor of international relations at UCLouvain in Belgium.

“Large AI companies that seriously address the risks of their tools, sometimes publishing detailed reports on their use, pay little attention to extremist and terrorist uses.”

Regulation of artificial intelligence is still being developed around the world, and pioneers of the technology have said they are committed to ensuring it is safe and secure.

Technology giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability.

In a special report earlier this year, Rita Katz, founder and CEO of SITE Intelligence Group, wrote that a range of actors – from members of the militant group al-Qaeda to neo-Nazi networks – are capitalizing on the technology.

“It is hard to overstate what a gift AI is to terrorists and extremist communities, for whom the media is their lifeblood,” she wrote.

Chatbots and cartoons

At the height of its power in 2014, ISIS seized control of large parts of Syria and Iraq and established a reign of terror in the areas under its control. The media was a key tool in the group’s arsenal, and online recruitment had long been crucial to its operations.

Despite the collapse of the self-proclaimed caliphate in 2017, its followers and allies still preach their doctrine online and try to convince people to join them.

Last month, a security source told Reuters that France had identified a dozen ISIS-K leaders, based in countries near Afghanistan, who had a strong online presence and were trying to persuade young men in Europe who were interested in joining the group abroad to carry out attacks at home instead.

ISIS-K is a resurgent wing of ISIS, named after the historical region of Khorasan, which included parts of Iran, Afghanistan and Central Asia.

Analysts fear that AI could facilitate and automate the work of such online recruiters.

Daniel Siegel, an investigator at social media research firm Graphika, said his team had come across chatbots imitating dead or imprisoned IS fighters.

He told the Thomson Reuters Foundation that although it is unclear whether the bots originated from IS or its supporters, the threat they pose is nevertheless real.

“Now (ISIS supporters) can build these real relationships with bots that represent a possible future where a chatbot could encourage them to commit kinetic violence,” Siegel said.

Siegel interacted with some of these bots as part of his research and found that their responses were generic, but he said that could change as AI technology advances.

“I am also concerned about the way synthetic media will enable these groups to incorporate their content, which previously existed in silos, into our mainstream culture,” he added.

This is already happening: Graphika tracked videos of popular cartoon characters such as Rick and Morty and Peter Griffin singing Islamic State anthems on various platforms.

“This allows the group, its supporters or its partners to target specific audiences because they know that regular consumers of SpongeBob, Peter Griffin or Rick and Morty will be provided with this content via the algorithm,” Siegel said.

Exploiting prompts

In addition, there is a risk that IS supporters will use AI technologies to increase their knowledge of illegal activities.

For the study published in January, researchers at the Combating Terrorism Center at West Point attempted to bypass the safeguards of large language models (LLMs) and extract information that could be exploited by malicious actors.

They created prompts requesting information on a range of activities from attack planning to recruitment and tactical learning, and the LLMs generated responses that were relevant half the time.

In one example they called “alarming,” the researchers asked an LLM to help convince people to donate to ISIS.

“There, the model provided very specific guidelines for running a fundraising campaign and even offered specific wording and phrasing for use on social media,” the report said.

Joe Burton, professor of international security at Lancaster University, said companies would be acting irresponsibly if they rushed to release AI models as open-source tools.

He questioned the effectiveness of the LLMs’ safety protocols, adding that he was “not convinced” that regulators were able to enforce testing and review of these methods.

“The factor to consider here is how much we want to regulate and whether that will hamper innovation,” Burton said.

“In my view, markets should not override safety, and I think that is exactly what is happening now.”
