Islamic State supporters use AI to boost their online support

Beirut: Days after the deadly Islamic State attack on a Russian concert hall in March, a man in military uniform and helmet appeared in an online video celebrating the attack that killed more than 140 people.

“The Islamic State has dealt a severe blow to Russia with a bloody attack, the most violent in years,” the man said in Arabic, according to the SITE Intelligence Group, an organization that tracks and analyzes such online content.

But the man in the video, which the Thomson Reuters Foundation was unable to view independently, was not real – he was created using artificial intelligence, according to SITE and other online researchers.

Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an IS supporter active in the group’s digital ecosystem.

This person combined statements, bulletins and data from the Islamic State’s official news channel to create the video using artificial intelligence, Borgonovo said.

Although the Islamic State has been using artificial intelligence for some time, Borgonovo said the video is an “exception to the rule” because the production quality is high, even if the content is not as violent as other online posts.

“For an AI product, it’s pretty good. But in terms of the violence and the propaganda itself, it’s average,” he said, noting that the video shows how IS supporters and allies can increase the production of sympathetic content online.

Digital experts say groups like ISIS and far-right movements are increasingly using AI online and testing the limits of security controls on social media platforms.

According to a study published in January by the Combating Terrorism Center at West Point, AI could be used to generate and spread propaganda, recruit people using AI-powered chatbots, carry out attacks using drones or other autonomous vehicles, and launch cyberattacks.

“Many assessments of the risks of AI, and even specifically the risks of generative AI, only superficially consider this particular problem,” said Stephane Baele, professor of international relations at the University of Louvain in Belgium.

“Large AI companies that seriously address the risks of their tools, sometimes publishing detailed reports outlining those risks, pay little attention to extremist and terrorist uses.”

Regulations for artificial intelligence are still being developed around the world, and pioneers of the technology have said they are committed to ensuring it is safe and secure.

Technology giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In a special report earlier this year, Rita Katz, founder and CEO of SITE Intelligence Group, wrote that a range of actors – from members of the militant group al-Qaeda to neo-Nazi networks – are capitalizing on the technology.

“It is hard to overstate what a gift AI is to terrorists and extremist communities, for whom the media is their lifeblood,” she wrote.

Chatbots and cartoons

At the height of its power in 2014, the Islamic State gained control over large parts of Syria and Iraq and established a reign of terror in the areas under its control.

The media has played an important role in the group’s arsenal, and online recruitment has long been crucial to its operations.

Despite the collapse of the self-proclaimed caliphate in 2017, its followers and allies still preach their doctrine online and try to convince people to join them.

Last month, a security source told Reuters that France had identified a dozen ISIS-K leaders based in countries around Afghanistan who had a strong online presence and were trying to persuade young men in European countries who were interested in joining the group abroad to instead carry out attacks at home.

ISIS-K is a resurgent wing of the Islamic State, named after the historical region of Khorasan, which included parts of Iran, Afghanistan and Central Asia.

Analysts fear that AI could facilitate and automate the work of such online recruiters.

Daniel Siegel, an investigator at social media research firm Graphika, said his team had come across chatbots imitating dead or imprisoned IS fighters.

He told the Thomson Reuters Foundation that although it is unclear whether the bots originated from the Islamic State or its supporters, the threat they pose is nevertheless real.

“Now (ISIS supporters) can build these real relationships with bots that represent a possible future where a chatbot could encourage them to commit kinetic violence,” Siegel said.

Siegel interacted with some of these bots as part of his research and found that their responses were generic, but he said that could change as AI technology advances.

“I am also concerned about the way synthetic media will enable these groups to incorporate their content, which previously existed in silos, into our mainstream culture,” he added.

This is already happening: Graphika tracked videos of popular cartoon characters such as Rick and Morty and Peter Griffin singing Islamic State anthems on various platforms.
