‘Algorithms and Terrorism’: How Artificial Intelligence Could Supercharge Terrorist Activity

Published: September 3, 2024
A visitor watches an AI (Artificial Intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona. (Image: JOSEP LAGO/AFP via Getty Images)

According to a recently released report by the United Nations Interregional Crime and Justice Research Institute entitled “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes,” experts fear that terrorist activity could be supercharged if bad actors were to use artificial intelligence with malicious intent.

“The reality is that AI can be extremely dangerous if used with malicious intent,” Antonia Marie De Meo, director of the institute, wrote in the report. 

“With a proven track record in the world of cybercrime, it is a powerful tool that could conceivably be employed to further facilitate terrorism and violent extremism conducive to terrorism,” she added.

De Meo says that terrorists could exploit anything from self-driving cars used to deliver bombs to AI-augmented cyberattacks that would be far more destructive.

She also fears that terrorist organizations could use AI to find easier paths to spread hate speech, incite violence, or recruit new members. 

Her report concludes that law enforcement agencies must strive to stay ahead of the technology in order to counter the threat. 

Anticipating its use

The report says that law enforcement faces a tall order: anticipating how terrorists might use the technology in ways no one has considered before, and then figuring out how to stop them.

The report echoes a collaborative study between NATO COE-DAT and the U.S. Army War College Strategic Studies Institute, “Emerging Technologies and Terrorism: An American Perspective,” which argued that terrorist groups are already exploiting AI to recruit and carry out attacks. 

In the study’s foreword, the authors wrote, “The line between reality and fiction blurs in the age of rapid technological evolution, urging governments, industries, and academia to unite in crafting frameworks and regulations.”

The study provides examples of how terrorist organizations are already using the technology to their advantage, including how bad actors use OpenAI’s ChatGPT to “improve phishing emails, plant malware in open-coding libraries, spread disinformation and create online propaganda.”

“Cybercriminals and terrorists have quickly become adept at using such platforms and large language models in general to create deepfakes or chatbots hosted on the dark web to obtain sensitive personal and financial information or to plan terror attacks or recruit followers,” the authors wrote.

As AI models become more sophisticated, the authors believe their malicious use will only increase. They argue that how these models work, specifically how “sensitive conversations and internet searches are stored,” will require more transparency and controls.

Terrorists ‘jailbreaking’ AI models

According to research published earlier this year by West Point’s Combating Terrorism Center, terrorists have moved beyond improving their current tactics to “jailbreaking” AI platforms.

“Specifically, the authors investigated the potential implications of commands that can be input into these systems that effectively ‘jailbreak’ the model, allowing it to remove many of its standards and policies that prevent the base model from providing extremist, illegal, or unethical content,” the authors wrote. 

The authors probed five different AI platforms to see how they could be exploited.

They found that Google’s Bard was the most resilient to jailbreaking, followed by ChatGPT models.

The study concluded that jailbreak guardrails need to be constantly reviewed and that “increased cooperation between private and public sectors” will be required to keep these guardrails intact and up-to-date.