OpenAI disables ChatGPT accounts linked to Iranian disinformation machine – Firstpost

In addition to social media, disinformation actors used ChatGPT to create five websites posing as legitimate news outlets and representing both progressive and conservative viewpoints.
OpenAI recently took action against an Iranian state-backed influence operation, disabling several ChatGPT accounts that were used in a disinformation campaign aimed at swaying the upcoming US elections.

These accounts had used the AI tool to create and spread fake news articles and social media comments. The operation is the first OpenAI has identified that focused on the US election, raising concerns about the potential misuse of AI to disrupt the 2024 election process.

The urgency stems from nation-state adversaries’ growing interest in interfering in the upcoming US elections. Experts fear that tools like ChatGPT could significantly increase the speed and efficiency of disinformation creation and facilitate the spread of false narratives.

OpenAI’s investigation found that the disinformation activities were linked to a group called Storm-2035, which has a history of creating fake news websites and promoting them on social media to influence public opinion. The affected accounts not only generated content related to the US presidential election, but also covered other sensitive topics such as the conflict between Israel and Hamas and Israel’s participation in the Olympics.

The broader context of this disinformation campaign stems from recent findings from Microsoft, which had previously identified the same Iranian group in connection with spear-phishing attacks on US presidential campaigns. OpenAI discovered that the group had operated a number of new social media accounts specifically designed to spread this misleading content.

As part of its investigation, OpenAI identified and closed a dozen accounts on X (formerly known as Twitter) and one Instagram account. These accounts were part of a broader effort to spread false news and influence public discourse. In response, Meta, Instagram’s parent company, also deactivated the account in question, citing its connection to a previous Iranian campaign targeting users in Scotland. X has not yet commented on the situation, but OpenAI has confirmed that the affected social media accounts are no longer active.

In addition to social media, the disinformation actors created five websites posing as legitimate news outlets representing both progressive and conservative viewpoints. These sites published AI-generated articles, one of which speculated about Vice President Kamala Harris’s possible running mate and falsely characterized the choice as a calculated move toward unity.

Despite the sophisticated efforts of these disinformation campaigns, OpenAI found that most social media accounts sharing the AI-generated content failed to achieve significant engagement, highlighting the difference between simply posting something online and actually reaching and influencing a large audience.

The accounts were discovered using tools OpenAI has developed and improved since its last threat report in May. These tools played a crucial role in identifying the accounts following Microsoft’s earlier disclosures.

The far-reaching implications of this discovery underscore the ongoing threat posed by foreign influence operations, particularly in the run-up to the November election. While the full impact of these operations remains uncertain, continued vigilance and development of detection tools will be critical to countering such threats.

In a related development, Google has also issued an alert about Iranian threat actors targeting the US presidential election. This follows Microsoft’s previous findings and provides further evidence of these actors’ ongoing efforts to influence the electoral process. Google’s report identified a threat group called APT42 that has targeted various organizations associated with the US election through phishing attacks and social engineering tactics. These attacks included attempts to compromise the Gmail accounts of high-profile individuals associated with both the Trump and Biden campaigns.

APT42’s activities are believed to be linked to the Islamic Revolutionary Guard Corps (IRGC), and their campaigns extend beyond the United States to targets in Israel and other sectors such as the military, defense, and academia. While some of these attacks have been successful, efforts continue to be made to protect key personnel and prevent further attacks. The ongoing threat underscores the need for vigilance in the run-up to the election, with the possibility of increased activity by foreign influence operations remaining a major concern.
