
NEW YORK — OpenAI removed a network of Iranian accounts that used its ChatGPT chatbot to try to wage a foreign influence campaign targeting the U.S. presidential election by generating longform articles and social media comments, the company said Friday.

The accounts created content that appeared to be from liberal- and conservative-leaning users, including posts suggesting former President Donald Trump was being censored on social media and was prepared to declare himself king of the U.S. Another described Vice President Kamala Harris’ selection of Tim Walz as her running mate as a “calculated choice for unity.”

The influence campaign, which included posts about Israel’s war in Gaza, the Olympic Games in Paris, and fashion and beauty topics, doesn’t appear to have received significant audience engagement, said Ben Nimmo, an investigator on OpenAI’s Intelligence and Investigations team, in a news briefing Friday. “The operation tried to play both sides, but it didn’t look like it got engagement from either,” he said.

The Iranian operation is the latest suspicious social media effort that used AI only to fail to gain much traction, a possible indication that foreign operatives are still figuring out how to capitalize on a new crop of artificial intelligence tools that can quickly produce convincing writing and images at little to no cost.

Microsoft Corp. in June said it had detected pro-Russian accounts trying to amplify a fabricated video showing violence at the Olympics. And Meta Platforms Inc. this year said it had removed hundreds of Facebook accounts associated with influence operations from Iran, China and Russia, some of which relied on AI tools to spread disinformation.

OpenAI on Friday didn’t specify the number of accounts it removed. The startup said it also identified a dozen accounts on X, formerly Twitter, and one on Instagram involved in the effort.

Instagram removed the account in question, which generally focused on Scotland and posted about food. X did not immediately respond to a request for comment.

The disclosure comes after suspected Iranian hackers compromised Trump’s political campaign, sparking a federal investigation into possible foreign meddling ahead of the U.S. elections in November. The U.S. intelligence community has consistently warned about foreign governments trying to shape Americans’ opinions. The Office of the Director of National Intelligence in July said that Iran, Russia and China were recruiting people in the U.S. to spread their propaganda.

OpenAI in May said that networks from Russia, China, Iran and Israel had tried using the company’s AI products to boost their propaganda efforts.

At the time, OpenAI said the networks it disrupted had used AI to generate text and images in a larger volume than human creators otherwise could have produced, helping the content appear more authentic. However, the campaigns still failed to generate significantly more engagement, according to the startup.
