OpenAI said on Friday that it thwarted an Iranian influence campaign that used ChatGPT to generate fake news stories and social posts aimed at Americans. The company said it identified and banned accounts generating content for five websites (in English and Spanish) pretending to be news outlets, spreading “polarizing messages” on issues like the US presidential campaign, LGBTQ+ rights and the war in Gaza. The operation was identified as “Storm-2035,” part of a series of influence campaigns Microsoft identified last week as “connected with the Iranian government.”

In addition to the news posts, the operation included “a dozen accounts on X and one on Instagram.” OpenAI said the op didn’t appear to have gained any meaningful traction. “The majority of social media posts that we identified received few or no likes, shares, or comments,” the company wrote.

In addition, OpenAI said that on the Brookings Institution’s Breakout Scale, which rates the severity of influence operations, the operation reached only Category 2 (on a scale of one to six). That means it showed “activity on multiple platforms, but no evidence that real people picked up or widely shared their content.” OpenAI described the operation as creating content for faux conservative and progressive news outlets, targeting opposing viewpoints.

Bloomberg said the content suggested Donald Trump was “being censored on social media and was prepared to declare himself king of the US.”