NEW YORK — OpenAI removed a network of Iranian accounts that used its ChatGPT chatbot to try to wage a foreign influence campaign targeting the U.S. presidential election by generating longform articles and social media comments, the company said Friday.

The accounts created content that appeared to be from liberal- and conservative-leaning users, including posts suggesting former President Donald Trump was being censored on social media and was prepared to declare himself king of the U.S. Another described Vice President Kamala Harris’ selection of Tim Walz as her running mate as a “calculated choice for unity.”

The influence campaign, which included posts about Israel’s war in Gaza, the Olympic Games in Paris, and fashion and beauty topics, doesn’t appear to have received significant audience engagement, Ben Nimmo, an investigator on OpenAI’s Intelligence and Investigations team, said in a news briefing Friday. “The operation tried to play both sides, but it didn’t look like it got engagement from either,” he said.

The Iranian operation is the latest suspicious social media effort to use AI only to fail to gain much traction, a possible indication that foreign operatives are still figuring out how to capitalize on a new crop of artificial intelligence tools that can quickly produce convincing writing and images at little to no cost.

Microsoft Corp. in June said it had detected pro-Russian accounts trying to amplify a fabricated video showing violence at the.