OpenAI said it disrupted five covert influence operations, including campaigns originating in China and Russia, that attempted to use its artificial intelligence services to manipulate public opinion amid elections.
The threat actors used AI models to generate short comments and longer articles in multiple languages, fabricate names and biographies for social media accounts, conduct open-source research, debug code, and translate and proofread texts, the company said.
The operations do not appear to have had much impact, either on audience engagement or in spreading manipulative messages, earning a rating of two on the Brookings Breakout Scale, which measures the impact of influence operations. On that low-to-high scale, which tops out at six, a score of two indicates that the manipulative content appeared on several platforms but did not have a breakout impact on the audience.