AI company OpenAI has revealed that it identified and disrupted several covert online influence campaigns that used its technology to manipulate public opinion on a global scale.
Misuse of AI for influence tactics
Bad actors used AI to generate comments on articles, create names and biographies for fake social media accounts, and translate and proofread texts. They targeted a wide range of topics, including politics, the war in Ukraine, LGBTQ+ rights and elections. These deceptive campaigns illustrate the scale of the challenge AI technologies pose in terms of misinformation and the manipulation of public opinion.
Disruption of five influence campaigns
OpenAI says it has disrupted five covert foreign influence operations in the last three months. These include groups such as “Spamouflage”, which used OpenAI's tools for search and content generation, and “Doppelganger”, which generated comments in multiple languages to manipulate public opinion. OpenAI has reaffirmed its commitment to monitoring for and preventing the misuse of its technologies.
Implications for the future
This news highlights the potential dangers of AI being misused to manipulate public opinion on a massive scale, and it carries significant implications for how AI companies manage and secure their technology going forward. OpenAI said it would continue to monitor for and disrupt this type of abusive activity.
Conclusion
OpenAI has shut down global influence campaigns that abused its AI technology to manipulate public opinion. The episode underscores the risks of AI misuse and the responsibility companies bear for managing this technology in the future. OpenAI said it would continue to monitor for and disrupt such abusive activity.