China may be using AI on social media to sway US voters, says Microsoft

/ 10:59 AM September 07, 2023

Voting booths are pictured inside the Dona Ana County Government Center during early voting for the upcoming midterm elections in Las Cruces, New Mexico, U.S., October 24, 2022. REUTERS/Paul Ratje/File photo

Microsoft researchers said on Thursday they found what they believe is a network of fake, Chinese-controlled social media accounts seeking to influence U.S. voters by using artificial intelligence.

A Chinese embassy spokesperson in Washington said that accusations of China using AI to create fake social media accounts were “full of prejudice and malicious speculation” and that China advocates for the safe use of AI.

In a new research report, Microsoft said the social media accounts were part of a suspected Chinese information operation. The campaign bore similarities to activity which the U.S. Department of Justice has attributed to “an elite group within (China’s) Ministry of Public Security,” Microsoft said.


The researchers did not specify which social media platforms were affected, but screenshots in their report showed posts from what appeared to be Facebook and Twitter, now known as X.

The report highlights a fraught social media environment as Americans prepare for the 2024 presidential election.

The U.S. government has accused Russia of meddling in the 2016 election with a covert social media campaign and has warned of subsequent efforts by China, Russia and Iran to influence voters.

The report provided limited examples of the recent activity and did not explain in detail how researchers attributed the posts to China.

A Microsoft spokesperson told Reuters that the company’s researchers used a “multifaceted attribution model,” which relies on “technical evidence, behavioral evidence and contextual evidence.”

The campaign began using generative artificial intelligence technology around March 2023 to create politically charged content in English and “mimic U.S. voters,” Microsoft said.

Generative AI can create images, text and other media from scratch.

The new content is much more “eye-catching than the awkward visuals used in previous campaigns by Chinese nation-state actors, which relied on digital drawings, stock photo collages, and other manual graphic designs,” the researchers wrote.

The paper cited an example of one AI-generated image, which Microsoft said came from a Chinese account, that depicts the Statue of Liberty holding an assault rifle with the caption: “Everything is being thrown away. THE GODDESS OF VIOLENCE.”

The Microsoft spokesperson said the identified accounts had attempted to appear American by listing their public location as within the United States, posting American political slogans, and sharing hashtags relating to domestic political issues.
