Features | Politics | East Asia

AI and Elections: Lessons From South Korea

The country was able to limit the impact of AI-generated deepfakes during its recent National Assembly elections. What can other countries learn from its experience?

A woman exits a polling booth to cast her ballot for the parliamentary election at a local polling station in Seoul, South Korea, April 10, 2024.

Credit: AP Photo/Lee Jin-man

On April 10, South Korea held its 22nd National Assembly elections, which resulted in the Democratic Party (DP) winning 175 out of 300 seats and the ruling People Power Party (PPP) winning 108. The voter turnout of 67 percent was the highest in 32 years.

Various concerns surfaced prior to the election, from traditional worries like fraudulent ballots, spy cameras at polling stations, and early voting fraud to artificial intelligence (AI)-enabled election interference and deepfakes. The 2022 presidential campaigns had embraced AI – the PPP's "AI Yoon Suk-yeol," DP candidate Lee Jae-myung's AI chatbot, and former Deputy Prime Minister Kim Dong-yeon's AI spokesperson AiDY and avatar WinDY – but AI did not have the same presence in the 2024 campaigns. Although candidates made little use of the technology, AI-generated deepfakes and disinformation still appeared.

Compounding these concerns, South Korea's news, social media, and information landscapes were already inundated with fake news. A former staff member of the National Assembly Defense Committee called the country's most popular messaging app, KakaoTalk, its "largest source of misinformation" because of the volume of chirashi – fake news and conspiracy theories – forwarded on the platform. A 2018 Korea Press Foundation survey found that 69.2 percent of respondents had encountered manipulated or false information on social media.

Additionally, the Korea Press Foundation's "Digital News Report 2023" found that 53 percent of South Koreans use YouTube as a primary news source, reflecting low trust in mainstream news outlets. In 2022, YTN, an established South Korean news network, published a YouTube video accusing the YouTube channel Gong Byeong-ho TV of spreading misinformation about the 2022 presidential election. The comments under the video, however, tended to side with content creator Gong Byeong-ho.

Finally, many South Korean fake news sites exist, including some that imitate legitimate outlets to confuse audiences. The fake news site No Cut Ilbe, for instance, infringes on the name and trademarks of the legitimate No Cut News.

Against this backdrop, the 2024 National Assembly elections offer takeaways for combating fake news and AI-enabled false information in future elections – whether in South Korea or abroad.

Previous Cases of False Information

Even before AI became a concern, false information was being used in South Korean domestic politics. During former President Park Geun-hye's impeachment in 2017, both major parties spread misinformation on social media. Anti-impeachment politicians, for example, falsely claimed that then-U.S. President Donald Trump opposed the impeachment and that North Korea was behind the move.

The same year, during the controversy over the deployment of the U.S. Terminal High Altitude Area Defense (THAAD) missile defense system in South Korea, rumors spread that THAAD emitted cancer-causing electromagnetic waves. Anti-THAAD DP members attended a candlelight vigil and sang songs emphasizing the threats posed by the system, and anti-THAAD protests continued into 2020.

In 2018, the Druking-gate scandal revealed that Kim Kyoung-soo, former governor of South Gyeongsang province and an ally of former President Moon Jae-in, collaborated with “power blogger” Druking to manipulate search results and online perceptions of Moon ahead of the 2017 presidential election.

In addition to domestic political rivals, foreign adversaries – especially North Korea and China – have a vested interest in spreading disinformation. The Korea Institute of Liberal Democracy reported in 2018 that North Korea employs 7,000 agents in propaganda and information warfare against South Korea.

In 2022, a North Korean defector – a colonel-level deputy at the Reconnaissance General Bureau until 2014 – affirmed that North Korea's cyber units have been shaping South Korean public opinion since the early 2000s. He said he took part in an influence operation during the 2012 South Korean presidential election, writing negative comments about candidates Park Geun-hye and Ahn Cheol-soo. Notably, the defector also claimed that a North Korean operation was involved in Druking-gate.

China, too, conducts information operations and disinformation campaigns to undermine South Korean democracy and hinder U.S. strategic interests in Asia. Ahead of the 2020 National Assembly elections, a Korean Chinese whistleblower confessed to spreading disinformation to promote pro-China sentiment. During the 2022 presidential race, a professor at Gachon University identified over 50 accounts promoting China's agenda through comments on Naver, South Korea's largest web portal. Last year, the National Intelligence Service (NIS) also found 38 Korean-language news sites, operated by two Chinese PR firms, promoting pro-China and anti-U.S. propaganda.

Information Manipulation in the 2024 National Assembly Elections

Similar false information circulated around the 2024 National Assembly elections, but this time involving AI. The National Election Commission announced that it had detected 129 election-related deepfakes between January 29 and February 16.

In December 2023, a 46-second deepfake video of President Yoon supposedly admitting to corruption was uploaded. The video went viral in February 2024, prompting the Korea Communications Standards Commission (KCSC) to vote unanimously to block it. Two days before the election, the police named a member of Cho Kuk's new political party as the culprit behind the deepfake; Cho countered that the accusation was a police attempt to interfere with the election.

In early March, a deepfake video of Yoon seemingly criticizing DP leader Lee Jae-myung circulated on TikTok, along with one of Han Dong-hoon, the former interim leader of the ruling PPP, calling the DP "gangsters" over the first lady's recent Dior bag scandal.

Deepfakes spread on social media platforms thanks to the accessibility of AI tools. Yet despite Microsoft's warning about Chinese attempts to influence the U.S., South Korean, and Indian elections in 2024 with AI-generated content, and the NIS's warning that North Korean hackers would use such content to sway the vote, no such foreign involvement was observed.

Efforts Against Election-Related False Information

The limited impact of AI-enabled false information may be due to private and public sector efforts to protect the election's integrity.

In May 2022, before the provincial elections, a deepfake video of Yoon seemingly endorsing a local candidate circulated, prompting a 2023 revision of the Public Official Election Act. The revision banned election-related deepfake videos, photos, and audio during the 90 days before election day; violators face up to seven years in prison or a fine of up to $37,500. Using ChatGPT to generate campaign slogans, song lyrics, or speeches is still permitted.

Following this revision, the private sector took mitigation steps as well. Naver announced in January that its AI chatbot service would not generate inappropriate content and that it would monitor its platform for material violating the revised election act. In March, Kakao launched its "Karlo AI Profile," which watermarks AI-generated content. Deepbrain AI, a Korean generative AI company, also announced a collaboration with South Korea's National Police Agency to build a customized detection solution for tracking down deepfakes and responding to election crimes.

Additional private sector initiatives followed, including a joint declaration to prevent the malicious use of election-related deepfakes – signed by Naver, Kakao, and SK Communications – and the global AI Elections Accord, which LG joined.

Takeaways for Combating False Information

While South Korea still lacks a comprehensive AI law, its recent election, its efforts against AI-enabled false information, and its information landscape offer a few notable takeaways.

First, the public and private sectors must collaborate to combat deepfakes. As South Korea's experience shows, the public sector should pass regulations banning malicious deepfakes, while the private sector builds the technological solutions and takes the mitigation steps needed to counter the threat. Public-private cooperation in creating and enforcing AI-related regulations would increase their effectiveness.

Second, crackdowns on AI-enabled false information must be approached with caution, because in South Korea such efforts have been used for political gain. After the 2014 Sewol ferry disaster, for example, then-President Park Geun-hye's government cracked down on Kakao, leading many users to switch to Telegram.

Current President Yoon has taken similar, if not more concerning, actions. After the September 2022 MBC hot mic incident, in which the Korean news channel aired an audio clip of the president swearing about U.S. lawmakers, Yoon accused MBC of fake news reporting and banned its reporters from the presidential plane.

In September of last year, prosecutors raided the homes and offices of Newstapa, the online news outlet of the Korea Center for Investigative Journalism, and confiscated journalists' cellphones and files. The prosecution accused a reporter of taking a $122,000 bribe for a March 2022 article reporting Yoon's decision not to indict Cho Yoo-hyung in a 2011 banking and real estate scandal. While Newstapa was not the only outlet to cover the scandal, it was the one that acquired a related audio file.

After the Newstapa raid, the KCSC – typically in charge of blocking gambling, pornography, and North Korean propaganda sites – promised to screen online media and eliminate fake news. Three other channels that covered the story were fined, and officials also raided cable channel JTBC. In December, officials even raided the home of Newstapa’s CEO.

Such media raids have been virtually unheard of in South Korea since its democratization. The rise of deepfakes and AI-generated false information will only amplify politicians' anti-fake news rhetoric, which must be scrutinized for ulterior political motives.