The circulation of deepfake nude photos among Singapore Sports School (SSP) students in early November brought home the dangers of the malicious use of artificial intelligence (AI) as it intersects with issues of consent, sexuality, gender, and cyber awareness. Deepfakes are digitally altered media (images, video, or audio) that use AI to imitate someone’s appearance or voice.
It is not the first time that deepfake images and videos have grabbed headlines in Singapore. Indeed, in the immediate aftermath of the SSP case, news broke that five ministers were among more than 100 Singapore public servants who received extortion emails containing “compromising” deepfake images. Blackmailing people with sexually compromising material is also known as “sextortion.”
These incidents show that anyone, public figure or not, can be a target of deepfakes. They highlight the harmful spaces and behaviors that internet users must navigate daily, especially young people, much of whose lives play out online. The SSP case in particular, in which the deepfake nudes drew on images of real people, shines a light on the intersecting problems of consent, sexuality, gender, and (limited) cyber awareness among young people.
Gen AI and the Problem of Deepfakes
Generative AI (Gen AI) can be used to create deepfake images, video, and audio. The proliferation of easily accessible Gen AI tools has put this capability within reach not only of ordinary internet users but also of cybercriminals and terrorists.
The tools that create deepfakes are, of course, not used only for malicious purposes. Deepfakes can also provide humor and entertainment – think of face-swapping apps, the TikTok account dedicated to Tom Cruise deepfakes, or the Luke Skywalker deepfake in “The Mandalorian” that captured the attention of Star Wars fans. At the moment, however, it is close to impossible to ensure these tools are used solely for entertainment or other benign purposes.
Deepfakes have been leveraged to spread disinformation, including election-related falsehoods, to power cybercriminal activities, and to enable blackmail and sextortion. Deepfake disinformation surfaced in the recent Indian, Indonesian, and U.S. elections, for instance, prompting Singapore to pass a law in October banning deepfakes of candidates during elections.
Deepfakes can also be used in identity fraud. Singapore is tied with Cambodia for the second-largest increase in identity fraud in the Asia-Pacific (a whopping 240 percent), so how deepfakes are leveraged for such fraud warrants special attention.
There are also data-related concerns. Even when users generate deepfakes for innocuous purposes, they have to be mindful of how their data is stored and used by the companies that own the applications.
The amount of image and audio input required to generate synthetic media is already decreasing, and newer technologies will continue to make deepfakes more realistic and easier to access and create.
While some companies try to incorporate guardrails against the misuse of their products, others, especially smaller businesses and startups, may lack the wherewithal or the motivation to integrate such safety measures and follow best practices. Experts must keep researching to identify security gaps and to gauge the effectiveness of existing and upcoming countermeasures.
The Gender Angle Should Not Be Missed
The SSP students’ generation and circulation of deepfake nude images of their female classmates also put the dangers faced by women and girls in the spotlight. Deepfake nudes are often linked to the broader issue of non-consensual sharing of intimate images, which is itself part of the larger problem of image-based sexual abuse (IBSA).
Women can be at greater risk of being targeted by some – if not all – forms of IBSA. Studies suggest, for instance, that the majority of image-based sexual abuse survivors are women, and that men are more likely than women to perpetrate such violence. Those targeted with deepfake nude images (popularly just called “nudes,” a term that further legitimizes Gen AI content) can experience impacts similar to those of sexual assault survivors, with ramifications for a person’s well-being as well as their professional and personal reputation. Sexually explicit deepfakes can further the objectification of women, cause psychological trauma, taint reputations, and bring the risk of physical harm. Indeed, according to SG Her Empowerment (SHE), more than half of the youth participants in one of its studies saw “sexualization/objectification of women” as “a negative effect” of Gen AI.
Various cases of IBSA targeting women have come to light in Singapore over the years, including real or manipulated faces superimposed on topless bodies or compromising poses, video voyeurism, and upskirt videos. Among them, the recent case of a man who secretly filmed his wife’s niece and edited her face onto pornographic videos points to the ease with which images and videos – usually of women and girls – can be filmed and edited without consent.
Fighting Deepfake Perils
Tackling deepfakes is a complex undertaking. There needs to be a wider discussion of the responsibilities of the developers of the underlying AI and of those who, by providing code or deepfake-creation services to others, make the technology widely accessible. Maria Pawelec rightly noted that actors such as open-source developers and professionals working at large technology companies, specialized startups, and deepfake apps need to be part of the conversation around AI ethics and governance to curb the dangers of deepfakes.
New technologies, including AI itself, are also used to detect harmful AI-generated material. Multiple technology-based detection systems and forensic methods can analyze digital content for the inconsistencies typically associated with deepfakes. With time and greater adoption, AI-based detection will become more sophisticated and play a greater role in combating harmful deepfakes.
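To illustrate what such forensic analysis can look like in practice, below is a minimal sketch of one classical technique, error level analysis (ELA), written in Python with the Pillow imaging library. ELA re-saves a JPEG at a known quality and inspects where the image recompresses inconsistently, which can flag pasted or synthesized regions. The file names here are hypothetical, and this is an illustration of the principle only; modern deepfake detectors rely on trained models far more sophisticated than this sketch.

```python
# A minimal sketch of error level analysis (ELA), one classical forensic check.
# Regions synthesized or pasted into a JPEG often recompress differently from
# the rest of the image; after re-saving, those regions show large differences.
from PIL import Image, ImageChops
import io

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return a brightened difference map highlighting recompression anomalies."""
    original = Image.open(path).convert("RGB")
    # Re-save the image at a known JPEG quality...
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # ...then take the pixel-wise absolute difference between the two versions.
    diff = ImageChops.difference(original, resaved)
    # Rescale the (usually faint) differences so they are visible on inspection.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = max(1, 255 // max_diff)
    return diff.point(lambda p: min(255, p * scale))

if __name__ == "__main__":
    # "suspect_photo.jpg" is a placeholder path, not a real file from this story.
    error_level_analysis("suspect_photo.jpg").save("ela_map.png")
```

Bright, localized patches in the output map suggest a region with a different compression history from its surroundings, which is a cue for closer scrutiny rather than proof of manipulation.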
Policies and guidance are fast emerging in this area, such as Singapore’s Model AI Governance Framework for Generative AI. However, where enforceable measures are absent, governments have to put protective measures in place to ensure companies follow best practices, and retain avenues to step in when necessary.
A combination of cybersecurity measures, continued policy and regulatory efforts, and public awareness will form the cornerstone of the battle against deepfake-related harms. Fighting harmful deepfakes requires a whole-of-society approach: while individuals should not have to shoulder the heavy burden of fighting malicious deepfakes alone, they need to be armed with the skills to navigate them and to know how to act when targeted.
Communities With the Power to Combat Deepfakes
Psychological scars from deepfake-related IBSA can linger, and survivors need to be able to access support and seek redress easily. Legal options exist for survivors in Singapore, as acts involving doctored photos constitute offenses under the Penal Code. However, where survivors do not know the perpetrator, or where such acts occur within large online communities and come to light late or across geographical boundaries, survivors can face several barriers to reporting, investigation, and legal redress. There is also the emotional toll these steps exact, making it all the more important for law enforcement agencies and officers to be trained to handle such matters appropriately and with sensitivity.
In the SSP case, several perpetrators faced disciplinary action, including caning; suspension from school, training, and boarding; and being barred from sports trips. Parents of the survivors lodged police reports, and some also asked the school to impose harsher punishments, such as expulsion, noting that the girls’ sense of safety and security had been compromised.
In such a case, where both survivors and perpetrators are minors who know each other, some uncomfortable questions need to be asked about intent. Awareness of consent and of healthy sexual behavior online, as well as of the influence of toxic masculinity on the internet, especially in spaces known as the “manosphere,” is an important dimension of this question.
Awareness plays a crucial role in shaping and guiding online behavior, and internet users, young people especially, need regular cyber awareness education and training. This includes awareness of appropriate online expression, of harmful online behaviors and activities, and of the penalties under existing legislation. It also includes guiding boys and young men away from the dark spaces of the manosphere. The availability of reliable sexuality education resources can go a long way toward preventing the creation and circulation of sexually explicit deepfake material. In this regard, the Ministry of Education’s efforts to increase knowledge of sexuality and gender are a crucial endeavor that can be extended beyond schools.
For government officials and others in public-facing roles, awareness of how to respond to deepfakes includes reporting extortion emails and ignoring instructions to initiate contact or make payments, thereby avoiding monetary loss.
While the threats posed by deepfakes will continue to rise, cyber awareness, in tandem with cyber hygiene best practices such as strong password habits, not sharing unverified information, and mindful image sharing, can help future-proof us against them to some extent. After all, whether offline or online, prevention is better than cure.