[Guest essay] Lessons from my time scrubbing sexually exploitative material from the web

Posted on: 2024-09-05 17:11 KST Modified on: 2024-09-05 17:31 KST
The biggest challenge when fighting digital sex crimes is their overwhelming breadth and scale
Members of the Seoul Women’s Association and branches of its feminist university student groups hold an urgent press conference near Gangnam Station in Seoul on Aug. 29, 2024, to condemn deepfake sex crimes. (Yonhap)


By Bang Hye-rin, manager of the national security monitoring team at the Center for Military Human Rights Korea

Before focusing on my master’s thesis, I worked for a brief period at an organization supporting victims of cyber sex crimes. My job was to run each and every case of illegally filmed footage assigned to me through a search engine, and if I found that the images had been uploaded to certain sites, I would beg the operators of those sites to take the images down.

The process of searching out and subsequently scrubbing illegally filmed footage probably isn’t what most people would expect. There is no advanced AI that categorizes footage based on the faces of victims, lists the sites the footage has been uploaded to, and sends automated requests for the deletion of such posts. Instead, employees and volunteers break each reported video down frame by frame, run each fragmented frame through search engines, log every site the footage is posted to in a spreadsheet, collate the sites for verification, find each site operator’s email or contact channel, write a message that reads, “This video is illegally filmed footage, so we ask that you remove it from your site. If it is not removed, the South Korean government may take action against the site pursuant to relevant laws,” translate it from Korean into English, and then send it to the operator.
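None of this was automated, as the author stresses. Purely as an illustration of how repetitive the record-keeping side of the protocol is, here is a minimal Python sketch of the spreadsheet-logging and boilerplate-drafting steps. The function name, file layout, and sample data are assumptions made for illustration, not the organization’s actual tooling; only the takedown wording is quoted from the article.

```python
import csv
from pathlib import Path

# Takedown wording quoted from the article; everything else in this sketch
# is a hypothetical illustration of the manual workflow described above.
TAKEDOWN_TEMPLATE = (
    "This video is illegally filmed footage, so we ask that you remove it "
    "from your site. If it is not removed, the South Korean government may "
    "take action against the site pursuant to relevant laws."
)

def log_findings(case_id: str, frame_hits: dict[str, list[str]], out_dir: Path) -> None:
    """Log one spreadsheet row per (frame, hosting URL) pair and draft one
    takedown message per site, mirroring the manual steps described above."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with open(out_dir / f"{case_id}_hits.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "url"])
        for frame, urls in frame_hits.items():
            for url in urls:
                writer.writerow([frame, url])
    # One draft message per unique hosting site; translating it and finding
    # the operator's contact channel remain manual steps in the author's account.
    sites = {url.split("/")[2] for urls in frame_hits.values() for url in urls}
    for site in sites:
        (out_dir / f"takedown_{site}.txt").write_text(TAKEDOWN_TEMPLATE, encoding="utf-8")

# Example with made-up data: one frame found reposted on one hypothetical site.
log_findings(
    "case-0001",
    {"frame_0042.png": ["https://example-host.net/post/123"]},
    Path("takedown_logs"),
)
```

Multiply that loop across the 100 or 200 cases a day the author describes, and the scale of the grunt work becomes clear.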

This happened daily. Sometimes, you would have to go through 100 or 200 cases a day. I’ve assigned a nickname to this procedure: “The Illegally Filmed Footage Removal Protocol that Seems Absurdly Advanced, But Basically Follows the Same Grueling Procedures as a Sweatshop.”

The biggest challenge when fighting digital sex crimes is their overwhelming breadth and scale. It was impossible for us, as we sat in front of our computer screens, to estimate what far-flung depths of the internet any of the footage had reached.

Even if we painstakingly tracked down 20 posts using a particular photo and got every one of them erased, the next day we would see the photo spring up in 40 different posts. Websites distributing illegally filmed footage maintain multiple backup sites under different domains to make sure they stay up and running even if the main site is taken down.

Many of the sites in question have servers based abroad, so even if you sent a beseeching email citing South Korean laws, they could simply pretend that they never saw it.

The terror of that vast scope goes beyond distribution and duplication. While deepfake pornography has been in the news recently, photoshopping a real person’s face onto a pornographic image has been a crime for some time now.

The difference is that producing such images used to be time-consuming and technically challenging, requiring the “manual” manipulation of images. But the new tool of AI has made it easy for anyone to instantaneously create deepfakes in a dizzying variety of formats.

Another frightening aspect is that anyone with access to photographs on social media can choose victims at random. And since the images are “fakes” created by AI, the perpetrator can deliberately duck the guilt of harming a real person.

Deepfake creation has spread so rapidly because it gives perpetrators a perverse sense of power over their victims — the ability to create dozens of humiliating images of someone from photographs scraped off Instagram — while also enabling them to ignore victims’ suffering because the images aren’t technically “real.” In short, deepfakes represent a game-changing acceleration of the production cycle of sexually exploitative media.

Those images spread far too fast for the handful of employees at nonprofits to keep up with. Facing such a vast challenge, permanent employees began to drift away, and their positions were once again filled by people on short-term contracts.

Lee Jun-seok, a lawmaker with the Reform Party, said during a meeting of the National Assembly’s Science, ICT, Broadcasting and Communications Committee that the 220,000 members of a deepfake channel on Telegram were an “overblown threat,” estimating that, given the percentage of Korean users on Telegram, only about 726 of the channel members are actually Korean.

But what does it matter whether there are 220,000 Koreans on the channel or just 726?

Let’s suppose there aren’t even 726, but just 10 people in the group. They could still produce 220,000 deepfakes if they set their minds to it. Those images would then be copied and circulated beyond their point of origin and around the world, perhaps remaining permanently in some dark corners of the internet without ever being deleted.

That’s the nature of sex crimes in the digital age.

So assuming that the criminal potential of this technology remains the same regardless of whether the channel has 220,000 members, 726 members or even just 10, I can’t help wondering what Lee thinks would be an acceptable number of deepfake purveyors that would not constitute an “overblown threat.”

Please direct questions or comments to [english@hani.co.kr]
