Not if it’s my daughter: the ‘deepfake’ porn user consumes it without a second thought, but would report it if the victim were someone close to him
She was not yet Italy’s prime minister, but Giorgia Meloni was already a public figure when, four years ago, a fake pornographic video was published with her face on another woman’s body. On July 2 she is due to testify in a lawsuit against those responsible: a 40-year-old man who created the images and his 73-year-old father, who provided the phone line used to publish them. They are demanding 100,000 euros as an “exemplary symbolic measure” that “contributes to the protection of women targeted by this type of crime,” according to lawyer Maria Giulia Marongiu. Deepfakes, hyper-realistic fake audiovisual materials, have doubled every year since the first complaint over non-consensual nudity was registered in 2017, and little has changed since then. An investigation by Home Security Heroes (HSH) confirms a pattern already identified: 98% of deepfakes are pornography, 99 out of every 100 victims are women, and almost all of them are well known.
The most radical change has been technological. Whereas creating a fake once required knowledge of computing and image editing, one in three of the tools now available can produce one in under 25 minutes and at zero cost. Google, which serves as an indicator because it is the predominant search engine, has removed 8 billion links according to its latest transparency report. Thousands of them are deepfake pages, concentrated on two portals, according to Harvard University’s Lumen database. The technology companies, compelled by new laws, are beginning to act.
The accessibility of the tools (60% online and 40% downloadable) combines with the motivations of abusers, who convince themselves they act only out of curiosity, attraction to famous people (as in the case of the singer Taylor Swift), or the visualization of a fantasy, according to HSH. This naive self-perception leads 74% of users (in a survey of 1,522 male participants) to feel no guilt.
But this supposed naivety is as false as the material consumed. “It is a problem of sexist violence,” Adam Dodge, founder of EndTAB, a non-profit organization dedicated to education in technology use, told MIT Technology Review. The EU directive on combating violence against women classifies these creations as aggression.
And the perception of this attack is so clear that, according to the HSH study, even the vast majority of deepfake users would, in a display of hypocrisy, report the content if the victim were someone close to them (73%) and would feel “shocked and outraged” (68%) by the violation of that person’s privacy.
The growth of non-consensual nudity has occurred despite laws that condemn these practices and protect victims against the supposed freedom of expression invoked by content creators. “In accordance with Article 18.1 of the Constitution, the rights to honor, to personal and family privacy, and to one’s own image are fundamental rights (…) Article 20.4 provides that respect for these rights constitutes a limit on the exercise of the freedoms of expression.” This is how Spain’s Organic Law 1/1982, which regulates the matter, begins.
“From a theoretical point of view, there is a possible framework of reference,” explains Ricard Martínez, director of the Chair of Privacy and Digital Transformation at the University of Valencia. In the United States, most complaints fall under the Digital Millennium Copyright Act (DMCA) of 1998.
“When you take the real image of a person, but modify it with any intention, there is an instrumental conduct that consists of treating their image without consent for a purpose that is not lawful,” explains Martínez. “Another thing,” he clarifies, “is a comedian who generates an image with a satirical spirit and in a clear context.”
But these regulations have proven insufficient, which is why the EU approved in November 2022 (in force since last May) the Digital Services and Digital Markets Acts, intended to “protect the fundamental rights of users and establish a level playing field for companies.” These rules require large platforms to collaborate in risk assessment and in the identification, notification, and removal of suspicious links.
“There are two important actors: the one who offers the tool, who will always say that the application was not designed to commit a crime, and the one who hosts the creation and acts as its amplifier. The law imposes more demanding collaboration duties on the latter,” Martínez adds.
Google acknowledges these new responsibilities and, in a brief written response to the rise in complaints, states: “We have policies for non-consensual deepfake pornography, so people can have this type of content featuring their image removed from search results. And we are actively strengthening additional safeguards to help those affected. We also have a takedown process that allows rights holders to protect their work on the Internet.”
Meta is on the same track. Nick Clegg, its president of global affairs, announced on February 6: “We apply ‘Imagined with AI’ labels to photorealistic images created with our tool, but we also want to be able to do so with content created with tools from other companies.” He was referring to Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, as they implement their plans to add metadata to images created by their tools.
The big technology companies are thus joining the legal crusade against deepfakes, reinforced by the recent approval of the European artificial intelligence law, which requires unequivocal labeling of content created with this technology. The United States government is moving in the same direction. “It can no longer be argued that the use of the system or its results falls under the exercise of freedom of expression and freedom of creation,” the University of Valencia professor says approvingly.
“The concern is shared, and we are beginning to see a confluence of interests between two different legal cultures. The message being sent to these companies is that not everything goes, that they cannot wash their hands and say, ‘Hey, I’m just a platform and I can’t be responsible for everything.’ Information society service providers have a decisive influence on the viralization of the content they display. They are not neutral operators or mere containers. They are part of the operation, part of the game,” concludes Ricard Martínez.