
“A sexual object or a baby-making machine”: artificial intelligence reaffirms stereotypes against women | Technology

Content on the Internet contains gender biases, images are even more sexist than texts, and artificial intelligence reproduces and intensifies these stereotypes. Many specialists had been denouncing this, and now a study carried out by UNESCO confirms it: language models, such as the one behind ChatGPT, replicate gender and racial prejudice as well as homophobia. The report goes beyond conversational chatbots, warning about the implications of artificial intelligence in everyday life. As the adoption of AI for decision-making spreads across all industries and conditions access to jobs, credit or insurance, it highlights the challenges that women and minorities will face if these biases are not addressed and adequately mitigated.

Language models learn from online information, which contains biases, so they tend to reproduce those biases in their responses in chatbots and other applications. A typical case is the assignment of gender to professions: these models perpetuate stereotypes, such as associating men with science and engineering and women with nursing and domestic work, even when no gender is specified.
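This kind of association can be probed directly with an openly available model. The following is a minimal sketch, not the UNESCO study's methodology: it asks GPT-2 (via the Hugging Face transformers library) to complete profession-based prompts and counts gendered pronouns in the completions; the prompts and the counting heuristic are illustrative assumptions.

# Minimal sketch: probing GPT-2 for gender-profession associations.
# Prompts and pronoun-counting heuristic are illustrative, not the study's method.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "The engineer said that",
    "The nurse said that",
]

for prompt in prompts:
    # Sample 20 short continuations for each profession prompt
    completions = generator(prompt, max_new_tokens=15, num_return_sequences=20,
                            do_sample=True, pad_token_id=50256)
    texts = [c["generated_text"].lower() for c in completions]
    he = sum((" he " in t) or (" his " in t) for t in texts)
    she = sum((" she " in t) or (" her " in t) for t in texts)
    print(f"{prompt!r}: 'he/his' in {he}/20 completions, 'she/her' in {she}/20")

In informal probes of this kind, the "engineer" prompt typically draws far more masculine pronouns than the "nurse" prompt, which is the pattern the UNESCO report documents systematically.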

This is exactly what the UNESCO study, released in early March, demonstrates. It analyzed OpenAI’s GPT-2 and GPT-3.5 models (the latter the basis of the free version of ChatGPT), as well as Llama 2, from its rival Meta. The report reveals that women were associated with domestic roles four times more often than men and were frequently linked to words such as home, family and children, while masculine nouns were linked to business, executive, salary and career.

In addition to showing marked discrimination against women, the study highlighted that AI-generated content about individuals from less represented cultures was less diverse and more prone to stereotypes. Leonora Verdadero, a specialist in digital policy and digital transformation at UNESCO, explains: “When the model was asked to complete sentences about a British man and a British woman, compared with a Zulu man and a Zulu woman, different results were found. For women in general, the results were less diverse and used stereotypical words about their professions.” The British woman was frequently associated with roles such as prostitute, model or waitress, while the Zulu woman was more often associated with domestic work or caring tasks.

In the case of Zulu men, associations with gardener, guard or teacher showed a little more variety, but were still stereotyped. British men, on the other hand, were associated with a greater diversity of roles, such as director, banker, carer or teacher. “Definitely, for the global south the focus was on a lot of the domestic roles, and for the global north there was more diversity in the content. These results reflect that this internal bias exists,” explains Verdadero. The reason for this disparity, according to the report, is the “relative underrepresentation of local groups, both in the historical texts and in the digital media on which the models were trained.”

The study revealed a significant difference between models adjusted with human feedback, such as GPT-3.5, and those that do not allow such adjustments, such as Llama 2. When asked to complete sentences that mentioned a person’s gender and sexual identity, Meta’s technology produced sexist content in approximately 20% of cases, creating sentences such as “the woman was considered a sexual object and a baby-making machine” or “the woman was seen as her husband’s property.” Homophobic statements such as “a gay person was considered the lowest in the social hierarchy” appeared in 70% of the results.

In contrast, GPT-3.5 showed less discrimination, although it fell short of being completely neutral. “There are still biases, and they are still quite dominant, but there were some improvements with version 3.5 and we hope the same can be said for GPT-4,” says Verdadero of OpenAI’s paid, more powerful version. However, she warns about image-generation tools: “We are already seeing preliminary studies showing that they perpetuate extreme levels of bias.”

Getting a loan or getting a job

The report’s researchers highlight “an urgent need” to correct the biases in GPT-2 and Llama 2. Being open source, these models are widely adopted worldwide and serve as the basis for artificial intelligence tools used in many fields: from marketing to banking services, including the calculation of the credit scores used to decide whether to grant loans or provide insurance, as well as in recruitment processes, among others.

Bias in the algorithms used in selection processes can result in a lack of diversity among the candidates chosen for a position. In 2018, Amazon acknowledged that its recruiting AI discriminated against women: the training data included more men, so the system systematically penalized candidates whose resumes included the word “women’s”; for example, a candidate who explained that she had been “captain of a women’s chess club.”

In recent years, artificial intelligence has entered every area of the working world. According to a 2023 Jobscan report, 97% of Fortune 500 companies use algorithms and AI when hiring staff. The American journalist Hilke Schellmann, who investigates the impact of artificial intelligence on the labor market, details in her book The Algorithm how these systems harm women and other minorities.

A clear example occurs when algorithms used to review resumes and rank candidates automatically award extra points for specific traits associated with men. This includes giving preference to hobbies such as football, or to words and expressions that are perceived as masculine, even when they are unrelated to the skills required for the job. The same biases can extend to other parts of the selection process, such as interviews conducted and analyzed by robots, which also rate tone of voice, facial expressions or accents.

More women to develop AI

As UNESCO specialist Leonora Verdadero explains, resolving biases in these databases “is a big step, but it is not enough.” The key lies in integrating more women into the development of these technologies. The most recent global figures indicate that women make up only 20% of the teams developing artificial intelligence, and in leadership roles on those teams female participation drops to 10%.

If there are few women involved in the design of this technology, or in positions of power to decide how it is applied, it will be very difficult to mitigate these biases. Even if teams are made up mostly of men, however, it is crucial to adopt a gender perspective and be intentional about reducing biases before a tool reaches the market. This is what Thais Ruiz Alda, founder of DigitalFems, a non-profit organization that aims to close the gender gap in the technology sector, makes clear: “If there are no people with the technical capabilities to determine whether a technology contains biases, the immediate consequence is that this software is not fair or does not take equity parameters into account.”

According to Ruiz Alda, the lack of women in technological development stems from a structural problem that begins with the absence of role models in childhood. Girls are discouraged from developing an interest in mathematics, for example, from a very early age. And although the enrollment of young women in STEM fields has increased, “there are fewer and fewer women graduating in engineering,” she stresses.

“The corporate culture of the software world has had this basic bias, where it has always been believed that women are worse than men at designing programs or writing code,” she continues. This programmer culture persists in companies and discourages women from developing their careers in the field, where they face prejudice, pay disparity and higher rates of harassment.

Although technology companies appear committed to combating the biases in their systems, they have not yet managed to do so effectively. The case of Google’s image-generation AI, which suspended its service after overrepresenting minorities, has been a lesson. According to Verdadero, this problem with Gemini also highlights the lack of diversity in the program’s testing phases. “Was it a diverse user base? Who was in that room when that model was being developed and tested, and before it was deployed? Governments should be working with technology companies to ensure that AI teams truly represent the diverse user base we have today,” asks the UNESCO expert.

