Are AI Agents Contributing to Gender Stereotypes?

Some experts are raising the alarm

[Image: an AI app prompt on a smartphone. Is there another way AI is adversely affecting us? Solen Feyissa/Unsplash]

For understandable reasons, much of the alarm being raised about AI technology focuses on a fairly narrow band of adverse effects. AI can hallucinate incorrect answers, which could lead to errors both trivial and damaging. There’s also the phenomenon of AI chatbots encouraging their users to engage in harmful or even fatal behavior. But there’s another side to this as well: are AI agents reinforcing harmful gender stereotypes?

That argument is at the heart of an essay at The Conversation by Ramona Vijeyarasa, a professor at the University of Technology Sydney. Vijeyarasa points out that plenty of AI assistants are coded as female, including Alexa and Siri. When Bill Maher riffed on AI assistants on this week’s Real Time, the image he and his art department chose to represent them was an illustration of a woman. While some of these voices and personas can be changed, the default options are often feminine rather than masculine.

This, Vijeyarasa argues, can lead to problems. “These choices have real-world consequences, normalising gendered subordination and risking abuse,” she writes, citing a 2025 study revealing that “up to 50% of human-machine exchanges were verbally abusive.”

The authors of that particular work, published in the Journal of Development Policy and Practice in August 2025, covered a lot of territory, from the effects of AI voices on care work to how this could lead to “further reproduction of notions of non-consensual sexual activity.”


It isn’t difficult to see where Vijeyarasa is going with this argument. If a man spends enough time shouting at an AI agent’s feminine persona, he might have an easier time shouting at another person who has similar characteristics. Or, as Vijeyarasa puts it, “the design choices behind these technologies — female voices, deferential responses, playful deflections — create a permissive environment for gendered aggression.”

Is there a solution? Vijeyarasa points to a few approaches, from deeper regulation of AI to getting more women involved in the industry as a whole. There probably isn’t one all-encompassing fix for this problem, but that doesn’t mean steps cannot be taken to reduce its harm.

Tobias Carroll

Tobias Carroll lives and writes in New York City, and has been covering a wide variety of subjects — including (but not limited to) books, soccer and drinks — for many years. His writing has been published by the likes of the Los Angeles Times, Pitchfork, Literary Hub, Vulture, Punch, the New York Times and Men’s Journal. At InsideHook, he has…
