Why AI’s flaws are hurting girls most

Fast Company Tech
by Tarika Barrett
February 27, 2026
Recently, Grok AI faced criticism after users found it was creating explicit images of real people, including women and children. Although xAI has since implemented some restrictions, the incident revealed a serious weakness: without safeguards and diverse perspectives, girls and women are put at greater risk. The dangers artificial intelligence poses to women and girls are real and happening now, affecting their mental health, safety, healthcare, and economic opportunities.

Last fall, a mother discovered why her teenage daughter’s mental health had been deteriorating: It was a result of conversations with a Character.AI chatbot. She’s not alone. Aura’s State of Youth Report, released in December, found that parents believe technology has a more negative effect on girls’ emotions, including stress, jealousy, and loneliness: 51% said so, compared with 36% for boys. That’s unacceptable, and we need to do better.

The risks extend beyond mental health. OpenAI recently reported that more than 40 million Americans seek health information on ChatGPT daily. As AI in healthcare expands, the consequences of biased training data can be dangerous. AI models trained predominantly on male health data produce worse outcomes for women. For instance, an AI model designed to detect liver disease from blood tests missed 44% of cases in women, compared with 23% in men.

Uneven playing field

In the workplace, AI is not leveling the playing field. Despite laws prohibiting discrimination, AI-powered hiring tools have repeatedly raised concerns about bias, fairness, and data privacy. A study from the University of Washington found that in AI resume screenings, the technology favored female-associated names in only 11% of cases.

These failures reflect who is building our technology. Women make up just 22% of the AI workforce. When systems are designed without women’s perspectives, they replicate existing inequities and introduce new risks. The pattern is clear.
AI is failing girls and women.

Pivotal moment

This could not come at a more pivotal moment in the job market. A quarter of the roles on LinkedIn’s latest list of the 25 fastest-growing jobs in the United States are tech-related, with AI engineers at the top. Decisions about how AI is designed today will shape access to jobs, healthcare, education, and civic life for decades. It is critical that women play an active role in developing new AI tools so that inequity is not baked into the systems that increasingly govern our lives.

Young women are not disengaged from AI. Research conducted last year by Girls Who Code, in partnership with UCLA, found that young women are deeply thoughtful about the dual nature of technology. They see its potential to advance healthcare, expand educational access, and address climate change. They are also aware of its dangers, such as bias, surveillance, and exclusion from development. This isn’t blind optimism; it is a perspective that is often missing from today’s AI development.

Creating technology is an exercise of power that carries great responsibility. Because girls are often the most affected by AI’s failures, they must be empowered to help lead the solutions. Women like Girls Who Code alumna Trisha Prabhu, who developed the anti-bullying tool ReThink, exemplify this. Latanya Sweeney, recognized as one of the top thinkers in AI, founded Harvard’s Public Interest Tech Lab. Their achievements demonstrate what is possible when women lead in tech development.

Smart steps

If we want safer, more responsible AI systems, three steps are essential. First, computer science education should integrate social impact. Coding cannot be taught in isolation from its consequences. Students should learn technical skills alongside critical analysis of how technology shapes communities and lives. This approach produces results.
For instance, one Girls Who Code student used the skills she learned to create AIFinTech, an app that helps immigrant families manage their personal finances.

Second, women must be represented in AI development and governance, particularly those from historically underserved communities. They need seats at the tables where AI systems are designed, tested, and regulated. That means ensuring gender diversity on AI ethics boards and making sure government AI committees represent the demographics most affected.

Finally, how we evaluate artificial intelligence needs to evolve. Today, AI is assessed by efficiency, accuracy, and profitability. We must also evaluate health, equity, and well-being, especially for girls and young women. Before an AI system is deployed in a high-stakes environment such as healthcare, it should be required to pass tests for gender bias and demonstrate that it does not produce disparate outcomes. New York City, for example, requires employers that use automated employment decision tools to undergo an independent bias audit annually.

We do not have to accept AI’s flaws by default. We are witnessing AI’s impact on girls in real time, and we must seize the opportunity to change course while the technology is still being shaped. When girls are given the chance to lead in AI, they will build safer systems not just for themselves, but for everyone.