
July 15, 2025

Breaking the Algorithm: How Women Are Reshaping AI Development

Published by Coyotiv for the Diva Conference: “Dive into AI”

When Joy Buolamwini first discovered back in 2017 that facial recognition software couldn’t detect her dark skin until she put on a white mask, she uncovered a truth that would reshape how we think about artificial intelligence. This moment at MIT’s Media Lab wasn’t just a technical glitch; it was a revelation about who gets to build the future and whose voices are heard in the algorithms that increasingly govern our lives.

Today, as AI systems influence everything from hiring decisions to healthcare diagnoses, women technologists are leading the charge to ensure these powerful tools work fairly for everyone. Their work isn’t just about fixing code; it’s about fundamentally reimagining how we approach AI development with inclusion, ethics, and human dignity at its core.

The Hidden Bias in Our Machines

The numbers tell a stark story. A study by the Berkeley Haas Center for Equity, Gender and Leadership, published in the Stanford Social Innovation Review, analysed 133 AI systems across different industries and found that about 44 percent of them showed gender bias, and 25 percent exhibited both gender and racial bias. But behind these statistics are real-world consequences that affect millions of people daily.

Consider the recruitment AI systems that consistently rank male candidates higher than equally qualified women, or the healthcare algorithms that misdiagnose conditions in women because they were trained primarily on male patient data. These aren’t edge cases; they’re systematic failures that reveal the urgent need for more diverse perspectives in AI development.

The root of the problem lies in both the data and the developers. According to 2019 estimates from UNESCO, only 12 percent of AI researchers are women, and they “represent only six percent of software developers and are 13 times less likely to file an ICT (information, communication, and technology) patent than men.” As of 2025, that figure has risen to only 18 percent. When the teams building AI systems lack diversity, the blind spots become features, not bugs.

Pioneers Breaking New Ground

The women leading the fight against AI bias aren’t just researchers; they’re at the forefront of changing how we think about technology’s role in society. Dr. Joy Buolamwini, founder of the Algorithmic Justice League, has become a global voice for ethical AI through her groundbreaking research exposing bias in facial recognition systems. Her work with the Gender Shades project revealed that commercial facial analysis programs had error rates of up to 34.7% for dark-skinned women, compared to just 0.8% for light-skinned men.
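To make that kind of disaggregated audit concrete, here is a minimal Python sketch of the approach Gender Shades popularised: instead of reporting a single overall accuracy number, slice the errors by demographic group. The data and group labels below are purely illustrative placeholders, not the actual Gender Shades benchmark.

```python
import pandas as pd

# Illustrative evaluation results: one row per image, with a demographic
# group annotation and whether the model classified it correctly.
results = pd.DataFrame({
    "group": ["darker_female", "darker_female", "lighter_male", "lighter_male",
              "darker_male", "lighter_female"],
    "correct": [0, 1, 1, 1, 1, 0],  # 1 = correct, 0 = error
})

# Disaggregated error rates: the overall average can look fine while one
# subgroup fails far more often, which is exactly what Gender Shades exposed.
error_rates = 1 - results.groupby("group")["correct"].mean()
print(error_rates.sort_values(ascending=False))
```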

Dr. Timnit Gebru, whose research collaboration with Buolamwini helped expose these disparities, has continued to push boundaries in AI ethics research. Her work on the social implications of large language models has shaped industry conversations about responsible AI development.

Chinasa T. Okolo, recently named one of Time’s 100 most influential people in AI, focuses on how AI systems impact the Global South, ensuring that solutions work for diverse populations worldwide. Her research at the Brookings Institution examines how AI governance can be more inclusive and equitable.

These women all share a common approach: rather than simply identifying problems, they build solutions that center human welfare and dignity.

Reshaping AI from Within

The impact of women in AI extends far beyond individual research projects. They’re fundamentally changing how organizations approach AI development. Companies are beginning to implement what researchers call “fairness-aware machine learning”: development practices that actively test for and mitigate bias throughout the AI lifecycle.
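In practice, testing for bias throughout the lifecycle usually means wiring group-level metrics into model evaluation. The sketch below uses the open-source Fairlearn library to slice accuracy and selection rate by a sensitive attribute; the tiny arrays and the attribute itself are placeholder data, not results from any real system.

```python
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Placeholder evaluation set: ground-truth labels, model predictions,
# and a sensitive attribute recorded for each example.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
gender = ["F", "F", "M", "F", "M", "M", "F", "M"]

# Report every metric per group instead of as a single overall number.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)

print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # largest gap between groups for each metric
```

A team might, for example, refuse to ship a model whose between-group gap exceeds an agreed threshold, turning fairness from an afterthought into a release gate.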

At MIT, researchers have developed new debiasing techniques that maintain accuracy while improving fairness for underrepresented groups. These approaches don’t just patch existing systems; they reimagine how AI models learn and make decisions.
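The published techniques differ in their details, but one common and easy-to-picture family reweights training examples so that under-represented groups carry more influence during learning. The snippet below is a rough sketch of that general idea on synthetic data, not the specific MIT method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: features X, labels y, and a group label per example,
# with group "b" deliberately under-represented.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)
group = rng.choice(["a", "b"], size=1000, p=[0.9, 0.1])

# Inverse-frequency weights: rare groups count more, so the model cannot
# minimise its average loss by fitting only the majority group.
counts = {g: int((group == g).sum()) for g in np.unique(group)}
weights = np.array([len(group) / (len(counts) * counts[g]) for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)
```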

The influence extends to corporate practices as well. Tech companies are increasingly adopting ethical AI frameworks that prioritize diverse testing, inclusive design processes, and ongoing bias monitoring. This shift represents a move from reactive fixes to proactive inclusive design.

The Algorithmic Justice Movement

What makes the current moment unique is how women technologists are organizing collectively to address AI bias. The Algorithmic Justice League, founded by Buolamwini, exemplifies this movement. Their mission is clear: “Fight algorithmic bias with us. We want the world to remember that who codes matters, how we code matters, and that we can code a better future.”

This movement combines technical expertise with social activism, creating a new model for how technologists can engage with the broader implications of their work. These women aren’t just building better algorithms; they’re building a more equitable future where technology serves all people, not just those who look like the people who built it.

Building More Inclusive AI Systems

The solutions emerging from this work are as diverse as the problems they address. Researchers are developing new methods for creating more representative training datasets, building in fairness constraints during model development, and creating better evaluation metrics that measure AI performance across different demographic groups.

One promising approach involves what researchers call “intersectional AI”: systems that consider how different forms of bias interact and compound. Instead of treating gender bias and racial bias as separate issues, these systems recognize that a Black woman’s experience with an AI system might be different from both a white woman’s and a Black man’s.
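As a rough illustration of why the intersectional view matters, the snippet below contrasts error rates computed for each attribute on its own with error rates computed for every combination of attributes. The tiny table is invented purely for demonstration.

```python
import pandas as pd

# Made-up predictions with two demographic attributes per example.
df = pd.DataFrame({
    "race":    ["Black", "Black", "white", "white", "Black", "white"],
    "gender":  ["woman", "man",   "woman", "man",   "woman", "woman"],
    "correct": [0,       1,       1,       1,       0,       1],
})

# Marginal view: each attribute on its own blurs where the failures land.
print(1 - df.groupby("gender")["correct"].mean())
print(1 - df.groupby("race")["correct"].mean())

# Intersectional view: evaluating every combination together reveals that,
# in this toy data, the errors fall entirely on Black women.
print(1 - df.groupby(["race", "gender"])["correct"].mean())
```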

The technical innovations are matched by institutional changes. Organizations are implementing diverse hiring practices for AI teams, establishing ethics boards with diverse membership, and creating feedback loops that allow affected communities to influence AI development processes.

The Path Forward

As we look toward the future, the work of women in AI bias research offers a roadmap for more inclusive technology development. Their approach demonstrates that diversity isn’t just a moral imperative; it’s a technical necessity for building AI systems that work for everyone.

The companies and organizations that embrace this approach will build better, more robust AI systems. They’ll avoid the costly mistakes of biased algorithms and the reputational damage that comes with discriminatory technology. Most importantly, they’ll contribute to a future where AI enhances human potential rather than perpetuating existing inequalities.

For those attending the Diva Conference and beyond, the message is clear: the future of AI depends on diverse voices being heard throughout the development process. Women technologists have shown us that breaking the algorithm isn’t just about fixing code; it’s about building a more just and equitable world.

The algorithms of tomorrow will be shaped by the choices we make today. Thanks to the pioneering work of women researchers and activists, we have both the tools and the framework to build AI systems that work for everyone. The question isn’t whether we can create fair AI; it’s whether we have the will to do so.

As we continue to “Dive into AI,” let’s remember that the most powerful algorithm we can build is one that centers human dignity and works for all people, regardless of their background or identity.