Unmasking Bias in AI and What We Can Do About It

Here's a possible scenario. You apply for a job, and your résumé never reaches human hands. A machine scans it, flags something (your name, your zip code, your previous employer, your college) and quietly moves you to the bottom of the list. No feedback. No recourse. No one even knows it happened.

That’s the negative potential of AI at work. And if we’re not paying attention, bias isn’t just being repeated in our systems. It’s being scaled.

Bias in artificial intelligence is a reflection: a mirror held up to our society’s existing inequities, but encoded in something most people can’t see, let alone question. And when bias goes unchecked in AI, it doesn’t just mirror injustice; it multiplies it.

What Is AI Bias, Really?

AI bias happens when an algorithm produces results that are systematically unfair to people on the basis of race, gender, age, ability, geography, or socioeconomic background. And that bias often doesn’t start with the code—it starts with the data.

Most AI systems are trained on historical data. But history, as we know, is not neutral. It’s full of redlining, discrimination, underrepresentation, and harm. When you train AI on that kind of data, it learns patterns—and reproduces them.

As AI researcher Joy Buolamwini of the Algorithmic Justice League famously said, “If you have a system that is trained on biased data, even if your code is clean, the outcome will still reflect the bias” (source).

Real-World Consequences of Biased AI

Bias in AI isn’t just theoretical—it has material consequences for real people in health care, policing, education, finance, and hiring.

1. Facial Recognition Fails

Research from the MIT Media Lab found that commercial facial recognition systems misclassified darker-skinned women up to 35% of the time, compared with less than 1% for lighter-skinned men (source). These tools have already been used in policing, leading to false arrests and unjust surveillance.

2. Healthcare Disparities

A 2019 study published in Science found that an AI tool used to recommend health interventions was less likely to refer Black patients for treatment, even when they were sicker than white patients. The reason? The system used past health care costs as a proxy for need, ignoring the structural disparities that had kept costs lower for Black patients (source).
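The mechanics of that failure are easy to reproduce. The sketch below uses entirely synthetic data and hypothetical numbers (not the study’s actual dataset): it assumes two groups with identical underlying medical need, but lower recorded spending for one group due to unequal access to care. Ranking patients by cost, as the real tool effectively did, then refers far fewer patients from that group despite equal sickness.

```python
import random

random.seed(0)

# Synthetic patients: identical underlying need in both groups, but
# group "B" has historically lower recorded costs (less access to care).
patients = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    need = random.gauss(50, 10)                # true severity, same distribution for both
    access_penalty = 0.6 if group == "B" else 1.0
    cost = need * access_penalty + random.gauss(0, 2)  # recorded spending
    patients.append({"group": group, "need": need, "cost": cost})

# Policy: "refer the top 20% by need" -- but need is proxied by past cost.
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
referred = [p for p in patients if p["cost"] >= cutoff]

for g in ("A", "B"):
    total = sum(1 for p in patients if p["group"] == g)
    hit = sum(1 for p in referred if p["group"] == g)
    print(f"group {g}: {hit}/{total} referred ({hit / total:.1%})")
```

No model is trained here at all, which is the point: the unfairness lives in the proxy label, so even a perfectly accurate predictor of cost would reproduce it.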

3. Hiring Algorithms

In 2018, Amazon scrapped an AI recruiting tool after it was found to be biased against women. The system had been trained on résumés from the previous 10 years, which were predominantly from men—so the AI learned to downgrade résumés with terms like “women’s” or women’s colleges (source).

Where Bias Comes From

  • Historical Data
    AI learns from the past. If the past was discriminatory, the future will be too—unless we intervene.
  • Lack of Diverse Development Teams
    When people building AI systems come from similar backgrounds, blind spots grow. Inclusion at the table matters—not just morally, but mathematically.
  • Opaque Systems (a.k.a. Black Boxes)
    Many AI systems don’t show how decisions are made. Without transparency, it’s almost impossible to audit bias or challenge outcomes.
  • Optimization Over Equity
    AI is often trained to maximize efficiency or profit. But what’s “efficient” for a system can be dehumanizing for a person.

So, What Can Be Done?

AI doesn’t have to be biased. But fairness doesn’t happen by accident—it happens by design.

1. Build Inclusive Data Sets

Organizations must audit and diversify the data used to train algorithms. This includes removing proxies like zip codes, which often correlate with race and income, and ensuring underrepresented groups are fairly included.
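Proxies can be caught before a model ships by measuring how strongly each candidate feature predicts a protected attribute. One common tool for that is Cramér’s V, a 0-to-1 association score between two categorical variables. The sketch below uses synthetic data and hypothetical field names: in it, zip code tracks group membership closely while an irrelevant feature does not, and the score makes that visible.

```python
import math
import random
from collections import Counter

random.seed(1)

def cramers_v(pairs):
    """Association between two categorical variables (0 = none, 1 = perfect)."""
    n = len(pairs)
    joint = Counter(pairs)
    a_marg = Counter(a for a, _ in pairs)
    b_marg = Counter(b for _, b in pairs)
    chi2 = 0.0
    for a in a_marg:
        for b in b_marg:
            expected = a_marg[a] * b_marg[b] / n
            chi2 += (joint[(a, b)] - expected) ** 2 / expected
    k = min(len(a_marg), len(b_marg)) - 1
    return math.sqrt(chi2 / (n * k))

# Synthetic applicants: zip code tracks group closely; shoe size does not.
rows = []
for _ in range(2000):
    group = random.choice(["X", "Y"])
    zip_code = ("10001" if random.random() < 0.9 else "60601") if group == "X" \
        else ("60601" if random.random() < 0.9 else "10001")
    shoe = random.choice(["S", "M", "L"])
    rows.append((group, zip_code, shoe))

print("zip vs group :", round(cramers_v([(g, z) for g, z, _ in rows]), 2))
print("shoe vs group:", round(cramers_v([(g, s) for g, _, s in rows]), 2))
```

A feature that scores high here can smuggle a protected attribute into a model even after that attribute has been formally removed, which is why “we don’t collect race” is not, by itself, a fairness guarantee.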

2. Include Communities in the Design Process

“Nothing about us without us” applies to technology too. Equity requires co-creation. Involve the people most impacted by AI in its design, testing, and governance.

3. Demand Transparency and Accountability

Organizations using AI should disclose how decisions are made, what data is used, and how bias is mitigated. This isn’t just good ethics—it’s good business.
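One concrete, auditable disclosure is publishing selection rates by group. The sketch below, using hypothetical hiring-funnel numbers, applies the “four-fifths rule” familiar from U.S. employment-selection guidelines as a screening heuristic: if any group’s selection rate falls below 80% of the highest group’s rate, the outcome is flagged for review. This is a rule of thumb, not a legal determination.

```python
def four_fifths_audit(outcomes):
    """outcomes: {group: (selected, total)}. Returns (impact ratios, flagged groups)."""
    rates = {g: s / t for g, (s, t) in outcomes.items()}
    best = max(rates.values())
    ratios = {g: r / best for g, r in rates.items()}
    flagged = sorted(g for g, r in ratios.items() if r < 0.8)
    return ratios, flagged

# Hypothetical hiring-funnel numbers: (offers made, applicants).
outcomes = {"group_a": (90, 300), "group_b": (45, 250), "group_c": (60, 210)}
ratios, flagged = four_fifths_audit(outcomes)
for g, r in sorted(ratios.items()):
    print(f"{g}: impact ratio {r:.2f}")
print("flagged for review:", flagged)
```

The audit takes a dozen lines; what usually blocks it is not technical difficulty but the willingness to collect, disclose, and act on the numbers.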

4. Regulate with Equity in Mind

We need policies that ensure AI systems meet standards for fairness, explainability, and accountability. Initiatives like the EU’s AI Act and the U.S. Blueprint for an AI Bill of Rights are early steps (source)—but enforcement and grassroots input are critical.

5. Educate the Public

We all need to be part of this conversation. AI is shaping everything from job applications to court decisions. If people don’t understand how it works—or who it works for—then they can’t push back when it fails.

Final Thoughts: A Mirror and a Window

AI isn’t magic. It’s a mirror that reflects our systems, and a window into the future we’re building.

If we embed today’s biases into tomorrow’s machines, we will scale inequality faster than ever before. But if we build AI that centers justice, transparency, and human dignity, we have a chance to do something rare in history—design systems that correct for harm, not just repeat it.

That’s where equity leaders come in. Your voice is essential in shaping how technology sees us, treats us, and remembers us. This isn’t a moment to opt out. It’s a moment to opt in with clarity, courage, and collaboration.

At Buoyant, we help organizations use storytelling, strategy, and emerging tech to center people in a rapidly evolving world. Let’s work together to ensure AI doesn’t just predict behavior but also protects humanity.

Published March 31, 2025