Mobley v. Workday: AI Isn’t Neutral; It’s Trained on Our Biases (Part II of IV)

If you hire a consultant to fire someone for you, the decision is still yours. If your attorney sends a cease-and-desist letter, you’re still the one behind the message. Whether it’s a contractor, a proxy, a middleperson, or an algorithm, the responsibility doesn’t vanish. You can outsource the action, but the outcome still belongs to you.

Now swap out that consultant or attorney for a powerful AI tool scanning thousands of résumés. Your company selected it, perhaps bought it from an outside vendor, and deployed it. It acts on your behalf. So, if it screens out qualified people based on patterns of past exclusion, the responsibility still falls on you.

That’s the crux of Mobley v. Workday. As we discussed in Part I, the lawsuit has recently made headlines, cutting through the comfort of “AI objectivity” that companies often shield themselves with. As the case has progressed, something has become very clear to the recruiting world: there is now legal ground to argue that bias doesn’t disappear when we automate it. It evolves. It multiplies. It learns to hide inside clean dashboards and neat recommendation scores.

Since 2017, Derek Mobley has applied for more than 100 jobs, all through employers using Workday’s hiring software. He’s a Black veteran over 40 with a background in IT and finance. He also lives with anxiety and depression. He never got a callback. Some rejections came so fast that it was clear no human had even glanced at his résumé. The software made the call, and it made it over and over.

However, Mobley’s complaint doesn’t accuse anyone of writing discriminatory code on purpose. The case is all about how AI learns from biased data, from hiring decisions that, year after year, favored younger, White, non-disabled candidates. Nothing new. It’s just a reflection of what often happens in the real world. This time, instead of a hiring manager or a recruiter, AI picked up those patterns. It followed the rules of yesterday. And it left many qualified candidates out.

Here’s what this reveals: AI is never a clean slate, even though we can’t fully see how it learns from and acts on the data we give it. There’s an age-old computer acronym people used to learn: GIGO, short for “Garbage In, Garbage Out,” and it still holds true, even for AI. If you feed gold (clean, unbiased data) into AI, it can return unbiased results. But if you feed it biased data, it learns those patterns and produces more of the same. That remains true even if neither you nor the vendor realizes the data is flawed. AI reflects whatever we give it. And if the data is soaked in bias, the outcomes will be too. That’s how machine learning works.
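If you want to see how quickly that happens, here is a deliberately simplified Python sketch. Every number and variable name (skill, proxy, the group penalty) is invented purely for illustration; this is synthetic data, not anyone’s real system. The point it demonstrates: a model trained on biased historical decisions reproduces the disparity even though it is never shown anyone’s group membership.

```python
# Synthetic sketch of GIGO in machine learning. All data and names here are
# made up for illustration; this does not model any real vendor's tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # 0 = historically favored, 1 = historically excluded
skill = rng.normal(0, 1, n)          # equally distributed across both groups
proxy = skill + 1.5 * group + rng.normal(0, 0.5, n)  # a resume signal that happens to
                                                     # correlate with group membership

# Historical hiring decisions: driven by skill, plus a penalty applied to group 1.
past_hired = (skill - 1.2 * group + rng.normal(0, 0.5, n)) > 0

# The model is never shown `group`; it only sees skill and the proxy signal.
X = np.column_stack([skill, proxy])
model = LogisticRegression(max_iter=1000).fit(X, past_hired)

preds = model.predict(X)
for g in (0, 1):
    print(f"Predicted 'advance' rate for group {g}: {preds[group == g].mean():.0%}")
# Even with identical skill distributions, group 1 is advanced far less often:
# the model has learned yesterday's exclusion through the proxy signal.
```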

Training AI on raw historical data without removing its bias is like teaching a new chef to cook by watching a bad kitchen. If every past recipe was bland or burned, that new chef is going to serve the same mess, just faster.

So, what’s the fix? Clear as the problem is, we can’t solve it with better intentions. We need better questions. What exactly is the algorithm learning? And who does it consistently forget?

Start with an audit. But don’t limit the AI audit to functionality. Audit for impact. Does it reject more candidates from certain racial groups? Does it prefer younger applicants for certain roles? Do those patterns align with legally protected characteristics like race, age, or disability? That’s the threshold we need.
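In practice, the first pass of that impact audit can be as plain as comparing selection rates across groups. Here is a minimal, hypothetical Python sketch; the candidate records are made up, and in a real audit they would come from your applicant tracking system. It applies the four-fifths rule of thumb that U.S. regulators and employment lawyers commonly use as a screening threshold:

```python
# A minimal impact-audit sketch: compare selection rates across groups and
# flag large gaps using the four-fifths rule of thumb. Records are hypothetical.
from collections import defaultdict

# (group label, was the candidate advanced past the automated screen?)
records = [
    ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", False), ("Group B", False), ("Group B", True), ("Group B", False),
]

totals, advanced = defaultdict(int), defaultdict(int)
for group, passed in records:
    totals[group] += 1
    advanced[group] += int(passed)

rates = {g: advanced[g] / totals[g] for g in totals}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```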

Next, cut the cord between performance and preference. That means asking: is this screening method tied to real job performance, or just to past practices? If the system penalizes someone for not resembling a prior hire, it’s not measuring skill but recycling bias.
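One rough way to pressure-test that question: compare how strongly the tool’s scores track actual on-the-job performance versus how strongly they track resemblance to prior hires. The sketch below is hypothetical; the column names and numbers are invented solely to illustrate the comparison, and real data would come from your own screening scores and performance reviews.

```python
# Hypothetical sketch of "performance vs. preference": does the screening score
# track later job performance, or resemblance to past hires? Numbers are invented.
import numpy as np

screen_score = np.array([88, 75, 92, 60, 81, 70, 95, 66])              # tool's ranking score
job_performance = np.array([3.1, 4.2, 3.0, 4.5, 3.4, 4.0, 2.9, 4.4])    # later review ratings
similarity_to_past_hires = np.array([0.9, 0.5, 0.95, 0.3, 0.8, 0.45, 0.97, 0.35])

corr_perf = np.corrcoef(screen_score, job_performance)[0, 1]
corr_sim = np.corrcoef(screen_score, similarity_to_past_hires)[0, 1]

print(f"Correlation with actual job performance:    {corr_perf:+.2f}")
print(f"Correlation with resemblance to past hires: {corr_sim:+.2f}")
# If the second number is strong and the first is weak (or negative),
# the screen is recycling past preference, not measuring skill.
```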

This is where the principle of equity comes back into view. In The Equity Edge, I tackle this head-on in the chapter “Unquestioned Answers.” I explain how systems repeat what they’ve already been taught unless someone interrupts the cycle. Unless we force a break from the past, we won’t get a better future, just a faster rerun of old exclusions.

At a time when equity is feared or misunderstood, there’s a way to build it without setting off alarms. Want to know how? The Equity Edge is your guide. Start here.

Remember that Mobley’s case doesn’t rest on proving that Workday meant to discriminate. It rests on showing that Workday’s tools did discriminate, repeatedly, measurably, and unfairly. And that alone, under Title VII, is enough to trigger legal consequences.

In the end, AI won’t save us from bias unless we do the saving. We have to unlearn, untrain, and redesign. Because smart systems are still our systems. And when they act, they act in our name.

Ready for Part III?
Stay tuned. We’re going to talk about why AI vendors no longer get a free pass when the tools they sell screen people out.

Jennifer Tardy