Mobley v. Workday: What Happens When AI Acts Like an Agent (Part III of IV)

U.S. employment law holds employers responsible for discriminatory hiring practices, even when those decisions are made through a third party. That’s one of the reasons many companies hand off their early-stage screening to AI software vendors and recruiting agencies. It keeps the hiring engine running and, until recently, offered a kind of legal cushion.

But that cushion may be slipping away.

In Mobley v. Workday, a federal court acknowledged that although a software vendor may not be liable as an “employment agency,” it can be considered an “agent” of its client employers under agency theory and, therefore, may be liable for the decisions it makes while procuring and screening employees on their behalf. In other words, when a software product helps decide who moves forward in the hiring process, that product, and the company behind it, can carry legal accountability.

Derek Mobley is a Black veteran over 40 with a solid background in IT and finance. Since 2017, he has applied to more than 100 jobs through systems that used Workday’s software. He never received a callback. He later filed a lawsuit against Workday, alleging that its AI failed to screen applicants fairly and played a central role in hiring decisions, including rejections. The allegation points to a familiar pattern: an AI that learned from years of biased hiring data, quietly favoring some candidates while filtering others out.

The fact that the court allowed the case to move forward (as a collective action) on the theory that Workday acted as an “agent” of its client employers marks a major shift in how AI hiring tools will be viewed going forward. If a software product performs early-stage selection, rejecting candidates or ranking them in ways that influence human decisions, it is no longer just a technical tool. The company behind it can be held liable for discrimination.

This idea has wide reach.

Any company offering AI-driven screening software, résumé filters, personality assessments, or ranking engines could soon be held to similar standards. Tools that recommend, reject, or prioritize applicants may now sit in the same legal territory as a recruiter or a contractor. And if those tools lead to unfair outcomes for protected groups based on race, age, gender, or disability, both the employer and the vendor could face real consequences.

This legal development pushes hiring leaders to ask tougher questions: Who designed the algorithm? What data was it trained on? What decisions is it making, and what impact do those decisions have? Are those decisions surfacing talent fairly or repeating past exclusions? These are the baseline questions every party involved—employers, agencies, AI vendors, or any other agents—must answer before making hiring decisions on anyone’s behalf.

These questions extend far beyond hiring managers. They apply to procurement teams, legal counsel, diversity officers, and executive leaders. If your organization uses third-party tech to evaluate people, you are part of the system that decides who gets seen.

How Does Equity Fit In?

Whether or not you choose to support increasing diversity, discrimination cannot be the basis of your company policies. However you define “equity,” and whether or not you embrace the term, you still cannot treat people unfairly based on protected characteristics. And in the end, there’s only one clear way to avoid discrimination: make equity a core part of your hiring ecosystem.

Discrimination happens when systems treat people unequally. But it also happens when systems treat people equally but inequitably. Imagine two job candidates applying for the same role. One went to a top university, had internships through family connections, and worked jobs that built their résumé. The other is just as talented but went to a lesser-known college, worked two jobs to support their family, and didn’t have the same résumé polish because they didn’t have the same access. If you treat them equally, using a résumé-scanning AI trained on past “top performer” profiles, it will likely favor the first candidate every time. That’s not because the second candidate is unqualified. It’s because the system equates privilege with potential.

This is how treating people “the same” can still lead to discrimination. This form of discrimination often shows up not through race, gender, or religion directly but through factors like socioeconomic background, historical access to opportunity, or access to professional networks, all of which closely correlate with protected characteristics. Equal treatment without equity just rewards the advantages some people had to begin with. Equity means adjusting the process so it sees real talent, not just polished résumés.
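For readers who want to see the mechanism rather than take it on faith, here is a minimal, hypothetical sketch in Python of how a screening model trained on historical “top performer” labels can end up scoring privilege instead of ability. Every feature name, number, and model choice below is an illustrative assumption, not a description of Workday’s or any other vendor’s actual system.

```python
# Illustrative sketch only: invented data and features, not any real vendor's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# True ability: identically distributed for everyone; it is what hiring should measure.
ability = rng.normal(0.0, 1.0, n)

# "Access": elite school, connected internships, résumé polish. Independent of ability here.
access = rng.binomial(1, 0.5, n)

# Historical "top performer" labels mix ability with access, because access determined
# who got the roles and visibility that generate such labels in the first place.
label = (0.5 * ability + 1.5 * access + rng.normal(0.0, 1.0, n)) > 1.0

# The model only sees résumé-visible proxies; ability itself never appears as a feature.
school_tier = access + rng.normal(0.0, 0.3, n)   # correlates with access
internships = 2 * access + rng.poisson(1.0, n)   # correlates with access
X = np.column_stack([school_tier, internships])

model = LogisticRegression().fit(X, label)

# Two equally able candidates: one with access, one without.
with_access = [[1.2, 3]]
without_access = [[0.1, 1]]
print("score, candidate with access:   ", model.predict_proba(with_access)[0, 1])
print("score, candidate without access:", model.predict_proba(without_access)[0, 1])
# The first candidate scores higher even though ability was never in either profile:
# the training labels taught the model to read privilege as potential.
```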

Equity asks what barriers exist, who is being excluded, and how to correct them. Without equity, bias built into history, data, or process keeps repeating itself. Legally and logically, equity is the most reliable way to prevent discrimination because it’s the only approach that actively works to remove disadvantage.

I cover this in The Equity Edge with clarity and force. Equity is not something that belongs to HR departments or training sessions. It must run through the entire hiring ecosystem. That includes the vendors you trust, the tools you implement, and the assumptions that power both. Your entire hiring infrastructure either expands access or limits it. And if your software can’t explain how it sees talent, you may already be at risk.

So, before your next hiring tech investment, take a closer look at what your vendors promise and what your systems actually do. Once the screening begins, the responsibility stays with you.

Jennifer Tardy