Analysis


AI Guidance About-Face Shouldn't Alter Employers' Approach

By Anne Cullen · Jan. 28, 2025

The U.S. Equal Employment Opportunity Commission and U.S. Department of Labor recently scuttled online resources advising employers on how to curb the risk of workplace discrimination when they use artificial intelligence tools, but experts said that doesn't mean companies should change their game plans.


The EEOC and DOL pulled down guidance about the potential for AI tools to exacerbate workplace discrimination after President Donald Trump ordered a review of Biden-era AI policies. (AP Photo/Ben Curtis)

The EEOC removed a stack of AI-related documents from its website Monday that had warned about the potential for these high-tech tools to exacerbate workplace discrimination, and the DOL withdrew a best practices playbook last week that had called on developers and employers to be transparent with workers about the tools being used and conduct audits to check for biased outcomes.

The moves followed an executive order Thursday from President Donald Trump, titled "Removing Barriers to American Leadership in Artificial Intelligence," that decried "onerous and unnecessary government control" over AI systems. The order directed government agencies to review any policies crafted in accordance with a now-defunct AI directive by former President Joe Biden that had prioritized responsible development of algorithmic and other automated tools.

Olga Akselrod, a senior counsel in the American Civil Liberties Union's racial justice program who specializes in algorithmic discrimination in employment, said Trump's actions on AI "send a clear signal that federal AI policy under this administration will be to supercharge the development of investment into AI without guardrails to protect people from the real-world harms of these technologies."

Cooley LLP special counsel Joseph Lockinger, a management-side litigator whose work includes advising employers on AI tools, said the EEOC's backpedaling on AI "represents the removal of multiple years of work to create a framework for employers seeking to comply with federal law when using AI tools in the employment lifecycle."

However, despite the new hands-off approach, Lockinger and other management-side attorneys told Law360 that this doesn't mean workplaces are the Wild West when it comes to AI. Federal anti-discrimination laws such as Title VII of the Civil Rights Act and the Americans with Disabilities Act are still in play, Lockinger said.

"That's the background to all of this for employers. The guidance that was issued by the federal agencies largely just said that, to a certain extent," he said. "None of them were saying we're going to create new laws. They were basically saying the existing federal laws applying to employers apply when they use AI tools."

And he said that's still true.

"The reality is that most of the employment laws currently in place will continue to cover AI to the extent they implicate an employment-related concern, such as bias," Lockinger said.

Among the documents the EEOC pulled down was a 2023 fact sheet that enumerated a host of algorithmic decision-making tools the agency said could disproportionately harm one group of workers or applicants, including resume scanners, employee monitoring software, virtual assistants, chatbots and AI-infused tests or games used in the application process to assess candidates.

The agency explained that these machine learning systems must still comply with Title VII's prohibition on employment discrimination based on race, color, religion, sex and national origin, and it offered guidelines for companies to evaluate whether an AI program is disadvantaging a certain group.

That page disappeared alongside an earlier disability-focused iteration, in which the commission explained that algorithmic and AI tools deployed in the workplace can violate the ADA.

In that fact sheet, issued in 2022, the commission recommended that employers interested in an algorithmic decision-making tool first seek out details about the tool's accessibility options for people with disabilities, as well as any steps the vendor has taken to determine whether the algorithm unfairly flags qualities in candidates that are associated with disabilities.

The document the DOL retracted was a 17-page blueprint for AI developers and employers that prioritized worker empowerment as the North Star in the development and deployment of such technology.

Aside from championing transparency and regular checks for discriminatory outcomes, the DOL had called for the responsible use of any employee data gathered by an AI tool and recommended companies specifically assess the impact these programs could have on workers and job applicants with disabilities.

Landing pages the EEOC and DOL had crafted to gather AI resources have also been taken offline.

Akselrod of the ACLU said the EEOC's now-rescinded guidance "provided critical information to employers" and "reminded workers of their rights under the laws," and that removing it "will increase uncertainty and legal risk for employers and exacerbate existing harms to workers."

Matt Scherer, senior policy counsel for workers' rights and technology at the Center for Democracy and Technology, said the rapid pivot on AI policy is "unprecedented" and that businesses may be just as "nervous" about the rollback as civil rights advocates are.

"I was a management-side employment lawyer before I came to CDT, and I know that businesses look to technical assistance and other informal guidance from federal agencies when formulating their policies and procedures, particularly when dealing with emerging issues that courts have not yet addressed — like the use of AI in employment decisions," he said. "Companies aren't used to experiencing whiplash like this between administrations."

Although these guidance documents are no longer endorsed by the federal government, Fisher Phillips partner David J. Walton, a management-side lawyer who chairs the firm's AI team, agreed with Lockinger that companies are still obligated under federal law to keep their high-tech tools in check.

"It doesn't necessarily impact what employers should be doing. They should be developing guardrails on their own," Walton said. "Title VII already prevents the use of biased tools. Biden's EO didn't change that, and Trump's EO didn't change that."

Federal agencies like the EEOC and DOL may no longer be leading enforcement efforts on AI misuse, but experts highlighted that job applicants and employees can still go to court.

"The guidance being rescinded may make it less likely that the agencies on their own are going to pursue investigations, but that doesn't mean individual employees aren't going to sue under those laws and not find a receptive ear in the courts," Lockinger said.

In addition, experts said employers should gear up for a reinvigorated effort from the states to fill the void on AI left by the federal government.

"You're going to see a lack of activity on the federal level and the states step in," Walton said. "The states are going to be aggressive on this."

States are already at the forefront of this movement.

A landmark law Colorado Gov. Jared Polis signed in May directs companies that develop or use a "high-risk artificial intelligence system" to take reasonable measures to avoid algorithmic discrimination. It also requires disclosures about the use of an AI system.

And in August, Illinois Gov. JB Pritzker signed H.B. 3773, which amended the Illinois Human Rights Act to make clear that the statute is triggered when discrimination emanates from an employer's use of AI in a broad field of workplace decisions, including those related to hiring, firing, discipline, tenure and training.

The measure also requires companies to notify workers when AI is plugged into this wide array of decisions and bars companies from using ZIP codes in the calculus.

The Colorado and Illinois measures are set to take effect in 2026.

Maryland and New York City have additionally imposed notice requirements on companies that use AI to vet candidates or for other workplace processes. Regulators in California are moving closer to finalizing sweeping new rules that would govern AI usage, and a measure progressing through the Virginia Legislature aims to impose protections similar to those on the books in Colorado.

Epstein Becker Green employment partner Adam S. Forman, who frequently advises businesses on AI in the workplace, said the new administration's deregulatory approach "is only going to add more emphasis, and perhaps even urgency, at the state and local level to regulate the use of workplace AI in those jurisdictions."

"If the federal government does not implement a comprehensive law or regulation for workplace AI tools, I think we're going to see states pick up the pace to fill the gap," Forman said.

Forman said it's a good idea for employers to continue to look to the withdrawn guidance to stay in line with federal and state law.

"They still have value, but the value is more towards ensuring you're complying with existing laws," he said. "And it'll be a good head start, if not getting there all the way, to comply with new laws that are being considered currently."

--Additional reporting by Vin Gurrieri, Daniela Porat, Rae Ann Varona, Patrick Hoff, Amanda Ottaway and Allison Grande. Editing by Aaron Pelc and Emma Brauer.

For a reprint of this article, please contact reprints@law360.com.