4 State AI Bills To Watch In 2nd Half Of 2024

By Vin Gurrieri · 2024-07-10 17:04:30 -0400 ·

After Colorado recently moved to the forefront of regulating artificial intelligence in the workplace, numerous other states across the ideological spectrum — including conservative bastions like Oklahoma — are considering legislation of their own. 

Other states are following Colorado's lead in enacting legislation regulating artificial intelligence in the workplace. (iStock.com/Dragon Claws)

In May, Colorado placed a prominent stake in the ground when Gov. Jared Polis, albeit hesitantly, signed into law S.B. 205, which directs companies that develop or use a "high-risk artificial intelligence system" to take reasonable measures to avoid algorithmic discrimination. 

The Centennial State's law defined high-risk AI systems as any that "makes or is a substantial factor in making a consequential decision." Some specific requirements for companies that use such systems — called "deployers" in the law's parlance — include adopting a risk management policy that governs use of that technology that is reviewed regularly, periodically completing an "impact assessment" of the AI system, and taking "reasonable care" to safeguard against "known or reasonably foreseeable risks" of algorithmic bias.

California has pending legislation of its own that experts say may be consequential, but a wide swath of other states, including in traditionally conservative jurisdictions, aren't shying away from entering the mix.

"It's not clear to me that efforts to regulate workplace AI are necessarily tied to a state's ideological viewpoint," said Adam Forman, co-leader of the AI practice group at Epstein Becker Green.

While bills and laws may vary across states, Forman said consistent through-lines that have started to emerge include a general alignment with guidelines unveiled by the Biden administration in October 2022 for safeguarding civil rights when using AI as well as concepts baked into the European Union's expansive new AI law.

Those elements include ensuring that technology is tested and is free of bias. Other concepts that could cut across AI legislation in different states are requirements for companies to afford those subject to AI technology the ability to opt out and mechanisms that govern how data is stored or used by employers, he said.

"In some fashion, you're going to be required to show a lack of bias, and it's going to be a transparent requirement where you're going to have to share it in a public fashion of some sort, and you're going to have to do it regularly and repeatedly," Forman said. "I think that can be consistent through-line regardless of whether it's progressive or conservative states."

Here, Law360 looks at four bills to regulate the use of AI in the workplace that bear watching in the second half of 2024.

Ill. Bill Would Codify "Impact Assessments" 

Introduced in February in the Illinois General Assembly, H.B. 5116, or the Automated Decision Tools Act, would in part require so-called deployers of automated decisionmaking tools to conduct impact assessments on the technology.

Those assessments, which the bill requires be done every year, must test the automated tool's "potential adverse impacts" and outline the "safeguards implemented" by a deployer to address "any reasonably foreseeable risks of algorithmic discrimination" that the deployer knows about at the time the assessment is done.

Automated decision tools covered under H.B. 5116 would include systems that use artificial intelligence and that were developed to "be a controlling factor in making consequential decisions." The bill specifically cites employment-based decisions, like those regarding hiring, firing, pay or task allocation, as the sort that would be covered under the term's definition.

Other key elements of the bill, which would also apply to education, housing and other contexts, include notification requirements and a mandate that deployers have a "governance program" in place containing "reasonable administrative and technical safeguards" to manage AI-related bias risks. 

If the bill is enacted in its current form, initial impact assessments would have to be completed by 2026, with another due every year thereafter. The state's attorney general would have the authority to enforce the law in court, and starting in 2027, private individuals would be able to lodge suits alleging algorithmic bias.

David Walton, chair of Fisher Phillips' artificial intelligence team, said Illinois in recent years has been at the forefront of enacting privacy protections and safeguards on the use of technology, including the Artificial Intelligence Video Interview Act.

That law, which took effect in 2020, regulates the use of AI to analyze videos submitted by job applicants, and H.B. 5116 could build on those and other safeguards that the Prairie State has codified, he said.

"Illinois has been fairly aggressive in adopting state legislation on AI," Walton said. "Based upon Illinois' history, I wouldn't be surprised if they passed something too, and they try to take the lead on this."

Mass. Eyes Key Role for Regulators

In Massachusetts, a bill is pending that similarly seeks to regulate employers' use of "automated decision systems" or algorithms that are used to take employment-related actions.

The bill, H.B. 1873, titled "An Act preventing a dystopian work environment," was introduced last year and is currently pending in the state legislature. It would impose a requirement on employers to notify workers that an automated decision system, or ADS in the bill's vernacular, will be used and mandate that companies regularly conduct impact assessments of automated systems they use and their impact on workers.

If enacted, it would also mandate that employers not rely solely on the results of an ADS when setting wages, and that a company "conduct its own evaluation" of a worker independent of the automated tool before any hiring, termination or disciplinary decisions are made.

That includes "establishing meaningful human oversight by a designated internal reviewer" to "corroborate" the results of the ADS, according to the bill.

Marjorie C. Soto Garcia, a partner at McDermott Will & Emery LLP, said a developing trend in AI-related legislation is the incorporation of provisions that empower state agencies or state attorneys general to be notified about the results of bias audits. The bill in Massachusetts highlights that growing pattern, she said.

"In Massachusetts, for instance, we have House Bill 1873, which is pending. And this one is notable [because] it requires the disclosure of use of [automated employment decisionmaking tools] not only to employees but also to the state labor department," Soto Garcia said.

NJ Could Bar Sales of AI-Infused Products

In New Jersey, lawmakers in February floated a bill — A.3854 — aimed at combating workplace bias through automated employment decisionmaking tools.

The bill, which is pending before the New Jersey State Assembly, includes provisions requiring companies that use AEDTs to post on their websites summaries of the most recent bias audit they conducted of the technology and imposes various notification requirements.

A unique element of the bill is that it would bar an AEDT from being sold or used in the Garden State unless the tool has been subjected to a bias audit within a year of being sold or used and the developer has implemented any recommendations from the most recent audit.

Soto Garcia said New Jersey's bill "does mirror some aspects" of New York City's Local Law 144, which took effect in 2023. That law requires employers that use AEDTs to audit them for potential discrimination, publicize the results of those audits and alert workers and job applicants that such tools are being used.

But New Jersey's bill "takes things a bit farther" by preventing the sale of the tools unless specified criteria are met, according to Soto Garcia.

Soto Garcia said the bill is "interesting in that it goes even farther than just, 'OK, I'm an employer, I've decided to use this AEDT [and] here's all the notice that I need to provide.'"

"So even before you're purchasing this technology, the vendor needs to kind of make sure that they have … gone through this process before they even sell you the technology," she said.

Okla. Bill Enumerates Citizens' Rights Pertaining to AI

Though progressive states are moving quickly to at least consider legislation that regulates AI-infused technology, conservative states are also seemingly receptive to legislation in the AI space.

One such state is Oklahoma, where H.B. 3453 lays out a list of eight rights for people in Oklahoma with respect to artificial intelligence.

That list includes the right of people in the state to be notified when they are interacting with an AI system rather than a human "in an interaction where consequential information is exchanged."

Other rights that would be established if the bill is enacted include being notified if legally binding documents are generated entirely by an AI system absent any human review, and being notified when exposed to AI-generated images or text that are presented in a way that could reasonably lead people to believe they were human-made.

The bill also proposes to outlaw algorithmic discrimination based on any legally protected category.

Finally, H.B. 3453 includes a multipronged definition of "artificial intelligence." Covered systems are those that perform tasks "under varying and unpredictable circumstances without significant human oversight" or that can learn from the data sets they are fed, among other elements.

The bill passed the state's House of Representatives in March and is currently pending in the Senate.

Walton of Fisher Phillips said Oklahoma's foray into potentially regulating artificial intelligence shows the issue isn't one that will just be confined to progressive-leaning states.

"It's in Oklahoma, you know, that's a pretty conservative place," Walton said. "So this is an issue that I think crosses the political aisles."

--Additional reporting by Amanda Ottaway, Anne Cullen, Allison Grande, Benjamin Morse and Alex Davidson. Editing by Amy Rowe and Nick Petruncio.
