New AI Hiring Laws Are Here — And More Are Coming. Is Your Screening Process Ready?

Illinois and Texas AI hiring laws took effect January 2026. Colorado's follows in June. Here's what talent acquisition teams need to know about compliance, disclosure, and AI governance.

As of January 1, 2026, talent acquisition teams using AI for hiring in Illinois or Texas face new legal requirements, and Colorado's comprehensive AI law takes effect June 30, 2026. While Washington debates whether states should regulate AI at all, the laws are already on the books.

Here's what HR leaders need to know, and why the companies already thinking about AI governance have a significant head start.

The New Landscape

Three major state laws now govern, or will soon govern, how employers can use AI in hiring decisions:

Illinois (HB 3773) — Effective January 1, 2026

Illinois amended its Human Rights Act to explicitly prohibit AI-powered discrimination across the employment lifecycle: not just hiring, but recruitment, promotion, training, discipline, discharge, and the terms and conditions of employment. The law requires employers to notify applicants and employees when AI is used in any of these decisions.

Because HB 3773 amends the Illinois Human Rights Act, individuals can pursue claims through the IHRA's existing enforcement framework: filing a charge with the Illinois Department of Human Rights and, if necessary, pursuing civil litigation in circuit court. This makes Illinois one of the few states where employees have a legal pathway to challenge AI-driven employment discrimination.

The law also prohibits using zip codes as a proxy for protected classes, closing a common backdoor for algorithmic discrimination.

Texas (TRAIGA) — Effective January 1, 2026

Texas took a decidedly lighter approach with its Responsible Artificial Intelligence Governance Act. Signed by Governor Abbott in June 2025, TRAIGA prohibits the intentional use of AI to discriminate against protected classes, but its requirements on private employers are minimal.

Unlike earlier versions of the bill that would have required impact assessments and risk management programs, the enacted law imposes no disclosure requirements on private employers, no mandatory bias audits, and no transparency obligations for hiring AI. Critically, the law defines "consumer" to exclude individuals in an employment context, meaning its consumer protection provisions don't directly apply to hiring.

Disparate impact alone is not sufficient to establish a violation — TRAIGA requires proof of discriminatory intent. There is no private right of action; enforcement rests exclusively with the Texas Attorney General, with a 60-day cure period before penalties apply. The law also created the Texas Artificial Intelligence Council, though this seven-member advisory body is expressly prohibited from issuing binding rules or guidance.

For employers, the practical impact of TRAIGA is limited: don't intentionally use AI to discriminate, and keep documentation of your legitimate business purposes in case questions arise.

Colorado (SB 24-205) — Effective June 30, 2026

Colorado's law is the most comprehensive, and the most contested. Originally scheduled for February 1, 2026, the effective date was pushed to June 30, 2026 after a special legislative session in August 2025 failed to reach consensus on amendments. Further revisions remain possible during the 2026 legislative session.

When it takes effect, the law will require employers deploying "high-risk" AI systems to conduct annual impact assessments, maintain risk management policies, notify consumers when AI will influence consequential decisions (including employment), and provide an opportunity to appeal adverse AI-driven decisions with human review where technically feasible.

Violations constitute unfair trade practices under Colorado's Consumer Protection Act, carrying penalties of up to $20,000 per violation — counted separately for each affected consumer. The state attorney general has exclusive enforcement authority under the AI Act itself, though existing anti-discrimination and common law claims remain available to individuals.

Given the Trump administration's explicit criticism of Colorado's law and the ongoing possibility of amendments, employers should prepare for compliance while monitoring this space closely.

The Federal–State Tension

On December 11, 2025, President Trump signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence," aimed at establishing uniform federal AI standards and pushing back against state-level regulation. The order created an AI Litigation Task Force within the Department of Justice, charged with challenging state AI laws the administration deems inconsistent with federal policy. It also directed the Commerce Department to evaluate state laws and conditionally restrict federal broadband funding to states with "onerous" AI legislation.

But here's what matters for HR teams right now: the executive order doesn't, and can't, invalidate existing state laws. An executive order applies only to federal agencies; overturning state legislation requires either an act of Congress or a court ruling.

"A federal framework that preempts the state-level patchwork would be ideal, but appears unlikely." — Niloy Ray, employment attorney at Littler, speaking to HR Dive

The practical implication? Employers need to comply with state requirements regardless of federal policy direction.

What This Means for Hiring AI

For talent acquisition teams using AI-powered tools for screening, interviewing, or candidate evaluation, these laws create three immediate obligations:

1. Disclosure requirements

Illinois requires explicit notice when AI is used in employment decisions. Colorado will require similar notice when its law takes effect in June. Even in Texas, where private employer disclosure isn't mandated, documenting your AI practices creates a defensible record.

2. Documentation and audit trails

Colorado's impact assessment requirement, once effective, will mean documenting how your AI tools work, what data they use, and how you're managing discrimination risk. Even in states without explicit requirements, this documentation protects you if questions arise.

3. Human oversight mechanisms

Colorado will require appeal mechanisms for AI-driven adverse decisions. California's Civil Rights Council regulations, effective October 2025, clarify that existing anti-discrimination law applies to automated decision systems used in employment and establish employer accountability for vendor-operated AI — including four-year record retention requirements. The common thread across jurisdictions: AI can't be a black box that makes final decisions without human review.

The Governance Advantage

Companies that have already built AI governance frameworks aren't scrambling right now. They're simply documenting what they already do.

The employers best positioned for this regulatory environment share common characteristics. They can explain exactly how their AI tools work. They maintain clear audit trails of AI-influenced decisions. They have published policies governing AI use in employment. And they've built human oversight into their processes from the start.

This isn't just about compliance. It's about building defensible hiring practices in an era where AI discrimination lawsuits are becoming more common. The ongoing collective action in Mobley v. Workday — where a federal court granted conditional certification of a nationwide class of potentially hundreds of millions of job applicants alleging Workday's AI screening tools had a disparate impact on workers over 40 — demonstrates that courts are willing to examine how AI tools contribute to discriminatory outcomes, and that AI vendors themselves may face liability.

The Virvell Approach

We built Virvell with governance in mind from day one. Not because we anticipated these specific laws, but because we believe AI in hiring requires accountability by design.

Our approach includes:

No candidate scoring or ranking. Our AI collects information and surfaces patterns. It doesn't assign numerical scores, generate hiring recommendations, or automatically advance or remove candidates. These decisions belong to humans.

Published AI acceptable use policy. Our policy is publicly available at virvell.ai/ai-acceptable-use. We document exactly what our AI does and doesn't do, and we invite scrutiny.

Cross-module intelligence with human review. When our system detects discrepancies across pre-screens, references, and background checks, it flags them for human review. It doesn't make the call.

Complete audit trails. Every AI-assisted data point is documented and traceable.

This approach isn't a limitation. Based on the regulatory direction states are taking, it's what responsible AI in hiring looks like.

What to Do Now

If you're using AI in any part of your hiring process, here's what to prioritize:

Audit your current tools. Do you know exactly how each AI system works? What data does it use? What decisions does it influence?

Document your processes. Even if your state doesn't require it yet, build the audit trail now. Colorado's requirements take effect in June, and other states are watching.

Review vendor relationships. Are your AI vendors transparent about how their tools work? Can they support compliance documentation? The Workday litigation shows that "the vendor handles it" is not a viable compliance strategy.

Implement human oversight. Ensure AI recommendations are reviewed by trained humans before they affect employment decisions. This is already best practice and is rapidly becoming a legal requirement.

Communicate with candidates. Build disclosure into your process now, before it's legally required everywhere.

The regulatory patchwork will likely expand. More states are considering AI employment legislation, and the tension between state action and federal pushback remains unresolved. The companies that treat governance as a feature, not a burden, will be ready regardless of what comes next.


See how governed AI screening works in practice

Virvell automates pre-screen interviews, reference checks, and background verification in one governed platform. Our AI collects information. Your team makes decisions.

Book a Demo

Virvell automates pre-screen interviews, reference checks, and background verification in one governed platform. Our AI collects information. Your team makes decisions. Learn more at virvell.ai.

About the Author

Julien Gagnier is the founder and CEO of Virvell, the conversational AI platform for talent teams. With 15+ years of HR leadership experience at Honda Canada, Later, Microart, and Unisys, Julien brings practitioner insight to the intersection of AI and hiring. He holds a CHRL designation and an MBA from Schulich School of Business.