As of January 1, 2026, Ontario employers with 25 or more employees face new requirements for job postings and hiring practices. Among them: mandatory disclosure when artificial intelligence is used to screen, assess, or select candidates.
This makes Ontario the first Canadian province to legislate AI transparency in hiring, joining a growing list of US jurisdictions implementing similar rules. For talent acquisition teams, the question isn't whether to disclose AI use anymore. It's how to build processes that make disclosure meaningful.
What the Law Requires
The changes come from Ontario's Working for Workers Four Act (Bill 149), which amends the Employment Standards Act. The AI disclosure requirement is part of a broader package of job posting reforms — introduced across Bills 149 and 190 — that also includes pay transparency, vacancy status disclosure, and a prohibition on requiring "Canadian experience."
For AI specifically, the law requires:
Disclosure in job postings. Any publicly advertised job posting must include a statement indicating whether AI is used to screen, assess, or select applicants. The disclosure must appear in the posting itself — but notably, the statute does not extend this requirement to associated application forms. (By contrast, the Canadian experience prohibition explicitly covers both postings and application forms, indicating this was a deliberate legislative choice.)
Broad definition of AI. The ESA regulations (O. Reg. 476/24) define AI as "a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments."
This definition is intentionally broad. It captures not just obvious AI tools like chatbots or automated screening systems, but potentially any algorithmic system that influences hiring decisions, including applicant tracking systems with ranking features, automated resume parsers, or assessment tools that generate scores.
Who's Covered
The requirements apply to employers with 25 or more employees at the time a job is publicly posted. They apply to "publicly advertised" postings, meaning external job listings advertised to the general public. Excluded are internal postings limited to existing employees, general recruitment campaigns not tied to a specific position, and postings for work performed outside Ontario.
If you're posting jobs externally in Ontario and you have 25+ employees, you need to comply.
What Disclosure Looks Like
The law requires disclosure, not a technical explanation. Ontario's official ESA guide confirms that employers need not provide a detailed description of the AI system or how it's used. It is enough to state that AI is used to screen, assess, or select applicants.
Employment law firm SpringLaw suggests language like: "We use AI-enabled tools to sort applications based on job-related criteria. A human decides who moves forward." Clear, accurate, and compliant.
The practical challenge is knowing what counts as AI in your hiring stack. If your ATS automatically ranks candidates, that's likely AI. If your pre-screening tool scores applicants, that's AI. If you're using any system that makes predictions or recommendations about candidates, the safest assumption is that disclosure is required.
The Broader Context
Ontario's AI disclosure requirement reflects a growing trend toward transparency in algorithmic hiring.
In the US, Illinois implemented an AI hiring law on January 1, 2026 that amends its Human Rights Act to prohibit AI-powered discrimination across the employment lifecycle and requires employer notification. Colorado's comprehensive AI law, which includes annual impact assessments and risk management requirements, takes effect June 30, 2026. California's Civil Rights Council issued regulations clarifying that existing anti-discrimination law applies to automated decision systems in employment, effective October 2025. And New York City has required bias audits for automated employment decision tools since 2023.
The pattern is clear: regulators increasingly expect employers to be transparent about how AI influences hiring decisions. Ontario is now part of that movement.
What's notable about Ontario's approach is its simplicity. Unlike Colorado's law, which requires impact assessments, risk management policies, consumer notification, and appeal mechanisms, Ontario focuses purely on disclosure. The obligation is straightforward: tell candidates if AI is involved.
This creates a relatively low compliance burden, but it also raises the bar for what candidates and regulators will expect. Once you've disclosed that AI is part of your process, questions about how that AI works, what it does, and whether it's fair become legitimate.
Beyond Compliance: What Disclosure Signals
Mandatory disclosure creates a market signal. Candidates will now see which employers use AI in hiring and which don't. Over time, they'll form opinions about what AI use means: a sign of efficiency and modernity, or a cause for concern about algorithmic bias and impersonal treatment.
This means disclosure isn't just a compliance checkbox. It's a statement about your hiring process that candidates will interpret.
Organizations with thoughtful AI governance can use disclosure as a positive signal. "Yes, we use AI, and here's how we ensure it's fair, transparent, and human-supervised." That's a different message than "Yes, we use AI" with no further context.
The employers who will benefit most from transparency requirements are those who can explain not just that they use AI, but why and how.
What About Enforcement?
Bill 149 does not create standalone penalties for AI disclosure violations. Instead, the new posting requirements are enforced through the existing ESA framework administered by the Ontario Ministry of Labour.
Penalties escalate across tiers: administrative monetary penalties start at $250 per contravention for a first offence, rise to $500 for a second, and reach up to $5,000 for a third or subsequent contravention. Under the Provincial Offences Act, set fines of $295 plus surcharges can apply. For prosecuted offences, fines can reach $100,000 for individuals and $500,000 for corporations on a third or subsequent conviction.
No public guidance yet exists on how aggressively the Ministry will enforce the new job posting requirements, but the penalty framework is real.
What Ontario Employers Should Do Now
If you haven't already, here's what to prioritize:
Audit your hiring tools. Identify every system that might qualify as AI under the ESA's broad definition. This includes ATS features, screening tools, assessment platforms, and any system that scores, ranks, or makes recommendations about candidates.
Update job posting templates. Add standard disclosure language to all publicly advertised positions. Keep it simple and accurate.
Train your team. Hiring managers and recruiters should understand what AI tools are being used and be prepared to answer candidate questions.
Retain your records. The ESA requires employers to retain copies of every publicly advertised job posting and any associated application form for three years after the posting is removed from public access.
Document your processes. While Ontario doesn't require the detailed documentation that Colorado does, building an audit trail now positions you well as regulations evolve.
Consider your governance. Disclosure is the minimum requirement. Organizations that can explain their AI governance (what the AI does, what it doesn't do, and how humans remain in the decision loop) will be better positioned as candidate expectations increase.
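The audit and retention steps above can be sketched as a simple inventory check. This is an illustrative sketch only, not legal advice: the tool names, field names, and the `needs_disclosure` heuristic are assumptions I've introduced; only the three-year retention period and the screen/assess/select language come from the ESA and O. Reg. 476/24 as described above.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class HiringTool:
    """Hypothetical inventory entry for one system in the hiring stack."""
    name: str
    generates_outputs: bool  # produces predictions, scores, rankings, or recommendations
    used_for: set            # subset of {"screen", "assess", "select"}


def needs_disclosure(tool: HiringTool) -> bool:
    """Conservative reading of the O. Reg. 476/24 definition: flag any system
    that infers from input to generate outputs influencing screening,
    assessment, or selection."""
    return tool.generates_outputs and bool(tool.used_for & {"screen", "assess", "select"})


def retention_deadline(posting_removed: date) -> date:
    """ESA record retention: keep posting copies for three years after the
    posting is removed from public access. (Leap-day edge case ignored
    in this sketch.)"""
    return posting_removed.replace(year=posting_removed.year + 3)


# Example inventory audit over two illustrative tools.
tools = [
    HiringTool("ATS ranking feature", generates_outputs=True, used_for={"screen"}),
    HiringTool("Calendar scheduler", generates_outputs=False, used_for=set()),
]
flagged = [t.name for t in tools if needs_disclosure(t)]
```

Under this conservative heuristic, the ATS ranking feature is flagged for disclosure while the scheduler is not; when in doubt, the article's own advice applies: assume disclosure is required.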
The Virvell Approach
We built Virvell with transparency as a core principle, before these regulations existed.
Our AI Acceptable Use Policy is publicly available at virvell.ai/ai-acceptable-use. It documents exactly what our AI does and doesn't do:
What our AI does: Conducts conversational pre-screen interviews. Facilitates voice-based reference checks. Collects, organizes, and compares information across screening modules. Flags discrepancies for human review.
What our AI doesn't do: Score or rank candidates. Generate hiring recommendations. Make employment decisions. Automatically advance or reject applicants.
This approach means our customers can make disclosure statements with confidence. They know exactly how AI is used in their screening process, because we've documented it transparently.
When candidates ask follow-up questions (and increasingly they will), our customers can explain that AI collects information but humans make decisions. That's a defensible position in any regulatory environment.
The Direction of Travel
Ontario's disclosure requirement is likely the beginning, not the end, of AI hiring regulation in Canada.
At the federal level, the most significant legislative effort, the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, died on the order paper in January 2025 when Parliament was prorogued. Canada currently has no federal AI-specific law governing the private sector, though the federal government has taken non-binding steps: the Treasury Board's Directive on Automated Decision-Making (covering government use since 2019), a Voluntary Code of Conduct on Generative AI with over 55 signatories, and the creation of the Canadian AI Safety Institute. A new Ministry of Artificial Intelligence and Digital Innovation was established in May 2025.
Quebec's Law 25 also requires organizations to inform individuals when decisions are made exclusively through automated processing, though this is narrower and privacy-based rather than a hiring-specific transparency requirement.
Other provinces may follow Ontario's lead. And as AI becomes more prevalent in hiring, public pressure for transparency and accountability will increase.
Organizations that build AI governance now, not just for compliance but as a genuine commitment to fair and transparent hiring, will be better positioned regardless of how regulations evolve.
Disclosure is the minimum. Governance is the advantage.
See how governed AI screening works in practice
Virvell automates pre-screen interviews, reference checks, and background verification in one governed platform. Our AI collects information. Your team makes decisions.
Book a Demo. Learn more at virvell.ai.