Quebec Law 25 AI Compliance

Complete 25-point compliance checklist for AI systems under Quebec’s privacy law

$25,000,000
Maximum penalty for Law 25 violations
What is Quebec Law 25?
Quebec’s Law 25 (formerly Bill 64, An Act to modernize legislative provisions as regards the protection of personal information) significantly amends the province’s Act respecting the protection of personal information in the private sector and is among the world’s strictest privacy regimes. It requires businesses using AI tools to obtain valid consent, provide transparency about automated decisions, and implement strong data protection measures. Penal offences can draw fines of up to $25M or 4% of worldwide turnover, whichever is greater.
Personal Information Protection
Get explicit consent for AI processing
Before putting any personal information (customer names, emails, purchase history) into AI tools like ChatGPT or Claude, you must obtain consent that is clear, free, and informed, and given for the specific purposes of that processing.
Example consent: “We use AI tools to analyze your purchase history to provide better product recommendations. Your data may be processed by third-party AI services. Do you consent?”
Risk: Critical
Penalty: $50K – $10M
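A minimal sketch of how to enforce this in code, assuming an in-memory CONSENT_STORE stands in for your real consent records and send_to_ai_tool is a hypothetical wrapper (not any vendor’s actual SDK): the gate refuses to send personal information to an AI service unless consent for that specific purpose is on file.

```python
# Consent gate: refuse to send personal information to an AI tool unless
# the individual has recorded consent for that specific purpose.
# CONSENT_STORE and send_to_ai_tool are hypothetical placeholders.

CONSENT_STORE = {
    # customer_id -> set of purposes the customer consented to
    "cust_001": {"product_recommendations"},
}

class ConsentError(Exception):
    """Raised when no valid consent exists for the requested purpose."""

def require_consent(customer_id: str, purpose: str) -> None:
    if purpose not in CONSENT_STORE.get(customer_id, set()):
        raise ConsentError(
            f"No recorded consent from {customer_id} for purpose '{purpose}'"
        )

def send_to_ai_tool(customer_id: str, purpose: str, payload: dict) -> None:
    require_consent(customer_id, purpose)  # gate runs before any API call
    # ... call the third-party AI service here ...
    print(f"Sending {list(payload)} for {customer_id} ({purpose})")

send_to_ai_tool("cust_001", "product_recommendations", {"categories": ["books"]})
```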
Follow data minimization for AI training
Only collect and use the minimum personal information necessary for your AI system’s specific purpose. Don’t feed extra customer data into AI tools “just in case.”
Good practice: Use only customer purchase categories for AI recommendations, not full purchase histories with personal details.
Risk: High
Penalty: $25K – $5M
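One way to make minimization mechanical rather than aspirational is an allow-list per purpose. A minimal sketch, with illustrative field names:

```python
# Allow-list minimization: only fields needed for the stated purpose
# survive; everything else (names, emails, addresses) is dropped.
ALLOWED_FIELDS = {
    "product_recommendations": {"customer_id", "purchase_categories"},
}

def minimize(record: dict, purpose: str) -> dict:
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "customer_id": "cust_001",
    "name": "Alice Tremblay",       # dropped
    "email": "alice@example.com",   # dropped
    "purchase_categories": ["books", "garden"],
}
print(minimize(record, "product_recommendations"))
# {'customer_id': 'cust_001', 'purchase_categories': ['books', 'garden']}
```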
Create data retention schedules
Decide how long you’ll keep personal information used in AI systems and AI-generated outputs. Write down clear timelines and follow them.
Example policy: “Delete customer data from AI tools after 90 days. Keep AI-generated marketing content for 2 years maximum.”
Risk: Medium
Penalty: $15K – $1M
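A retention schedule is easiest to follow when it is data a deletion job reads, not just prose in a policy document. A minimal sketch, with illustrative purposes and ages:

```python
from datetime import datetime, timedelta, timezone

# Retention schedule as data: purpose -> maximum age of the record.
RETENTION = {
    "ai_tool_inputs": timedelta(days=90),
    "ai_generated_marketing": timedelta(days=730),  # ~2 years
}

def expired(created_at: datetime, purpose: str) -> bool:
    return datetime.now(timezone.utc) - created_at > RETENTION[purpose]

records = [
    {"id": 1, "purpose": "ai_tool_inputs",
     "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
to_delete = [r for r in records if expired(r["created_at"], r["purpose"])]
print(f"{len(to_delete)} record(s) past retention; delete and log the deletion.")
```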
Update privacy notices for AI
Your privacy policy must specifically mention AI processing, what AI tools you use, and how personal information is used for automated decision-making or profiling.
Must include: “We use AI tools including [tool names] to process your personal information for [specific purposes]. These tools may create profiles or make automated decisions affecting you.”
Risk: High
Penalty: $25K – $3M
Document legitimate interests
If you can’t get consent, the exceptions are narrow: Quebec law requires a serious and legitimate purpose for collecting personal information, and it allows use without consent only in limited cases (such as fraud prevention or purposes consistent with the original collection). Unlike the GDPR, Law 25 has no general “legitimate interests” basis, so document exactly which exception you rely on and why it justifies overriding individual privacy interests.
Example documentation: “We use AI for fraud detection because preventing financial crime protects all customers and is required by law.”
Risk: Medium
Build privacy into AI systems
Design privacy protections into your AI workflows from the start, not as an afterthought. Use privacy-preserving techniques like data anonymization where possible.
Privacy by design: Remove names and addresses before feeding customer data into AI tools. Use customer IDs instead of names.
Risk: Medium
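A hedged sketch of the IDs-instead-of-names idea: a keyed hash turns names into stable pseudonyms before records reach an AI tool. Note that this is pseudonymization, not full anonymization, so the output is still personal information under Law 25; the salt value shown is illustrative only.

```python
import hashlib

# Pseudonymize direct identifiers before data reaches an AI tool: replace
# names with a stable, keyed token so records can still be linked
# internally but are not directly identifying downstream.
SECRET_SALT = b"rotate-me-and-store-in-a-secrets-manager"  # illustrative

def pseudonym(value: str) -> str:
    return hashlib.sha256(SECRET_SALT + value.encode()).hexdigest()[:12]

def scrub(record: dict) -> dict:
    out = dict(record)
    out["customer"] = pseudonym(out.pop("name"))
    out.pop("email", None)    # drop identifiers the AI task doesn't need
    out.pop("address", None)
    return out

print(scrub({"name": "Alice Tremblay", "email": "a@example.com",
             "purchase_categories": ["books"]}))
```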
Automated Decision Making
Require human review for AI decisions
If AI tools help make decisions that significantly affect people (hiring, loans, insurance, pricing), a human should review and approve each decision. Law 25 does not ban fully automated decisions outright, but when a decision is based exclusively on automated processing you must inform the person, and on request they can submit observations to a member of your staff who is able to review the decision.
Required process: AI tool screens job applications → Human reviewer makes final hiring decision → Document both AI recommendation and human reasoning.
Risk: Critical
In force: September 22, 2023
Penalty: $100K – $25M
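A minimal sketch of that required process as a data structure: the AI output is only advisory, and the record cannot be finalized without a named human reviewer and documented reasoning. All names here are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Human-in-the-loop gate: an AI recommendation stays advisory until a
# named human reviewer records a final decision and their reasoning.
@dataclass
class ScreeningDecision:
    applicant_id: str
    ai_recommendation: str          # e.g. "advance" / "reject"
    ai_model_version: str
    human_reviewer: str | None = None
    final_decision: str | None = None
    human_reasoning: str | None = None
    decided_at: datetime | None = None

    def finalize(self, reviewer: str, decision: str, reasoning: str) -> None:
        if not reasoning.strip():
            raise ValueError("Human reasoning must be documented")
        self.human_reviewer = reviewer
        self.final_decision = decision
        self.human_reasoning = reasoning
        self.decided_at = datetime.now(timezone.utc)

d = ScreeningDecision("app_42", "advance", "screener-v3")
d.finalize("hr_reviewer_7", "advance",
           "Meets posted qualifications; AI flags reviewed and confirmed.")
```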
Provide AI decision explanations
When someone asks how an AI-assisted decision affecting them was made, you must explain the logic, criteria, and significance of the automated processing in plain language.
Example explanation: “Our AI considered your credit history, income, and debt ratio. It flagged concerns about recent missed payments, which our human reviewer confirmed when declining your loan application.”
Risk: Critical
Penalty: $75K – $15M
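Explanations are much easier to produce on demand if decision factors are logged in structured form and rendered to plain language when someone asks. A sketch, with illustrative factor names and wording:

```python
# Turn logged decision factors into a plain-language explanation.
# Factor names and phrasing are illustrative, not legal advice.
FACTOR_TEXT = {
    "credit_history": "your credit history",
    "income": "your declared income",
    "debt_ratio": "your debt-to-income ratio",
}

def explain(decision: dict) -> str:
    factors = ", ".join(FACTOR_TEXT[f] for f in decision["factors"])
    return (
        f"Our AI considered {factors}. "
        f"It flagged: {decision['flag']}. "
        f"A human reviewer ({decision['reviewer']}) made the final decision: "
        f"{decision['outcome']}."
    )

print(explain({
    "factors": ["credit_history", "income", "debt_ratio"],
    "flag": "recent missed payments",
    "reviewer": "loan officer",
    "outcome": "declined",
}))
```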
Allow opt-out from AI decisions
Give people clear options to opt out of automated decision-making processes and request human-only review of decisions affecting them.
Opt-out option: “You can request that your application be reviewed by a human without AI assistance. Contact us at [email] to request human-only review.”
Risk: High
Penalty: $50K – $8M
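A minimal routing sketch: check the opt-out list before any AI step runs, so opted-out individuals never enter the AI-assisted path at all. OPT_OUTS stands in for wherever you persist opt-out requests.

```python
# Route opted-out individuals to a human-only queue before any AI step.
OPT_OUTS = {"cust_204"}

def route_application(customer_id: str, application: dict) -> str:
    if customer_id in OPT_OUTS:
        return "human_only_queue"   # no AI screening at all
    return "ai_assisted_queue"      # AI screens, human still decides

print(route_application("cust_204", {}))  # human_only_queue
print(route_application("cust_001", {}))  # ai_assisted_queue
```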
Document AI decision criteria
Keep detailed records of what factors your AI systems consider, how decisions are made, and what consequences those decisions have for individuals.
Document: AI model version, input data types, decision thresholds, human review process, appeals procedure.
Risk: Medium
Test for and fix AI bias
Regularly test your AI systems for discriminatory bias against protected groups (race, gender, age, etc.) and implement fixes when bias is detected.
Testing process: Analyze AI decisions by demographic groups. If one group is unfairly rejected more often, adjust the AI model or add human oversight.
Risk: High
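One common heuristic for the group-level analysis described above (an assumption here, not something Law 25 prescribes) is the four-fifths rule from US employment practice: flag any group whose selection rate falls below 80% of the highest group’s rate. A sketch with toy data:

```python
from collections import Counter

# Disparate-impact check using the four-fifths heuristic: flag a group
# whose selection rate is below 80% of the best-treated group's rate.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = Counter(), Counter()
for group, chosen in decisions:
    totals[group] += 1
    selected[group] += chosen

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```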
Security and Governance
Complete AI privacy impact assessments
Before implementing AI systems that process personal information, conduct formal Privacy Impact Assessments (PIAs) to identify and mitigate privacy risks.
PIA includes: What personal data is processed, privacy risks, security measures, legal compliance, mitigation strategies.
Risk: High
Penalty: $40K – $5M
Implement AI security controls
Deploy technical and administrative safeguards to protect personal information processed by AI systems from unauthorized access, use, or disclosure.
Security measures: Encryption, access controls, secure API connections, audit logs, employee training, incident response plans.
Risk: High
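As one concrete example of these safeguards, a sketch of encrypting personal information at rest before it is stored or queued for AI processing, using the third-party cryptography package (pip install cryptography). In production the key would come from a key manager, not be generated inline.

```python
# Encrypt personal information at rest before storing or queuing it
# for AI processing. Requires the third-party `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustrative; load from a key manager
fernet = Fernet(key)

payload = b'{"customer_id": "cust_001", "notes": "support transcript"}'
token = fernet.encrypt(payload)       # ciphertext safe to store
restored = fernet.decrypt(token)      # decrypt only inside trusted code
assert restored == payload
```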
Create AI incident response procedures
Develop specific procedures for handling AI-related privacy breaches and security incidents, including notification requirements to authorities and affected individuals.
Response plan: Incident detection, containment, investigation, notification timeline, remediation steps, prevention measures.
Risk: Medium
Ensure vendor AI compliance
Any AI service provider (OpenAI, Anthropic, Google, etc.) that processes personal information on your behalf must be bound by a written agreement requiring protection equivalent to Law 25.
Contract requirements: Data Processing Agreements (DPAs), adequate protection for data processed outside Quebec, security standards, breach notification, audit rights.
Risk: High
Document AI governance structure
Maintain comprehensive documentation of how AI systems are governed, who makes decisions, and how accountability is ensured across your organization.
Governance docs: AI committee structure, decision-making authority, oversight processes, escalation procedures, performance monitoring.
Risk: Medium
Use data anonymization techniques
Where possible, use anonymization or pseudonymization techniques for AI training data to reduce privacy risks while maintaining data utility.
Techniques: Replace names with IDs, aggregate data, remove direct identifiers, use differential privacy, synthetic data generation.
Risk: Low
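A sketch of generalization, one of the techniques listed above: coarsen quasi-identifiers such as exact age and full postal code into buckets before AI use, so individual records are harder to re-identify. Field choices are illustrative.

```python
# Generalization: replace precise quasi-identifiers with coarse buckets.
def generalize(record: dict) -> dict:
    age = record["age"]
    return {
        "age_band": f"{(age // 10) * 10}-{(age // 10) * 10 + 9}",
        "region": record["postal_code"][:3],  # forward sortation area only
        "purchase_categories": record["purchase_categories"],
    }

print(generalize({"age": 34, "postal_code": "H2X 1Y4",
                  "purchase_categories": ["books"]}))
# {'age_band': '30-39', 'region': 'H2X', 'purchase_categories': ['books']}
```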
Individual Rights and Compliance
Enable data portability
Provide mechanisms for individuals to obtain their personal information processed by AI systems in a structured, commonly used, machine-readable format.
Data export: Customer profile data, AI-generated insights about them, decision history, in formats like CSV, JSON, or PDF.
Risk: Medium
Penalty: $30K – $2M
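A minimal export sketch: gather everything held about one person into JSON, the kind of structured, machine-readable bundle the portability right calls for. The in-memory data sources are illustrative stand-ins for your real stores.

```python
import json

# Export everything held about one person in machine-readable form.
def export_subject_data(customer_id: str, profiles: dict,
                        ai_insights: dict, decisions: dict) -> str:
    bundle = {
        "customer_id": customer_id,
        "profile": profiles.get(customer_id, {}),
        "ai_generated_insights": ai_insights.get(customer_id, []),
        "decision_history": decisions.get(customer_id, []),
    }
    return json.dumps(bundle, indent=2, ensure_ascii=False)

print(export_subject_data(
    "cust_001",
    profiles={"cust_001": {"preferred_language": "fr"}},
    ai_insights={"cust_001": [{"segment": "garden enthusiast"}]},
    decisions={"cust_001": [{"type": "credit", "outcome": "approved"}]},
))
```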
Allow AI data corrections
Establish clear processes for individuals to request corrections to personal information used in AI systems and training datasets.
Correction process: Online form, verification of identity, investigation of accuracy, update AI training data, notification of changes made.
Risk: Medium
Implement data erasure capabilities
Build the technical capability to delete individual records from AI training datasets when legally required (right to be forgotten).
Technical solution: Database tracking, model retraining procedures, verification of deletion, documentation of erasure completion.
Risk: High
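A hedged sketch of that erasure workflow: delete the record, keep a tombstone containing no personal data as documentation that deletion happened, and queue affected models for retraining. The in-memory stores are illustrative.

```python
from datetime import datetime, timezone

# Erasure sketch: delete the record, leave a personal-data-free tombstone
# proving deletion, and queue affected models for retraining.
records = {"cust_001": {"name": "Alice Tremblay"}}
tombstones = {}
retraining_queue = []

def erase(customer_id: str) -> None:
    records.pop(customer_id, None)
    tombstones[customer_id] = {
        "erased_at": datetime.now(timezone.utc).isoformat(),
        "verified": customer_id not in records,
    }
    retraining_queue.append(customer_id)  # purge from training sets too

erase("cust_001")
print(tombstones["cust_001"])   # documentation of erasure completion
```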
Handle AI-related complaints
Create clear, accessible procedures for handling complaints related to AI processing of personal information, including decision appeals.
Complaint system: Dedicated contact info, response timeline, investigation process, resolution options, escalation to regulators.
Risk: Low
Maintain AI processing records
Keep detailed records of all AI processing activities involving personal information, including purposes, data sources, recipients, and retention periods.
Record keeping: Processing register, data flow maps, legal basis documentation, retention schedules, transfer records.
Risk: Medium
Assign AI privacy officer duties
Designate someone responsible for AI privacy governance, compliance oversight, breach response, and serving as the contact point for regulators.
Responsibilities: Monitor AI compliance, conduct privacy training, investigate breaches, update policies, liaise with authorities.
Risk: Low
Conduct regular compliance audits
Schedule periodic internal audits to assess ongoing compliance with Law 25 requirements for all AI systems and processes.
Audit checklist: Consent mechanisms, data minimization, security controls, individual rights, vendor compliance, incident response.
Risk: Low
Secure cross-border data transfers
Law 25 requires a privacy impact assessment before communicating personal information outside Quebec, including to US-based AI services, and the information must receive adequate protection in the destination jurisdiction.
Protection measures: transfer-specific privacy impact assessments, contractual safeguards with the recipient, security commitments, limits on onward transfers.
Risk: High
Penalty: $35K – $4M
Assess societal AI impacts
Evaluate broader societal and ethical impacts of your AI systems beyond privacy, including fairness, transparency, and potential for discrimination.
Impact assessment: Algorithmic fairness testing, bias audits, stakeholder consultation, ethical review, mitigation strategies.
Risk: Medium
Penalty: $20K – $1.5M