Framework / Guideline | Does Canada Follow or Align? | Notes |
---|---|---|
Bill C-27 (AIDA / CPPA) | In Progress | Canada's proposed AI and privacy legislation. AIDA would govern high-impact AI systems; the CPPA would replace PIPEDA. |
PIPEDA (Personal Information Protection and Electronic Documents Act) | Yes | Current federal privacy law still in force. Governs how private-sector organizations collect, use, and disclose personal information. |
OPC AI Guidelines (Office of the Privacy Commissioner of Canada) | Yes | Official guidance on privacy, automated decision-making, AI transparency, and fairness. |
CCCS AI Security Guidelines (incl. ITSAP.00.041 – Generative AI Security) | Yes | National cybersecurity recommendations for safe AI development, including GenAI-specific guidance. |
Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (2023) | Yes | Introduced by the Government of Canada and signed by major tech companies. Promotes safe, transparent, and responsible GenAI. |
ISO/IEC 42001 – AI Management System Standard | Yes | Canada supports this international governance standard via the Standards Council of Canada. Establishes management systems for AI. |
ISO/IEC 23894 – AI Risk Management Standard | Yes | Global best-practice standard for AI risk identification, mitigation, and accountability. |
ISO/IEC 27001 – Information Security Management System (ISMS) | Yes | Widely adopted in Canada. Supports secure AI and data practices. Often required for compliance in public and private sectors. |
NIST AI Risk Management Framework (USA) | Alignment Only | U.S.-based framework, not Canadian law, but widely referenced in Canada for responsible AI risk management. |
MITRE ATLAS – Adversarial Threat Landscape for AI Systems | Alignment Only | Global knowledge base from MITRE used to understand and defend against threats to AI systems. Not Canadian, but widely referenced. |
SOC 2 Type II Certification (for AI Service Providers) | Voluntary Alignment | U.S.-based audit framework, often pursued by Canadian AI/tech providers for trust and compliance signaling. |
OECD AI Principles | Yes | Canada is a formal signatory (2019). These guide national AI ethics, fairness, transparency, and safety. |
EU AI Act (European Union AI Risk Framework) | Reference Only | Not applicable in Canada but referenced by many Canadian firms for international compliance. |
Privacy Commissioner’s Principles for Responsible AI | Yes | Core Canadian guidelines promoting fairness, transparency, privacy, and accountability in AI systems. |
AI Impact Assessment (AIIA) Guidelines | Yes | Recommended by the Canadian federal government to assess the ethical, privacy, and social impact of AI systems. |
AI Safety Guidelines – CAISI (Canadian AI Safety Institute) | Yes | Canadian-specific guidelines for secure and responsible AI system development. Align with global best practices. |
NCSC UK – Guidelines for Secure AI System Development (2023) | Yes | Canada is a co-signer. Promotes secure-by-design development principles for AI systems. |
Directive on Automated Decision-Making (DADM) | Yes (Gov’t use) | Mandatory for federal AI systems. Helps assess risk. |
AIA Tool (Algorithmic Impact Assessment) | Yes | Public sector AI risk scoring tool. Required by DADM. |
ISO/IEC 38507 – Governance of IT & AI | Yes | Helps boards and executives govern AI within IT structures. |
Toronto Declaration (2018) | Alignment | Launched at RightsCon Toronto by Amnesty International and Access Now. Protects human rights in machine learning systems; a strong ethics tool. |
UNESCO Recommendation on the Ethics of AI (2021) | Yes | International guidance adopted by UNESCO member states, including Canada. |
Canadian Human Rights Act (AI-related) | Yes | Protects individuals from discrimination caused by automated systems. |
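The AIA tool listed above works by turning questionnaire answers into a raw impact score, which is then mapped to one of four impact levels (I–IV) that determine the mitigation requirements under the DADM. A minimal sketch of that score-to-level mapping is shown below; the function name and the percentage thresholds are illustrative assumptions, not the official AIA scoring rubric.

```python
# Hypothetical sketch of an AIA-style impact-level mapping.
# Thresholds are illustrative; the official AIA rubric defines its own cutoffs.

def impact_level(raw_score: int, max_score: int) -> str:
    """Map a questionnaire-derived raw score to an impact level (I-IV)."""
    pct = raw_score / max_score
    if pct < 0.25:
        return "Level I"    # little to no impact: minimal requirements
    elif pct < 0.50:
        return "Level II"   # moderate impact
    elif pct < 0.75:
        return "Level III"  # high impact
    return "Level IV"       # very high impact: strictest requirements

print(impact_level(30, 100))  # a 30% score falls in Level II here
```

In the real tool, higher impact levels trigger progressively stronger obligations (e.g. peer review, human-in-the-loop decisions, and public notice), which is why the level assignment, rather than the raw score itself, is what the Directive acts on.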