Canadian and International AI Regulations

Below is a list of the key Canadian and international AI regulations, standards, and guidelines, each accompanied by a brief description and a link for more information:

Artificial Intelligence and Data Act (AIDA)

○ Description: Canada's proposed legislation aimed at promoting the responsible use of AI and protecting Canadians, forming part of Bill C-27.

○ Link: Artificial Intelligence and Data Act

Personal Information Protection and Electronic Documents Act (PIPEDA)

○ Description: Canada's federal privacy law governing how private-sector organizations handle personal information.

○ Link: PIPEDA

Office of the Privacy Commissioner of Canada (OPC) AI Guidelines

○ Description: Guidelines provided by the OPC on privacy and the responsible development and use of AI.

○ Link: OPC AI Guidelines

Canadian Centre for Cyber Security (CCCS) AI Security Guidance

○ Description: Security guidance for AI systems provided by Canada's authority on cybersecurity.

○ Link: CCCS AI Security Guidance

Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (2023)

○ Description: A code introduced by the Government of Canada to promote safe and responsible development of generative AI systems.

○ Link: Voluntary Code of Conduct

ISO/IEC 42001 – AI Governance Standard

○ Description: An international standard specifying requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system (AIMS) within organizations.

○ Link: ISO/IEC 42001

ISO/IEC 23894 – AI Risk Management Standard

○ Description: An international standard outlining risk management principles and guidelines specific to AI.

○ Link: ISO/IEC 23894

ISO/IEC 27001 – Information Security Management System (ISMS)

○ Description: A widely adopted international standard for information security management systems.

○ Link: ISO/IEC 27001

NIST AI Risk Management Framework (USA)

○ Description: A framework developed by the National Institute of Standards and Technology to manage risks associated with AI.

○ Link: NIST AI RMF

MITRE ATLAS – Adversarial Threat Landscape for AI Systems

○ Description: A knowledge base of adversary tactics, techniques, and real-world case studies targeting machine learning systems, modeled on the MITRE ATT&CK framework (an illustrative tagging sketch follows this entry).

○ Link: MITRE ATLAS
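
For teams mapping their own AI security or red-team findings onto ATLAS, a minimal sketch is shown below. It is an assumption-laden illustration: the tactic labels and technique IDs are placeholders (not verified ATLAS identifiers), and it does not use any ATLAS data files or API.

```python
# Minimal sketch: tagging AI security findings with ATLAS-style tactic and
# technique labels, then grouping them for reporting. The tactic labels and
# technique IDs below are placeholders, not verified ATLAS identifiers;
# look up current entries at https://atlas.mitre.org before relying on them.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Finding:
    title: str
    atlas_tactic: str      # placeholder tactic label
    atlas_technique: str   # placeholder technique ID
    severity: str = "medium"

findings = [
    Finding("Prompt injection via user-supplied document",
            atlas_tactic="Initial Access (placeholder)",
            atlas_technique="AML.TXXXX", severity="high"),
    Finding("Possible training-data poisoning through a public submission form",
            atlas_tactic="Resource Development (placeholder)",
            atlas_technique="AML.TXXXX"),
]

# Group findings by tactic so a report can mirror the ATLAS matrix layout.
by_tactic = defaultdict(list)
for finding in findings:
    by_tactic[finding.atlas_tactic].append(finding)

for tactic, items in by_tactic.items():
    print(tactic)
    for finding in items:
        print(f"  [{finding.severity}] {finding.title} -> {finding.atlas_technique}")
```

The value of this kind of mapping is simply that findings expressed in a shared taxonomy are easier to compare across teams, vendors, and assessments.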

SOC 2 Type II Certification (for AI Service Providers)

○ Description: An attestation standard for service organizations, developed by the AICPA and commonly described as a certification, that evaluates controls relevant to security, availability, processing integrity, confidentiality, and privacy over a defined review period.

○ Link: SOC 2 Type II Certification

OECD AI Principles

○ Description: Principles adopted by the Organisation for Economic Co-operation and Development to promote AI that is innovative and trustworthy and that respects human rights and democratic values.

○ Link: OECD AI Principles

EU AI Act

○ Description: The European Union's regulatory framework for AI, adopted in 2024, which takes a risk-based approach to ensuring AI systems are safe and respect fundamental rights and EU values.

○ Link: EU AI Act

Privacy Commissioner’s Principles for Responsible AI

○ Description: Principles set by Canada's Privacy Commissioner to guide the responsible development and use of AI.

○ Link: Privacy Commissioner's Principles

AI Impact Assessment (AIIA) Guidelines

○ Description: Guidelines provided by the Government of Canada for assessing the impact of AI systems.

○ Link: AI Impact Assessment Guidelines

AI Safety Guidelines – Canadian AI Safety Institute (CAISI)

○ Description: Guidelines developed by the Canadian AI Safety Institute to promote the safe development and deployment of AI systems.

○ Link: CAISI AI Safety Guidelines

NCSC UK – Guidelines for Secure AI System Development (2023)

○ Description: Guidelines from the UK's National Cyber Security Centre, published jointly with international partners, covering secure design, development, deployment, and operation of AI systems.

○ Link: NCSC UK AI Guidelines

Directive on Automated Decision-Making (DADM)

○ Description: A mandatory policy for federal institutions in Canada using automated decision systems. Requires the use of an algorithmic impact assessment and sets transparency and accountability expectations.

○ Link: Directive on Automated Decision-Making

Algorithmic Impact Assessment (AIA) Tool

○ Description: Canada's official tool for assessing the potential impact of an automated decision system used by federal institutions. It is required under the DADM, and the resulting impact level determines which of the directive's requirements apply (an illustrative scoring sketch follows this entry).

○ Link: Algorithmic Impact Assessment Tool
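
To make the questionnaire-to-impact-level idea concrete, here is a small illustrative sketch. The question names, weights, and thresholds are invented for demonstration; the authoritative scoring rules live in the AIA tool itself and in the DADM.

```python
# Illustrative sketch only: the real Algorithmic Impact Assessment is an
# official Government of Canada questionnaire, and its scoring rules come
# from the tool itself. The question names, weights, and thresholds below
# are hypothetical, used only to show how answers could map to a level.

def impact_level(risk_answers: dict, max_score: int) -> str:
    """Map a raw questionnaire score to a hypothetical impact level (I-IV)."""
    pct = sum(risk_answers.values()) / max_score
    if pct < 0.25:
        return "Level I"
    if pct < 0.50:
        return "Level II"
    if pct < 0.75:
        return "Level III"
    return "Level IV"

answers = {                                # hypothetical weighted answers
    "affects_rights_or_benefits": 4,
    "uses_personal_information": 3,
    "decision_is_fully_automated": 4,
    "serves_vulnerable_population": 2,
}
print(impact_level(answers, max_score=20))  # -> "Level III" under these assumptions
```

Under the DADM, higher impact levels attract stronger requirements, such as additional review and greater human involvement in decisions, which is why the assessment is completed before a system goes into production.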

ISO/IEC 38507 – Governance of IT and Implications of AI

○ Description: An international standard providing high-level guidance for organizations on how to manage AI within their existing IT governance structure.

○ Link: ISO/IEC 38507

Toronto Declaration – Protecting Human Rights in Machine Learning Systems

○ Description: A declaration launched in Toronto in 2018 by a coalition of human rights and technology organizations to ensure machine learning systems uphold international human rights standards, especially in sensitive use cases such as policing and hiring.

○ Link: Toronto Declaration

UNESCO Recommendation on the Ethics of Artificial Intelligence

○ Description: A global ethical framework adopted by over 190 member states, including Canada, focusing on dignity, transparency, sustainability, and fairness in AI.

○ Link: UNESCO AI Ethics

Canadian Human Rights Act (AI-related use)

○ Description: This federal law prohibits discrimination in services, housing, and employment, including discrimination arising from AI systems that reinforce bias or automate unfair treatment.

○ Link: Canadian Human Rights Act

CyberSecure Canada Certification

○ Description: A voluntary cybersecurity certification program by the Government of Canada to help SMEs demonstrate strong cyber hygiene and protect AI systems and data.

○ Link: CyberSecure Canada

GC Cloud Guardrails (Government of Canada Cloud Infrastructure Standards)

○ Description: Guidelines created for secure cloud use in Canadian federal institutions, including the hosting of sensitive data and AI models on cloud infrastructure (an illustrative configuration check follows this entry).

○ Link: GC Cloud Guardrails
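
As a rough illustration of how guardrail-style requirements can be checked automatically before deployment, here is a minimal sketch. The rule set, region names, and configuration keys are assumptions made for demonstration; they are not the official GC Cloud Guardrails.

```python
# Illustrative only: a simple pre-deployment check inspired by guardrail-style
# requirements (data residency, encryption at rest, strong access control).
# The rules, region names, and config keys are hypothetical examples and are
# not the official GC Cloud Guardrails.

CANADIAN_REGIONS = {"ca-central-1", "canadacentral", "canadaeast"}  # example region names

def check_deployment(config: dict) -> list:
    """Return a list of guardrail-style violations found in a deployment config."""
    problems = []
    if config.get("region") not in CANADIAN_REGIONS:
        problems.append("Data and AI models are not hosted in a Canadian region.")
    if not config.get("encryption_at_rest", False):
        problems.append("Encryption at rest is not enabled.")
    if not config.get("mfa_required", False):
        problems.append("Multi-factor authentication is not enforced for administrators.")
    return problems

config = {"region": "us-east-1", "encryption_at_rest": True, "mfa_required": False}
for issue in check_deployment(config):
    print("VIOLATION:", issue)
```

Checks like this do not replace the guardrails themselves; they simply make it harder to deploy an AI workload that obviously violates them.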

These resources provide detailed information on various aspects of AI governance, risk management, security, and ethical considerations, both within Canada and internationally.
