Privacy Update

By Esther Shainblum and Cameron A. Axford

Jan 2024 Charity & NFP Law Update
Published on January 31, 2024



Privacy Commissioners Announce Principles for Development and Use of Generative AI

On December 7, 2023, Canada’s federal, provincial and territorial privacy commissioners announced new principles (“Principles”) for the responsible development and use of generative artificial intelligence (“AI”).

The Principles are intended to address the potential risks of this new technology by helping developers/providers and organizations using generative AI to apply Canadian privacy principles and to ensure the fairness of their systems. The use of AI can amplify bias, thus resulting in discriminatory outcomes, and can also expose children to harm. Therefore, the Principles are intended to help developers and organizations using AI to mitigate the risks to vulnerable populations through protective measures such as privacy impact assessments.

The Principles largely track the ten fair information principles discussed frequently in this publication, and are as follows:

1. Legal Authority and Consent

Ensure legal authority for collecting and using personal information; when consent is the legal authority, it should be valid and meaningful.

The generation or inference of identifiable information by a generative AI system will be considered a collection of personal information, which would require legal authority. Consent to the collection and use of personal information should be valid, meaningful and documented.

If personal information is obtained from third parties, one must ensure that the third parties collected it lawfully and have appropriate authority to disclose it. In sensitive contexts like healthcare, consent may be inadequate and privacy and ethics may also need to be considered, under independent oversight.

2. Appropriate Purposes

Collection, use and disclosure of personal information should only be for appropriate purposes, i.e. reasons that a reasonable person would consider appropriate in the circumstances.

Responsible use of personal information in generative AI involves aligning with appropriate purposes and avoiding “no-go zones” that lead to unfair or unethical outcomes. Developers should conduct adversarial testing to identify and mitigate unintended inappropriate uses. Organizations using generative AI must comply with privacy laws and monitor for inappropriate uses or biased outcomes. The emphasis is on avoiding unlawful collection, unfair profiling, and activities causing harm, with a commitment to cease the activity of any generative AI system found to be in violation. Adherence to this Principle supports the ethical and lawful deployment of generative AI systems and safeguards against potential risks and discriminatory practices.

3. Necessity and Proportionality

Establish the necessity and proportionality of using generative AI, and personal information within generative AI systems, to achieve intended purposes.

Responsible use of generative AI involves establishing the necessity and proportionality of using personal information to achieve intended purposes. Preference should be given to using anonymized, de-identified or synthetic data when personal information is not required. Organizations must assess the validity and reliability of the generative AI, as well as its necessity and effectiveness, across its lifecycle, and should consider alternative privacy-protective technologies. This Principle safeguards against unnecessary use of personal information and promotes responsible practices, the protection of privacy, and the exploration of alternatives to generative AI.

4. Openness

Be open and transparent about the collection, use and disclosure of personal information and the potential risks to individuals’ privacy.

Ensuring transparency in generative AI use requires clear communication about personal information throughout the system’s lifecycle. All parties should specify the what, how, when, and why of data use, providing understandable information to the intended audience before, during, and after system use, and should inform users of public-facing tools about privacy risks and mitigations. The Principles stress open communication, promoting understanding, and ensuring informed use of generative AI systems.

5. Accountability

Establish accountability for compliance with privacy legislation and principles and make AI tools explainable.

Developers and users are responsible for compliance with privacy legislation and should be able to demonstrate compliance. This Principle requires a clearly defined governance structure for privacy compliance, a mechanism to receive and respond to privacy complaints, and the use of privacy impact assessments, auditing and vulnerability testing to mitigate potential impacts of the AI on privacy and other fundamental rights. Developers should be able to explain how a system works and provide a rationale for how outputs are derived. If this is not possible, the use of AI may not be appropriate.

6. Individual Access

Facilitate individuals’ right to access their personal information by developing procedures that enable it to be meaningfully exercised.

Upholding individuals’ right to access personal information in generative AI requires establishing procedures for the meaningful exercise of this right. Processes should allow access to and correction of information collected during system use, and mechanisms for accessing or correcting personal information in AI models are crucial. Organizations using generative AI, especially in decision-making, can facilitate transparency and accountability by maintaining records that allow them to fulfill requests for access to information related to those decisions.

7. Limiting Collection, Use, and Disclosure

Limit the collection, use, and disclosure of personal information to only what is needed to fulfill the explicitly specified and appropriate identified purpose.

The collection and use of personal information for AI tools should be limited to what is necessary for the intended purpose and anonymized or de-identified data should be used as much as possible.

Users should also establish appropriate retention schedules for personal information, avoid function creep by using personal information only for the purpose for which it was collected, and avoid indiscriminate collection of personal information.

8. Accuracy

Personal information must be as accurate, complete and up-to-date as necessary for the purposes for which it is to be used.

This Principle emphasizes accuracy to ensure responsible and effective use of generative AI.

Any personal information used to train generative AI models should be as accurate as necessary for the purposes for which it is used, and should be updated when information becomes out of date or inaccurate. Users should take reasonable steps to ensure that any outputs from a generative AI tool are as accurate as necessary for the purpose, especially if those outputs will be used to make or assist in decisions about an individual or individuals, will be used in high-risk contexts, or will be released publicly.

Accuracy issues may render a generative AI system inappropriate, particularly in contexts with significant impacts on individuals.

9. Safeguards

Establish safeguards to protect personal information and mitigate potential privacy risks.

This Principle underscores the importance of safeguarding personal information, being aware of potential threats, and ensuring responsible and secure use of generative AI systems.

Users of generative AI must protect personal information by implementing safeguards appropriate to the sensitivity of the data throughout the tool's lifecycle and by being aware of and mitigating possible threats to the data.

Products and services should be designed to prevent inappropriate use of AI as well as the creation of illegal or harmful content. Users must monitor use of the AI to detect and prevent inappropriate uses and threats.

As AI becomes more integrated into modern workplaces and social settings, individuals and organizations will need to be aware of the legal implications of using these new technologies, whether as a provider or a client.

Ontario Introduces New Administrative Monetary Penalties for Mishandling of Personal Health Information 

In Ontario, privacy in the health care sector is governed by the Personal Health Information Protection Act, 2004 (“PHIPA”). PHIPA applies to all health information custodians (“HICs”) in the province, including health care providers, clinics, institutions such as hospitals, long-term care and retirement homes, pharmacies, laboratories, and other persons who have custody or control of personal health information (“PHI”) as a result of or in connection with performing their duties.

As of January 1, 2024, Section 61.1 of PHIPA and its accompanying regulation [O. Reg. 329/04, s. 35] came into force. They allow the Information and Privacy Commissioner of Ontario (“IPC”) to impose administrative monetary penalties (“AMPs”) on organizations or individuals who violate PHIPA or its regulations. According to the IPC’s guidance on AMPs (the “Guidance”), the ability to impose AMPs will give the IPC greater flexibility, and AMPs are part of a toolkit of escalating actions and interventions that it can use to address contraventions of PHIPA.

Up to now, the IPC would have needed to refer offences to the Attorney General of Ontario for prosecution and the imposition of fines. These new provisions allow the IPC to impose AMPs directly.

According to the Guidance, the IPC will take a measured and proportionate approach to each contravention and the AMPs will not be used in “cases involving unintentional errors or one-off mistakes… provided there is evidence of prompt and reasonable corrective action being taken upon discovery of the error”. AMPs are to be used for intentional, malfeasant actions, such as snooping into patient records, contraventions for economic gain or deliberate violations of an individual’s right of access to their own PHI. There may be situations in which an AMP is not appropriate, such as where an organization is a victim of a cyberattack, despite having reasonable and appropriate safeguards in place.

AMPs can be as high as $50,000 for individuals and $500,000 for organizations. However, the IPC can levy higher penalties where a violator has monetarily benefited from their misuse of PHI, to prevent them from deriving any economic benefit from violating PHIPA. In determining the amount of an AMP, the IPC must consider specific criteria alongside any other relevant factors. These criteria include the degree to which the contravention deviates from the requirements of PHIPA or its regulations, the extent to which the person could have prevented the contravention, the extent of any harm or potential harm to others resulting from the contravention, whether any steps were taken to mitigate or remediate the harm, the number of affected individuals and entities, whether any steps were taken to notify the IPC and affected individuals, the extent to which the person derived economic benefit from the contravention, and whether the person has any past contraventions of PHIPA or its regulations.

The IPC may refer the most severe cases to the Attorney General for prosecution where there is evidence of an offence having been committed. An individual found guilty of committing an offence under PHIPA can be liable for a fine of up to $200,000, up to one-year imprisonment, or both. An organization can be liable for a fine of up to $1,000,000.

Many HICs are charities and not-for-profit organizations. They should take steps to put in place robust privacy policies and practices, including audits and oversight of staff and volunteers, to minimize the risk of exposure to AMPs or fines for contraventions of PHIPA.


Read the January 2024 Charity & NFP Law Update