
Businesses that previously had few obligations under Canadian privacy law may be significantly affected by new federal legislation.

Bill C-27, introduced on June 16, 2022, seeks to modernize Canada’s now 22-year-old Personal Information Protection and Electronic Documents Act (PIPEDA) with the Consumer Privacy Protection Act (CPPA). If passed, the CPPA would strengthen the rights of data subjects and significantly alter the obligations of organizations that collect, use, or disclose personal information. It would also create a new tribunal to handle privacy matters.

Although much of Bill C-27 is copied from the now-defunct Bill C-11 (as discussed in a previous blog post), it contains several minor tweaks and one major addition. The tweaks include expanding the penalty provisions, broadening the powers of the new tribunal, and introducing children’s privacy rights. The major addition is the creation of an entirely new Act regulating artificial intelligence: the Artificial Intelligence and Data Act (AIDA). This article discusses some of the implications that AIDA could have on your business.

Regulated activities under AIDA

If passed, AIDA will regulate the design, development, and use of AI systems. This captures a very broad range of conduct not previously regulated by PIPEDA. For example, a software developer or designer is presently subject to PIPEDA only to the extent that it collects, uses, or discloses personal information. Under AIDA, that same developer may find itself subject to regulation even if it never touches a piece of personal information.

Curiously, the Act mentions personal information only once, and only in relation to the Minister’s ability to disclose it. It would therefore be a stretch to call AIDA privacy law in the traditional sense. Instead, AIDA would be AI law, plain and simple. This is evident in the stated purposes of the Act:

  1. to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and
  2. to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

Obligations under AIDA

Entities subject to the new law will be required to take measures to mitigate the risk of harm that may arise in the creation and implementation of AI systems. They will also be required to monitor these measures for compliance and efficacy. “Harm” is broadly defined to include:

  • psychological harm;
  • physical harm;
  • damage to property; and
  • economic loss.

Additionally, entities that make available for use a “high-impact” system—the definition of which will be left to regulation—will be required to publish information relating to:

  • how the AI system is intended to be used;
  • the types of content that the AI system is intended to generate; and
  • the mitigation measures established to reduce the risk of harm.

These publication obligations will be ongoing and will require businesses to publish any additional information that may be prescribed by regulation in the future.

Biased output

In addition to targeting harm generally, AIDA would specifically prohibit the implementation or use of any AI system that produces “biased output”. Biased output is defined as any content that adversely differentiates, directly or indirectly and without justification, on any of the prohibited grounds of discrimination. This prohibition will require organizations to take extraordinary care in the design and implementation of AI systems.

Consider, for example, a simple advertising algorithm that uses AI to respond to the preferences and behaviours of individuals. To the extent that purchasing behaviour varies by group, it would be beneficial for advertisers—and arguably for consumers—to be able to differentiate between those groups through more targeted, more relevant ads. While certain advertising restrictions already pose a barrier to advertisers who want to target specific demographics, the application to AI systems is far more complex because, with an AI system, it is not the advertiser determining the output; it is the algorithm. The very nature of the AI is to respond to input from users, and the output is generated accordingly—that is the “intelligence” part. If the input varies, the output will too. This leads to an ironic situation: organizations will not only have to predict the output of their AI systems, they will have to bias the AI so that it does not produce “biased output”. In other words, to ensure non-biased output, you must bias the output.
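To make the paradox concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the two groups, their click rates, the threshold “AI”, and the hand-tuned adjustment); it implements no real advertising platform and no legal test under AIDA. It shows how a system that merely responds to observed behaviour can produce group-differential output without ever receiving group membership as an input, and how equalizing that output forces the system to condition on group.

  import random

  random.seed(42)

  # Invented behaviour: the two groups genuinely click at different rates.
  TRUE_CLICK_RATE = {"A": 0.30, "B": 0.10}

  # Each user's observed behaviour: clicks on 20 past ad impressions.
  users = [
      {"group": g, "clicks": sum(random.random() < TRUE_CLICK_RATE[g] for _ in range(20))}
      for g in random.choices(["A", "B"], k=10_000)
  ]

  # The "AI": show the ad to anyone whose behaviour suggests interest.
  # Group membership is never an input to this decision.
  def targeted(user, threshold=4):
      return user["clicks"] >= threshold

  def serve_rate(group):
      members = [u for u in users if u["group"] == group]
      return sum(targeted(u) for u in members) / len(members)

  # Output tracks behaviour, so it differs sharply by group
  # (roughly 0.9 for A versus 0.13 for B on this toy data).
  print(serve_rate("A"), serve_rate("B"))

  # Equalizing the output means conditioning the decision on group --
  # that is, biasing the algorithm so it does not produce biased output.
  PER_GROUP_THRESHOLD = {"A": 6, "B": 2}  # hand-tuned for illustration only

  def adjusted(user):
      return user["clicks"] >= PER_GROUP_THRESHOLD[user["group"]]

On this toy data the adjusted rule roughly equalizes serving rates across the two groups, but only by doing the very thing the first rule avoided: treating users differently according to group.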

Under this output-based regime, it will not matter what the software designer intended. Nor will it matter if the “biased” output is the consequence of genuine consumer preferences and behaviour, as it is a well-established principle in Canadian law that statistical disparities in outcome can be used to infer discrimination. Notably, however, this will only work in one direction. Biased output will not include conduct “the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds”. In simple terms, otherwise discriminatory conduct will be permissible if it is done for the purported benefit of a protected group.

Penalties & offences

If an organization knows, or is reckless as to whether, its AI system is likely to cause serious harm, and the system does cause that harm, the organization could face substantial fines. Maximum fines would be the greater of $20,000,000 and 4% of gross global revenue for summary offences, and the greater of $25,000,000 and 5% of gross global revenue for indictable offences. Individuals could face up to five years’ imprisonment for indictable offences and up to two years on summary conviction.
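As a simple arithmetic sketch of that “greater of” structure (the dollar figures and percentages are from the Bill; the helper function and the revenue figure are invented for the example):

  def max_fine(gross_global_revenue: float, indictable: bool) -> float:
      # AIDA caps fines at the greater of a fixed amount and a share of
      # gross global revenue.
      fixed, share = (25_000_000, 0.05) if indictable else (20_000_000, 0.04)
      return max(fixed, share * gross_global_revenue)

  # A hypothetical firm with $1 billion in gross global revenue:
  print(max_fine(1_000_000_000, indictable=True))  # 50000000.0 -- the 5% share governs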

Takeaways for businesses

Be prepared. The previously stable Canadian privacy landscape is changing. Businesses that have grown accustomed to the regulatory regime of the past two decades may find themselves scrambling to comply with the new requirements.

Know your obligations. Although these changes would have a significant impact on the legal obligations of private sector organizations, a recent survey published by the Office of the Privacy Commissioner suggests that businesses’ familiarity with privacy-related issues has decreased in many regards since 2019.

Seek legal advice. Not only are regulatory obligations becoming more onerous, so too are the penalties for non-compliance. At the same time, the technologies that the new legislation aims to regulate are becoming more complex and sophisticated. This creates an environment in which mistakes are easy to make and the consequences of making them are high.
