The New Frontier: Structuring M&A Deals in the Age of Artificial Intelligence

Artificial intelligence has fundamentally changed M&A transactions, yet traditional due diligence frameworks were never designed for probabilistic, evolving AI systems. For Ontario business leaders acquiring AI driven companies, the risks are substantial: IP contamination from third party AI training, data privacy non compliance under PIPEDA, algorithmic bias triggering human rights claims, and regulatory uncertainty following the collapse of Canada’s proposed AIDA legislation. This article provides a practical playbook for executives. It explains why standard valuations fail when dynamic assets like data rights and specialized talent drive value. It details AI specific due diligence across five pillars including model integrity, supply chain dependencies, and bias audits. It addresses data security threats from GenAI powered attacks and compromised codebases. For cross border deals, it maps overlapping obligations under the EU AI Act and emerging US state laws. Finally, it offers structuring tools such as earnouts, tailored warranties, privacy covenants, and escrows, alongside post closing strategies for talent retention and continuous monitoring. AI driven M&A rewards the prepared. This is your roadmap.

Introduction:

Global M&A activity surged by 41% in 2025, yet the most valuable asset on the table today isn’t a factory or a patent. It’s artificial intelligence. Whether you are acquiring a fintech startup in Toronto or a logistics firm that uses AI for route optimization, the deal’s success now hinges on how well you understand the target’s algorithms, data, and model governance. For Ontario business leaders, this shift is especially urgent. Our province is a recognized hub for AI talent and innovation, meaning many of your potential acquisitions are already AI-driven. But here is the challenge: traditional due diligence won’t catch an AI model that has been secretly trained on leaked customer data, nor will it flag a chatbot that inadvertently violates privacy laws. These hidden risks can erode valuation overnight. This article is your practical playbook. We will help you identify critical AI risks, navigate Canada’s evolving regulatory grey zones, and structure deals using tools like earnouts and tailored warranties that protect your investment from day one.

Why Do Traditional M&A Frameworks Fall Short With AI?

The core challenge is that AI is not static. Traditional due diligence was built to assess physical assets, historical contracts, and past compliance records. An AI system, by contrast, is probabilistic. It learns, evolves, and can produce different outputs from the same input over time. A valuation model that works for a software license will miss how an AI model was trained or what data was used. What you are really buying has shifted. The heart of a deal is no longer just code or patents. It is dynamic assets like data rights, model training processes, and the specialized talent that maintains them. 

 

For example, a Toronto fintech’s value may depend on its proprietary customer dataset, not just its algorithm. Confirming ownership is no longer enough. AI systems often rely on licensed data, third party models, and open source components. You might own the AI but lack the right to use the data that makes it work. Ontario courts have begun to confront these issues. 

 

In Arc Compute v. Anton Allen, the Ontario Superior Court addressed allegations of intellectual property theft tied to AI development, highlighting the heightened need for robust AI governance around innovation, especially in competitive commercial environments. The lesson for executives is clear. Recalibrate your valuation models. Do not rely on traditional metrics alone. Dig into data provenance, model dependencies, and talent retention, because those are the true drivers of value in an AI driven transaction.

What Is The Current State Of Ontario’s AI Regulatory Landscape For M&A Deals?

Federal AI legislation remains notably absent in Canada. The proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in 2022, did not pass before the January 2025 prorogation of Parliament. As a result, Canada currently has no binding federal AI law, leaving businesses in a regulatory grey zone. This legislative vacuum creates both opportunities and risks for dealmakers, as the lack of clear rules means uncertainty, but also room to shape emerging best practices.

 

Federal AI legislation is not in force, leaving a regulatory vacuum for dealmakers: The 2025 federal election brought renewed optimism, with the re-elected government signaling potential reintroduction of AIDA in the upcoming parliamentary session. However, until that happens, the absence of a comprehensive national framework means executives must rely on a patchwork of existing laws and guidance documents from regulators like the Office of the Privacy Commissioner of Canada.

 

Existing laws already impose binding obligations on AI systems: Despite the absence of AI specific federal legislation, several binding legal frameworks already apply to AI systems in commercial transactions. The Personal Information Protection and Electronic Documents Act (PIPEDA) governs how private sector organizations collect, use and disclose personal information in the course of commercial activities. For an AI system that processes customer data, such as a Toronto based fintech using machine learning for credit scoring, PIPEDA’s consent and accountability principles apply directly.

 

Human rights legislation also imposes significant constraints. The Canadian Human Rights Act prohibits discrimination, and where an AI hiring tool systematically screens out candidates based on protected grounds, the employer can be held liable regardless of whether the bias was intentional.

 

Ontario has taken its own steps. The Working for Workers Act, 2022 introduced requirements around electronic monitoring policies, which directly affect AI enabled workplace surveillance tools. An Ontario business acquiring a company that uses AI to track employee productivity must ensure those systems comply with provincial disclosure obligations. Human rights and privacy laws already bind AI systems, even without AI specific statutes.

 

Ontario courts are actively shaping AI and privacy enforcement: Courts have begun confronting AI related privacy issues in ways that directly impact M&A due diligence. For an executive acquiring a company that uses third party AI tools, these decisions send a clear warning: your target’s contracts with AI vendors must be scrutinized to ensure they do not permit unauthorized secondary uses of personal information. A failure to do so could result in regulatory findings that carry reputational and financial consequences post closing.

 

The Ontario Court of Appeal has also reinforced the legal risks of data mishandling. In Del Giudice v. Thompson, 2024 ONCA 70 (CanLII), the court considered the tort of intrusion upon seclusion in the context of a massive data breach affecting millions of Canadians. While the claim ultimately failed on the specific facts, the case confirmed that data breaches involving personal financial information can give rise to significant litigation exposure.

 

More directly relevant to healthcare AI, in LifeLabs LP v. Information and Privacy Commr. (Ontario), 2024 ONSC 2194 (CanLII), the Divisional Court addressed a 2019 cyberattack that exposed personal health data of millions of Canadians. The case underscores that provincial privacy commissioners have robust investigative powers, and targets of AI driven data processing face serious scrutiny even in the absence of comprehensive federal AI legislation.

 

The Business Transaction Exemption is your essential compliance tool for due diligence: When conducting M&A due diligence, exchanging personal information is often unavoidable. The Business Transaction Exemption (BTE) under PIPEDA provides a critical pathway to do so without obtaining individual consent from every data subject, but only if parties strictly follow its requirements.

 

Under PIPEDA, the BTE allows parties to use and disclose personal information without consent provided two conditions are met. First, the parties must enter into an agreement to use the information solely for purposes related to the transaction, protect it with measures appropriate to its sensitivity, and destroy or return it if the deal does not proceed. Second, the personal information must be necessary either to proceed with the transaction or, if the decision is made to proceed, to close it.

 

The Business Transaction Exemption allows no-consent data sharing in M&A, but only if strict conditions are met: Post closing, the exemption continues to apply provided the parties have agreed to use the information for the same purposes for which it was originally collected, protect it appropriately, give effect to any withdrawal of consent, and inform affected data subjects within a reasonable time that the transaction has been completed.

 

Practically, the pre closing undertakings should appear in a pre transaction non disclosure agreement, not in the purchase agreement itself. By the time parties have signed a purchase agreement, the personal information has already been used and disclosed during due diligence. For Ontario executives, this means ensuring your NDA includes specific BTE language before any data room access is granted.

 

Ontario’s tech sector is quietly shaping future policy: Ontario’s position as a leading hub for AI innovation and talent means the province punches above its weight in influencing Canada’s approach to AI governance. The federal government’s guidance documents and policy directions are heavily informed by input from Ontario based companies, academic institutions and industry groups. As federal AI legislation is reintroduced and debated, Ontario’s business community has a genuine opportunity to shape the rules that will ultimately govern AI driven M&A transactions. For now, however, the prudent approach is to assume that existing privacy, human rights and contractual laws apply fully to AI systems, and to conduct due diligence accordingly.

How Can You Conduct Effective AI Specific Due Diligence In An M&A Transaction?

AI acts as a risk multiplier, magnifying traditional legal exposure in ways that standard intellectual property, privacy and cybersecurity reviews simply will not catch. A target might have pristine financial statements but an AI model secretly trained on leaked customer data, or an algorithm that systematically discriminates against protected groups. Your due diligence must therefore go far beyond the conventional checklist.

 

IP and model ownership requires examining training data provenance, output ownership and open source compliance: Under Canadian law, the legal status of AI generated outputs remains unsettled. You need to verify that your target has the right to use every piece of data that went into its models, and that it has not inadvertently infringed third party copyrights.

 

Data governance and privacy assessments must evaluate how the target protects personal and confidential information: This includes scrutinizing internal policies on employee use of third party AI tools, which can inadvertently expose trade secrets.

 

Algorithmic integrity and bias reviews are essential, particularly in regulated sectors like FinTech and health: Ontario courts and tribunals are actively addressing AI driven discrimination. In Henderson v. McMaster University, 2025 HRTO 76, the Human Rights Tribunal of Ontario considered allegations of discrimination related to the university’s use of AI enabled examination proctoring software. While the application was dismissed on jurisdictional grounds, the case underscores that AI systems in educational and employment contexts are subject to full scrutiny under the Human Rights Code. If your target uses AI for hiring, performance evaluation or customer facing decisions, you must audit for bias that could trigger human rights liability.

 

Supply chain and third party risk mapping is non-negotiable: Many AI companies rely on a complex web of third party models, APIs and cloud infrastructure. You need to understand every dependency and ensure that licenses permit the intended post closing use.

 

Security and model integrity assessments must go beyond standard cybersecurity: Threats like data poisoning, where attackers corrupt training data, and model inversion, where sensitive training data is extracted from the model, require specialized technical review. The Alberta Court of King’s Bench in Clearview AI Inc v. Alberta (Information and Privacy Commissioner), 2025 ABKB 287, addressed the legality of scraping images from the internet to build a facial recognition database, with the court considering whether the privacy commissioner could order the company to cease collecting and delete images of Albertans. This case demonstrates that aggressive data collection practices can attract regulatory orders with significant operational consequences.

 

Finally, leverage AI specific subject matter experts. No generalist lawyer or accountant can properly assess model architecture or training data provenance. Your diligence team must include technical specialists who understand both the code and the legal framework that governs it.

What Data Security Risks Should You Watch For In AI Driven M&A Deals?

Attackers now use GenAI to automate reconnaissance and launch sophisticated phishing campaigns: Generative AI has fundamentally changed the threat landscape. Attackers can now automate reconnaissance, craft highly convincing phishing emails, and identify vulnerabilities at scale. For a Toronto based target company, this means the risk of a pre closing data breach is higher than ever. Standard cybersecurity questionnaires may not capture whether the target has faced AI powered intrusion attempts.

 

Compromised codebases and embedded threat actors require forensic code reviews: Hidden risks can lurk deep within a target’s codebase, including deliberately embedded backdoors or compromised open source libraries. The Ontario Superior Court case Arc Compute v. Anton Allen highlights the vulnerability of AI development environments to intellectual property theft, underscoring the need for forensic code reviews as part of your due diligence. You cannot rely on the target’s self assessments alone.

 

Trade secrets may have been compromised by training third party AI models: One of the most significant and potentially irreversible risks is that a target’s employees may have fed proprietary data or trade secrets into public third party AI tools like ChatGPT or GitHub Copilot. Once data is used to train a third party model, you lose control over it permanently. The Federal Court in Canada (Information Commissioner) v. Canada (Minister of National Defence), 2024 FC 315, addressed the tension between transparency and proprietary interests in the context of AI systems used by the government, reinforcing that once information enters an AI system, asserting control becomes exceptionally difficult. For your deal, this means you must audit not only the target’s internal data security but also its employee policies regarding the use of external AI tools.

 

Assemble a dedicated cybersecurity tiger team and conduct pre-signing assessments: Your M&A playbook must include a dedicated cybersecurity “tiger team” that begins work before you sign any binding agreement. This team should conduct targeted pre-signing assessments, including forensic analysis of the target’s codebase, review of third party AI usage logs, and stress testing of security controls. The massive data breach examined in LifeLabs LP v. Information and Privacy Commissioner (Ontario), 2024 ONSC 2194, where the personal health information of millions of Ontarians was exposed, demonstrates how a single security failure can lead to years of litigation and regulatory scrutiny. Finally, your obligations do not end at closing. Continuous monitoring post close is essential, as new vulnerabilities and regulatory requirements will emerge as the AI landscape evolves.

What Global Regulatory Hurdles Must You Clear In Cross Border AI M&A Deals?

A complex web of overlapping laws now governs AI systems worldwide: For any cross border acquisition, you cannot rely solely on Canadian rules. The European Union AI Act, which took effect in August 2024, introduces a four tier risk framework. Unacceptable risk practices, including manipulative AI techniques, social scoring and untargeted facial data scraping, are now banned under Article 5 of the EU AI Act as of February 2, 2025. Simultaneously, United States state laws are rapidly emerging. Utah enacted three AI bills effective May 7, 2025, imposing disclosure requirements. Colorado’s AI Act takes effect June 30, 2026, while Connecticut’s amendments to its data privacy law follow on July 1, 2026. For a Canadian buyer acquiring a target with US or European operations, these obligations apply directly to the acquired entity post closing.

 

Antitrust and competition scrutiny of AI deals is intensifying significantly: The Competition Bureau has identified AI as an enforcement priority for the first time in its 2025-2026 annual plan, both in terms of industry oversight and internal capacity building. Algorithmic pricing remains under close watch. In November 2025, the Bureau discontinued its civil investigation into RealPage and Yardi’s algorithmic pricing software, finding the products were not yet sufficiently widespread to substantially harm competition. However, the Bureau simultaneously issued guidance reinforcing that algorithmic pricing tools can pose significant risks to competition. More fundamentally, Bill C-59’s amendments to the Competition Act came into force on June 20, 2025, lowering the threshold for private parties to bring direct actions before the Competition Tribunal. This means competitors, customers and even consumers can now challenge allegedly anti competitive conduct, including data concentration and algorithmic coordination, without waiting for the Bureau to act.

 

Structuring your deal requires mapping data flows and conducting gap analyses: Ontario courts have made clear that data mishandling carries serious consequences. In Quantz v. Ontario, 2025 ONSC 90, a proposed class action arose from an alleged data breach by the Ministry of Children, Community and Social Services involving 45,000 ODSP clients’ personal information. The court addressed the tort of intrusion upon seclusion in the data breach context, confirming that privacy breaches affecting vulnerable populations attract heightened scrutiny. For your cross border deal, this means mapping every cross border data flow, identifying which personal information will transfer across jurisdictions, and conducting a jurisdiction by jurisdiction gap analysis. Consider using separate legal entities for different geographic markets to ring fence compliance obligations, and ensure your purchase agreement includes specific representations covering EU AI Act risk categorization, US state law disclosure obligations and Canadian privacy requirements under PIPEDA.

How Should You Structure M&A Deals To Mitigate AI And Data Risks?

Earnouts have nearly doubled in Canadian private M&A since 2016 to bridge valuation gaps in AI deals: The uncertain and rapidly evolving nature of AI technology means that what appears valuable during due diligence may not deliver the same results post closing. Earnouts allow you to tie a portion of the purchase price to specific future milestones, such as revenue targets, deployment benchmarks or compute efficiency goals. For example, a Toronto based acquirer could structure an earnout requiring the target’s AI model to achieve certain accuracy rates or customer adoption levels before additional consideration is paid. The Ontario Superior Court’s recent decision in Project Freeway Inc. v. ABC Technologies Inc., 2025 ONSC 1048, confirmed that earnout acceleration clauses must be carefully drafted to balance the seller’s right to potential payments against the buyer’s operational freedom to run the target business. This case underscores that ambiguous earnout drafting can create significant post closing disputes.
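Earnout mechanics are easiest to see with numbers. The sketch below is illustrative only: the accuracy threshold, revenue floor and cap, and tranche amounts are hypothetical assumptions, not terms drawn from any deal or case discussed here.

```python
# Hypothetical milestone-based earnout: one tranche tied to audited model
# accuracy, one scaled linearly between a revenue floor and cap.

def earnout_payout(accuracy: float, revenue: float) -> float:
    """Return the additional consideration owed for one measurement period."""
    payout = 0.0

    # Tranche 1: fixed payment if audited model accuracy meets the milestone.
    if accuracy >= 0.92:
        payout += 2_000_000

    # Tranche 2: scales linearly from $0 at the revenue floor to the full
    # tranche amount at (or above) the cap.
    floor, cap, tranche = 10_000_000, 15_000_000, 3_000_000
    if revenue >= floor:
        fraction = min(revenue - floor, cap - floor) / (cap - floor)
        payout += tranche * fraction

    return payout

# 94% accuracy and $13M revenue: $2M + $3M * 0.6 = $3.8M
print(earnout_payout(0.94, 13_000_000))
```

How each input is defined, which audit methodology fixes "accuracy", which revenue counts, and what happens if the buyer's post closing conduct accelerates or frustrates a milestone, is precisely where disputes of the kind seen in Project Freeway arise, so every variable in a formula like this should trace back to a defined term in the purchase agreement.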

 

Tailored representations and warranties must cover data provenance, model performance, algorithmic bias and regulatory compliance: Standard representations are insufficient for AI deals. You need specific warranties that the target owns all data used to train its models, has obtained all required consents, and has not used generative AI tools in a manner that compromises intellectual property rights. Additionally, representations should address whether the target has complied with emerging AI regulations, including the EU AI Act’s risk classifications and applicable Canadian privacy laws under PIPEDA. Warranty periods for AI specific risks should be longer than traditional deals, often three to five years, to account for the delayed discovery of model vulnerabilities or compliance failures.

 

Privacy covenants formalize BTE undertakings while escrow arrangements protect against high risk data assets: The Business Transaction Exemption under PIPEDA requires parties to enter into binding agreements governing the use and protection of personal information during due diligence. A properly drafted privacy covenant should be placed in the pre transaction non disclosure agreement, not the purchase agreement itself, because the personal information has already been exchanged by the time parties sign the definitive agreement. This covenant must include pre closing undertakings to use information solely for transaction purposes, protect it with appropriate security measures, and destroy or return it if the deal does not proceed. Post closing, the covenant must address continued protection, data subject withdrawal rights, and notification obligations. For high risk data assets, such as large scale customer databases or sensitive health information, escrow arrangements provide an additional layer of protection. A portion of the purchase price can be held back to cover potential privacy breach claims or regulatory penalties that may arise post closing.

 

Securing data rights post closing requires more than confirming ownership, as control over AI assets depends on ongoing access and adaptation rights: Ownership of an AI model without the right to retrain it using fresh data, or without access to the underlying training datasets, leaves you with a static asset that will rapidly depreciate. The Ontario Superior Court’s decision in Arc Compute v. Anton Allen highlighted how AI development environments remain vulnerable to intellectual property theft, reinforcing the need for contractual protections that extend beyond mere title transfer. You must ensure that all data licenses are assignable post-closing, that third party API dependencies are transferable, and that the target’s key data scientists and engineers are retained through employment agreements with robust non-competition and non-solicitation clauses. Without securing these rights, the AI capability you paid for may quickly lose its competitive edge in Ontario’s fast moving technology market.

What Are The Most Common Areas Of Risk When Acquiring An AI Company?

IP contamination can permanently destroy trade secrets through third party AI training: When employees feed proprietary code or confidential data into public generative AI tools like ChatGPT or GitHub Copilot, that information may become part of the model’s training set. Once exposed, your trade secrets are effectively public. The Federal Court in Canada (Information Commissioner) v. Canada (Minister of National Defence), 2024 FC 315, reinforced that once information enters an AI system, asserting control becomes exceptionally difficult. To mitigate, audit employee AI usage policies and require contractual prohibitions on feeding sensitive data into external models.

 

Data privacy non compliance with PIPEDA or GDPR creates significant liability: Many AI companies collect personal information without proper consent or fail to meet transparency obligations. McMaster University (Re), 2024 CanLII 17583 (ON IPC), showed how AI proctoring software violated privacy rules by using student data for system improvement without consent. Mitigation includes specific privacy representations, escrow holdbacks, and pre closing privacy audits.

 

Model hallucination and liability expose you to legal claims from inaccurate or harmful outputs: An AI model that generates false medical advice or discriminatory lending decisions can trigger negligence or human rights complaints. The Henderson v. McMaster University, 2025 HRTO 76 case illustrates the scrutiny applied to AI systems in educational contexts. Use tailored warranties and maintain human oversight protocols.

 

Regulatory ambiguity from the lack of a finalized federal AI law (AIDA) creates uncertainty: Without clear rules, you cannot predict future compliance obligations. Structure deals with earnouts and flexible compliance covenants that can adapt to new regulations.

How Can You Secure AI Value After The Deal Closes?

Post closing risks can emerge when AI is deployed in new contexts not examined during due diligence: An AI system that performs safely in the target’s limited environment may behave unpredictably when integrated into your larger customer base or combined with your existing datasets. For example, a Toronto based acquirer that rolls out an acquired AI hiring tool across its national operations may discover only after closing that the model systematically screens out candidates from certain postal codes, triggering human rights complaints. The Ontario Superior Court’s guidance in Project Freeway Inc. v. ABC Technologies Inc., 2025 ONSC 1048, regarding earnout acceleration clauses reminds executives that integration phase disputes often turn on ambiguous contractual terms. To manage this risk, conduct a post closing validation period where the AI system operates in a controlled sandbox before full deployment.

 

Preserving AI value requires active strategies for retaining talent and integrating data governance: The data scientists and engineers who built and maintain the AI model are often more valuable than the code itself. Without them, you cannot retrain, update or troubleshoot the system. Secure these individuals with employment agreements containing robust non competition and non solicitation clauses, ideally signed before closing. Additionally, integrate disparate data governance frameworks. Your target may have used different consent models, retention schedules or security protocols than your own organization. Harmonize these frameworks within the first 90 days post closing to avoid compliance gaps. Ensure that data licenses, API access rights and cloud infrastructure contracts are formally assigned to you and that all necessary consents for continued data processing are obtained.

 

Continuous monitoring through ongoing AI system auditing and governance reviews must become standard practice: AI systems degrade over time as data distributions shift and regulatory requirements evolve. Implement quarterly audits of model outputs for bias, accuracy and security vulnerabilities. The LifeLabs LP v. Information and Privacy Commissioner (Ontario), 2024 ONSC 2194 case, arising from a massive data breach affecting millions of Ontarians, demonstrates that post closing monitoring failures can lead to years of litigation and regulatory penalties. Establish an internal AI governance committee with representation from legal, compliance, IT and business units. This committee should review new use cases, approve changes to training data, and track emerging regulations like the EU AI Act or future Canadian federal legislation. By embedding continuous monitoring into your post merger integration plan, you protect the value of your AI investment and reduce exposure to avoidable legal and reputational harm.
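One concrete screen such a quarterly audit can include is the "four-fifths" (80%) rule familiar from adverse impact analysis in employment contexts. The sketch below applies it to hypothetical model outputs; the data, group labels, and 0.8 threshold are illustrative assumptions, and a production audit would apply statistically rigorous tests to real output logs.

```python
# Hypothetical adverse impact screen: compare favourable-outcome rates
# between two groups and flag ratios below the four-fifths benchmark.

def selection_rate(outcomes: list[int]) -> float:
    """Share of favourable outcomes (1 = favourable, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Lower group selection rate divided by the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Illustrative outputs from an AI screening tool for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 70% favourable
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% favourable

ratio = adverse_impact_ratio(group_a, group_b)
print(f"ratio = {ratio:.2f}, flag for review = {ratio < 0.8}")
```

A flagged ratio is not a legal finding of discrimination; it is a trigger for the governance committee to investigate the model, its training data, and any business justification before the next deployment cycle.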

Conclusion:

Artificial intelligence has fundamentally changed the M&A landscape. For Ontario business leaders, this shift is not coming. It is already here. Traditional due diligence frameworks will not catch a biased hiring algorithm, a model trained on leaked customer data, or trade secrets fed into a public AI chatbot. Each of these risks can erode valuation and trigger regulatory scrutiny long after closing.

 

The path forward is clear. Conduct AI specific due diligence that examines data provenance, model integrity, and third party dependencies. Structure your deals using earnouts, tailored warranties, and escrows to bridge valuation gaps and protect against hidden liabilities. Secure not just ownership but ongoing control over data rights and key talent. And commit to continuous post closing monitoring as regulatory frameworks like the EU AI Act and future Canadian legislation take shape.

 

AI driven M&A rewards the prepared. By adopting this playbook, you can navigate the grey zones, mitigate the risks, and capture the full value of your next transaction in Ontario’s dynamic technology market.

FAQs:

What is AI due diligence and how is it different from standard M&A due diligence?

AI due diligence goes beyond traditional financial and legal checks to examine data provenance, model integrity, algorithmic bias, and third party AI dependencies. Standard diligence won’t catch a model secretly trained on leaked customer data or an algorithm that discriminates against protected groups. You need technical specialists who understand both code and the legal framework.

How can I value an AI company when future performance is uncertain?

Traditional valuation models often fail with AI companies because their value lies in dynamic assets like data rights and specialized talent. Earnouts are increasingly used to bridge valuation gaps. Common structures tie additional consideration to defined performance benchmarks such as revenue thresholds, deployment milestones, or compute efficiency goals. This ensures the ultimate price aligns with actual post closing performance.

What data privacy laws apply to AI companies in cross border M&A deals?

Canadian buyers face a patchwork of global requirements. In Canada, PIPEDA governs personal information handling. For deals involving European operations, the EU AI Act’s four tier risk framework applies, with unacceptable risk practices banned since February 2025. Several US states, including Utah, Colorado and Connecticut, have also enacted AI specific laws. Your due diligence must map every cross border data flow and conduct jurisdiction by jurisdiction gap analyses.

Can antitrust regulators block an AI acquisition?

Yes, and scrutiny is intensifying. The Competition Bureau has identified AI as an enforcement priority for the first time in its 2025-2026 annual plan. Algorithmic pricing and data concentration are under close watch. Additionally, recent amendments to the Competition Act allow private parties to bring direct actions before the Tribunal without waiting for the Bureau to act, increasing litigation risk for AI deals.

What specific representations and warranties should I include in an AI acquisition agreement?

Standard reps are insufficient. Your agreement should include warranties covering data provenance, model performance, algorithmic bias, regulatory compliance, and the target’s use of third party AI tools. Also require confirmation that no trade secrets were fed into public AI models like ChatGPT. Warranty periods for AI specific risks should be longer than traditional deals, typically three to five years, to account for delayed discovery of model vulnerabilities.

How do I preserve AI value after the deal closes?

Post closing risks emerge when AI systems are deployed in new contexts not examined during due diligence. Preserve value by securing key talent through employment agreements with robust non competition clauses, integrating disparate data governance frameworks within the first 90 days, and implementing continuous monitoring through quarterly audits of model outputs for bias, accuracy and security vulnerabilities.

References:

Arc Compute v. Anton Allen, Michael Buchel et al., 2025 ONSC 1745 (CanLII), <https://canlii.ca/t/kb4jl>, retrieved on 2026-04-22.

Del Giudice v. Thompson, 2024 ONCA 70 (CanLII), <https://canlii.ca/t/k2kcr>, retrieved on 2026-04-22.

LifeLabs LP v. Information and Privacy Commr. (Ontario), 2022 ONSC 5751 (CanLII), <https://canlii.ca/t/js9xm>, retrieved on 2026-04-22.

Henderson v. McMaster University, 2025 HRTO 76 (CanLII), <https://canlii.ca/t/k8xgl>, retrieved on 2026-04-22.

Clearview AI Inc. v. Alberta (Information and Privacy Commissioner), 2025 ABKB 287 (CanLII), <https://canlii.ca/t/kc1r5>, retrieved on 2026-04-22.

Canada (Information Commissioner) v. Canada (Minister of National Defence), 2011 SCC 25 (CanLII), [2011] 2 SCR 306, <https://canlii.ca/t/fld60>, retrieved on 2026-04-22.

LifeLabs LP v. Information and Privacy Commissioner of Ontario, 2023 ONSC 585 (CanLII), <https://canlii.ca/t/jv1zn>, retrieved on 2026-04-22.

Quantz v. Ontario, 2025 ONSC 90 (CanLII), <https://canlii.ca/t/k8mtn>, retrieved on 2026-04-22.

Project Freeway Inc. v. ABC Technologies Inc., 2025 ONSC 1048 (CanLII), <https://canlii.ca/t/k9rkg>, retrieved on 2026-04-22.

Daley v. McMaster University, 2024 HRTO 1419 (CanLII), <https://canlii.ca/t/k7g2n>, retrieved on 2026-04-22.
