E-BOOK
Legal technology and AI in Canada
AI systems are transforming the nature of legal work in Canada. Legal professionals stand to gain from this technology on several fronts, whether in terms of improved workflow efficiency or devoting more work hours to value-added client services. But they also need to be aware of the many ethical obligations and regulations concerning the use of AI in the Canadian legal environment.
Lawyers must navigate various AI applications and know the ethical and professional implications. They need to be familiar with the complexities of using AI technology while staying compliant with regulations, from the provincial to the federal level. Of course, they should aim to reap the substantial benefits AI technology offers.
One option that’s no longer on the table is ignoring AI or putting it off. This technology has already fundamentally changed how lawyers draft documents and contracts, conduct research, and communicate with clients. Furthermore, clients are beginning to expect their lawyers to use it.
AI in the Canadian legal profession
Within three years, AI has gone from being regarded as an intriguing concept to becoming an essential tool for many Canadian legal professionals.
In a recent Canadian Legal Market survey, 89% of respondents said their firm had either begun piloting AI for research and document review tasks or had fully integrated AI tools across various practice areas. No respondents said their firm isn’t pursuing AI in some form; 8% said their firms were still in the exploratory stage.
At the same time, 74% of respondents said they were “slightly concerned” about AI-related risks, while 14% said they were “very concerned.” When asked about the greatest challenges related to AI, about 42% of respondents named AI integration and regulation as the most significant issues facing the Canadian legal profession at present.
The potential of AI in the legal space
What’s driving this growth? For one thing, AI has a demonstrable ability to spur efficiency and quality improvements at a legal firm. AI technology bolsters legal operations in many ways, including:
Summarizing. AI tools convert piles of documents into concise summaries, enabling lawyers to quickly grasp and assimilate vital information they need for a case.
Drafting. AI offers the means to quickly assemble a first draft, providing an array of templates and step-by-step guidance on how to phrase and structure contracts, agreements, summaries, and other essential documents.
Research. Natural language processing makes legal research less time-consuming while also making it more nuanced. AI systems can identify patterns that might escape a manual review, or find a key piece of information buried in a single footnote.
Predictive analytics. By running multiple analyses of historical case data, AI systems can project possible outcomes for lawyers, such as the statistical likelihood of success should they advance a particular argument in court.
Compliance and risk management. AI systems can help a lawyer monitor regulatory changes in real time, enabling them to remain proactively compliant.
Ethical considerations in AI for legal professionals: The importance of professional-grade AI
At the same time, using AI, particularly a consumer-grade model like ChatGPT, could expose a legal professional to a range of potential ethical violations.
Canadian courts are responding forcefully to AI-generated errors in legal documents, for example. A notable case is Zhang v. Chen, a 2024 Supreme Court of British Columbia decision, in which a lawyer filed a notice of application that included case law ChatGPT had “hallucinated.” When the fabrications came to light, the lawyer acknowledged the errors and said they hadn’t known ChatGPT could fabricate information.
In his decision, Justice David Masuhara sanctioned the lawyer, stating that lawyers must always inform the court and opposing counsel whenever they file documents containing AI-generated content. “Generative AI is still no substitute for the professional expertise that the justice system requires of lawyers,” Masuhara wrote. “Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.”
Professionalism is key
Legal firms need to know that AI systems are far from equal. Relying on a consumer tool like ChatGPT for sensitive legal work is a gamble: unless you invest in a professional-grade legal AI solution, you run the risk of violating professional ethics or landing in trouble with the courts.
When a lawyer needs to employ the correct case citation, precisely define terms in a contract, or confidently rebut an opposing counsel’s claim, they require a platform built with the demands of legal professionals in mind, backed by secure, up-to-date, and fully verified data rather than data scraped from the internet.
Questions that a legal firm should ask before its professionals employ an AI platform include:
How do you ensure the accuracy and reliability of AI-generated content? As seen in Zhang v. Chen, including AI-fabricated citations in a filing spells disaster for a lawyer’s case and opens them up to fines. Your AI provider should tell you what proprietary legal database the system draws on, how often that database is updated and verified, and what measures the provider takes to guard against “hallucinated” outputs.
How does the system check for potential biases in AI outputs, and what mitigation strategies are available? An AI system that produces content inadvertently shaped by absorbing cultural biases and prejudices could run your firm afoul of legal ethical obligations and regulations.
How does the system stay compliant with ethical rules concerning client confidentiality, such as data protection requirements? Client confidentiality can be endangered when using a public large language model (LLM). A judge may consider it roughly equivalent to a lawyer discussing vital client information on a phone call in a public place.
Are there resources and infrastructure to implement and manage AI solutions effectively? What are the system’s training methods and liability considerations, particularly with respect to maintaining Canadian legal standards? What type of training does the AI provider offer? How much time and support will be required to make integration run smoothly and efficiently?
Canadian regulatory and ethical guidelines
Over the past two years, many Canadian legal institutions, from the provincial and federal courts to various legal advisory boards, have published statements concerning the use of AI in legal work.
The Canadian Bar Association's position on AI
The Canadian Bar Association (CBA) has drafted guidelines and mandates concerning AI use in legal work. These include:
Competence. The duty of competent representation, set out in Rule 3.1 of the CBA’s Model Code, requires lawyers to be cognizant of the risks associated with “innovative technologies.” This includes verifying all AI-created content for accuracy and relevance.
Confidentiality. When a lawyer uses AI to create new content, they may expose confidential information, particularly when using an open system such as ChatGPT. If an AI system retains input data and repurposes it without permission — such as using it for training purposes — this breaches a lawyer’s duty of confidentiality to their client.
Supervision or delegation. As per the CBA, AI should be used for specific and limited tasks, not relied upon for complex reasoning or for offering legal advice. This guidance flows from Rule 6.1-3 of the Model Code, which specifically states that a lawyer must not permit a “non-lawyer” to give legal advice. As the CBA puts it: “Lawyers should guard against misplaced over-reliance on generative AI tools as they may compromise or even prevent independent legal judgment.”
Communication. The CBA recommends that lawyers inform clients if and when they use GenAI for tasks such as research, analysis, document review, or trial preparation.
On this point, many Canadian legal firms currently stand at odds with CBA recommendations. As per a recent survey of Canadian legal professionals, most firms don’t ask clients for permission regarding their AI usage. Only 27% of large firms, 29% of midsize firms, and 18% of small firms surveyed said they get client consent for their AI usage. Among all firms surveyed, only 11% said they actively communicate their use of AI, 17% tell clients only if they inquire, and 7% said they never disclose AI usage.
Effects on fees and disbursements. Not only should lawyers inform clients when they use AI, the CBA further recommends that legal professionals detail how this usage could impact the processing of legal matters, such as reducing hours spent on contract review or document drafting. “Should the use of generative AI reduce a lawyer’s time or costs, those savings should be accurately reflected in any billings to the client,” the CBA says.
Potential for discrimination and bias. Lawyers are prohibited from discriminating against clients or other people. Yet “discriminatory input data is one of the main sources of discrimination by AI systems,” the CBA notes. So, lawyers should be aware that their AI system could harbour biases and should regularly run audits to detect them, or risk violating professional anti-discrimination standards.
Provincial law societies weigh in
Beyond the CBA, many Canadian provincial legal societies have published their own AI guidelines.
For example, the Manitoba Law Society warns lawyers that they “must apply their independent and trained judgment when acting for clients. Professional judgment cannot be delegated to generative AI and remains your responsibility at all times.”
Maintaining the duty of confidentiality is a common concern. The Law Society of Ontario’s Rules of Professional Conduct require lawyers to hold all client information in “strict confidence.” And as per the Law Society of Alberta’s AI Playbook, “the risk of inadvertent disclosure of confidential client or proprietary information cannot be overstated.”
Again, using a professional-grade legal AI system is essential to comply with such rules. Because public AI platforms aren’t designed to meet legal confidentiality standards, they could expose confidential queries to third parties.
The British Columbia Law Society’s guidance recommends that legal firms get client consent before using AI. “Ideally, client confidential information, including any information identifying the client, would be omitted from anything…supplied to the generative AI tool to maintain client confidentiality. If redacting the data is not possible, then you could explore whether client consent to use the tool with such information is viable. Any consent obtained from the client must be fully informed and voluntary consent after disclosure in writing or orally with a written record of the communication.”
The courts take action
Another recent case involving a provincial court and AI usage is Ko v. Li, an Ontario Superior Court of Justice decision. Here, counsel referred in oral submissions to two cases in support of her client’s argument. However, these cases could not be located in the Canadian public database CanLII, nor in Westlaw or Google; they appeared to have been fabricated by the counsel’s use of AI.
The Court, in its ruling, said that a “litigation lawyer’s most fundamental duty is not to mislead the court” and that “it should go without saying that it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities. At its barest minimum, it is the lawyer’s duty not to submit case authorities that do not exist or that stand for the opposite of the lawyer’s submission.”
The threat of similar incidents is pushing other provincial courts to make statements on GenAI use. For example, the Chief Justices of the Alberta Court of Justice, Court of King's Bench of Alberta, and Court of Appeal of Alberta jointly issued a notice, “Notice to the Public and Legal Profession Ensuring the Integrity of Court Submissions when Using Large Language Models,” which urges litigants to exercise caution when referencing cases or analysis derived from LLMs. The Notice requires litigants to rely “exclusively on authoritative sources when referring to case law, statutes, or commentary in making representations to the courts.” It also requires a “human in the loop”: any AI-generated submissions must be verified by a legal professional.
The Superior Court of Quebec issued a notice similar to that of Alberta, as did The Provincial Court of Nova Scotia, whose guidance stresses that any party using AI-generated material should make that clear to all concerned parties, and that the Court expects all written and oral submissions referencing case law, statutes, or legal commentary to be limited to accredited and established legal databases.
The Federal Court of Canada likewise requires litigants to disclose in writing if they’ve used AI to generate content in court filings, with the disclosure required to appear in the opening paragraph of any relevant document. Further, in a notice titled “Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence,” the Court said its own officers and judges would not use AI when making judgments and orders without first engaging “in public consultation.”
Expected efficiency gains and cost savings
One reason why it’s so vital for Canadian legal firms to keep in compliance with these rulings and mandates is the vast potential that AI offers in terms of time, efficiency, and financial savings.
Thomson Reuters’ recent Future of Professionals Report 2025 found that more than half (53%) of professionals surveyed said their organizations have seen ROI directly or indirectly tied to their AI investment. This ROI shows up in greater productivity and efficiency, lower error rates, and faster response times to client queries.
Further, survey respondents predicted that AI will save them five hours weekly, or about 240 hours over the next year. That’s up from 200 hours in the 2024 Future of Professionals survey, and it means AI use will create an average annual value of $26,200 per Canadian legal professional.
Integration of AI into legal workflows
Once a law firm commits to using AI, it needs to ensure that the technology actually gets used by its lawyers. After all, it’s far from ideal if AI winds up being an ignored icon on their desktop.
One way to encourage usage? Having a smooth integration of AI functions into the law firm’s existing technology. You don’t want lawyers to regard AI as a standalone system that becomes a confusing addition to their workload. The more that AI prompts are nested within a legal professional’s sidebar, and become as intuitive to use as spellcheck, the more that AI will be part of a lawyer’s daily work routine.
That’s why it’s important for managing partners and other higher-ups at a law firm to be involved in integration, “to attend the initial meeting, to hear the conversations about strategic goals, and to learn which use cases and workflows will get their firm those quick wins,” says Thomson Reuters AI customer success manager Alkmini O’Brien. “They’ll get an understanding of what onboarding will look like and walk away with an agreed-upon joint action plan.”
As many Canadian law firms have legacy tech infrastructures built piece by piece, sometimes over more than a decade, each firm’s IT staff needs to be fully involved in selecting and experimenting with an AI system. Their input will help ensure the AI system works and can be seamlessly integrated with the firm’s existing operations.
Start your professional-grade AI journey
AI is reshaping the Canadian legal landscape. As this technology grows more sophisticated and intuitive, with the promise of agentic AI now on the horizon, the very nature of the relationships between lawyer and client, managing partners and junior lawyers, and competing law firms all stand to change.
Canadian law firms need to stay ahead of the curve. The more proactive that a law firm is about employing a professional-grade legal AI system, the better positioned it is to handle the challenges of the rest of the decade and beyond. Explore how Thomson Reuters professional‑grade legal AI solutions are helping Canadian law firms prepare for what’s next.
Professional-grade legal AI
Thomson Reuters Legal Solutions Canada
Get trusted answers faster, increase productivity, and gain a competitive advantage with technology, content, and expert guidance