AI is already part of legal work.
Lawyers are using tools like ChatGPT, Gemini, Copilot, and legal-specific AI platforms to draft first versions of documents, summarize records, organize facts, prepare client communications, review discovery, and improve internal workflows.
That does not mean AI is replacing lawyers.
It means lawyers now need to understand how to use these tools safely.
Using AI is not the risk. The risk is using AI output without checking it. Courts have already sanctioned attorneys for filing documents that included fabricated cases, false quotations, and legal citations nobody bothered to verify. Bar associations are making the same point: AI can assist with legal work, but it does not erase a lawyer's professional duties. (Reuters)
For law firms, the question is no longer whether AI will be used. It already is.
What matters now is whether your firm has clear rules before someone makes an expensive mistake.
The Real Problem: AI Can Sound Correct Even When It Is Wrong
Generative AI tools are very good at producing polished, confident writing.
Useful for first drafts, summaries, outlines, checklists, and internal brainstorming. Dangerous when the tool is asked to provide case law, quote legal authority, or support a legal argument without human review.
AI tools can produce "hallucinations." In legal work, that can mean:
- Fake case names
- Incorrect citations
- False quotations
- Misstated holdings
- Outdated law
- Unsupported legal arguments
- Confident summaries that miss key facts
And the output looks professional even when it is dead wrong.
Lawyers cannot rely on AI-generated legal research without independently verifying it. In Noland v. Land of the Free, L.P., the California Court of Appeal sanctioned counsel $10,000 after appellate briefs included fabricated AI-generated citations. (Justia)
Simple rule: if a lawyer signs it, files it, sends it, or relies on it, the lawyer owns it.
Courts Are Already Sanctioning Lawyers for AI Mistakes
Recent cases show how quickly AI misuse can turn into a professional responsibility issue.
In Illinois, a Cook County judge sanctioned a lawyer and law firm nearly $60,000 after a court filing included a fake case generated by ChatGPT. The sanction was split: $49,500 against the firm and $10,000 against the attorney. (Chicago Sun-Times)
In Kansas, a federal judge fined five lawyers a total of $12,000 after filings in a patent case included AI-generated false quotations and citations. The court emphasized that the attorneys who signed the filings had a responsibility to verify them, even though only one lawyer used ChatGPT to prepare the material. (Reuters)
In Texas, attorney Lawrence Chang was ordered to pay $2,000 in attorneys' fees and expenses and write a letter explaining where his AI use went wrong and how he would prevent the issue in the future. (Bloomberg Law)
In California, the appellate court in Noland imposed a $10,000 sanction for fabricated AI-generated citations. (Justia)
And this problem is still growing. Public tracking efforts now list many legal matters involving alleged or confirmed AI-generated hallucinations in court filings. Damien Charlotin's AI Hallucination Cases Database tracks matters where courts or tribunals addressed AI hallucinations in more than a passing way. (Damien Charlotin)
Same pattern every time.
Nobody got sanctioned for using AI. They got sanctioned for not checking the output before filing it.
Is Your Firm Using AI Safely?
We help law firms build AI workflows with human review, confidentiality controls, and compliance built in from day one.
Book a Free Consultation
The ABA's Guidance: Existing Ethics Rules Still Apply
ABA Formal Opinion 512, issued July 29, 2024, does not invent new ethics rules for AI. It explains how the duties lawyers already have apply when they use generative AI tools. (Reuters)
Here is what it covers:
Competence
Lawyers must understand the benefits and risks of the technology they use. That does not mean every lawyer has to become a programmer. But it does mean lawyers should understand that generative AI can make mistakes, fabricate sources, mishandle confidential information, or produce incomplete work.
Confidentiality
Lawyers must protect client information. Before entering client facts, documents, names, financial details, medical records, discovery, or privileged communications into an AI tool, a firm should know how that tool stores, uses, and protects the data.
Communication
In some situations, lawyers may need to tell clients that AI is being used. Especially when AI materially affects the representation, the cost, the confidentiality risk, or how the client makes decisions about their case.
Candor to the Court
A lawyer cannot file false information with a court. That includes fake citations, inaccurate quotations, and legal arguments based on authorities the lawyer has not personally reviewed.
Supervision
Partners and supervising attorneys must make sure associates, paralegals, assistants, contractors, and vendors use AI in a way that complies with the lawyer's professional obligations.
In plain English: AI can assist the lawyer, but it cannot become the lawyer.
State Bars Are Moving in the Same Direction
State bars are also issuing guidance.
The State Bar of California has published practical guidance on the use of generative AI in legal practice. That guidance addresses issues such as competence, confidentiality, communication, candor, supervision, billing, and discrimination. It also warns lawyers to review AI outputs carefully and avoid entering confidential information into tools without proper safeguards. (The State Bar of California)
The New York State Bar Association released a report and recommendations on AI and the legal profession in 2024. Reuters reported that the NYSBA guidance encouraged lawyers to protect client confidentiality, avoid complete reliance on AI-generated information, consider client disclosure, and continue AI education. (New York State Bar Association)
Maryland guidance has also stressed the need for human fact-checking, confidentiality protection, and lawyer responsibility when using generative AI. (Maryland Daily Record)
The direction is consistent across jurisdictions: use AI if you want, but you own the result.
Practical AI Rules Every Law Firm Should Adopt
A law firm does not need a complicated AI policy to start.
But it does need a written policy.
At minimum, every law firm should have clear rules in these areas.
1. Decide Which AI Tools Are Approved
Do not leave every employee to pick whatever AI tool they find on Google.
Decide which tools are approved, which are allowed only for non-confidential work, and which are off-limits entirely.
Before approving any tool, ask:
- Does the tool store user prompts?
- Does the tool use user inputs for training?
- Can firm data be reviewed by humans?
- Is there an enterprise or legal-specific privacy setting?
- Can confidential information be entered safely?
- Does the tool provide audit logs?
- Can the firm control user access?
- What happens to uploaded documents?
Legal-specific AI platforms may offer stronger privacy, security, or workflow controls than consumer tools, but firms still need to review the actual vendor terms. Do not assume a tool is safe just because it is marketed to lawyers.
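For firms that want to make this vetting step concrete, the questions above can be recorded as a simple structured checklist per tool. The sketch below is illustrative only: the tool name, field names, and approval rule are hypothetical examples of how a firm might track its own answers, not a recommendation about any real product.

```python
# Illustrative sketch only: one way a firm might record its vendor-vetting
# answers for each AI tool. All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class ToolReview:
    name: str
    stores_prompts: bool          # Does the tool retain user prompts?
    trains_on_inputs: bool        # Are user inputs used for model training?
    human_review_of_data: bool    # Can vendor staff read firm data?
    has_enterprise_privacy: bool  # Enterprise/legal-specific privacy tier?
    provides_audit_logs: bool     # Can the firm audit who used it, and how?

    def approved_for_confidential_work(self) -> bool:
        """Conservative default: approve a tool for confidential client work
        only if every privacy answer is favorable."""
        return (not self.stores_prompts
                and not self.trains_on_inputs
                and not self.human_review_of_data
                and self.has_enterprise_privacy
                and self.provides_audit_logs)

# A consumer chatbot on default settings typically fails this test:
consumer_tool = ToolReview("GenericChatbot", True, True, True, False, False)
print(consumer_tool.approved_for_confidential_work())  # prints False
```

The design choice worth noting is the conservative default: a tool is treated as unapproved for confidential work unless every answer comes back favorable, which mirrors the bright-line approach the guidance above recommends.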
2. Ban Unverified AI Citations
This should be a bright-line rule.
No AI-generated case citation, statute, quotation, or legal proposition should be used unless a lawyer verifies it against a reliable source.
That means the lawyer or assigned reviewer must actually check the authority.
Asking the AI whether the case is real does not count.
Asking the AI to "double-check" does not count.
A clean-looking citation format does not mean the case exists.
If the authority is going into a court filing, demand letter, opinion letter, settlement analysis, or client advice, it needs human verification.
3. Keep Confidential Client Information Out of Public AI Tools
Unless the firm has confirmed that an AI tool is approved for confidential work, staff should not enter client information into it.
That includes:
- Client names
- Case facts
- Medical records
- Financial information
- Social Security numbers
- Discovery materials
- Settlement communications
- Privileged communications
- Internal strategy notes
- Court documents that are not public
- Personally identifying information
A safer rule is this:
Use public AI tools only with general, anonymized, or fictional information.
If the firm wants to use AI with real client files, it should use approved tools with reviewed privacy terms, access controls, and security protections.
4. Use AI for Drafting, Not Final Judgment
AI is useful for creating a starting point.
It can help draft:
- Client emails
- Intake scripts
- Internal checklists
- Discovery summaries
- Timeline summaries
- Deposition outlines
- Marketing content
- Workflow documentation
- First drafts of non-final documents
But the final judgment must come from a lawyer.
A lawyer decides what is accurate, what is relevant, what is privileged, what is strategic, and what gets cut.
AI can help organize legal work.
It cannot exercise legal judgment.
5. Train the Entire Team
AI risk is not limited to attorneys.
Paralegals, intake staff, marketing staff, legal assistants, contract workers, and outside vendors may all use AI in ways that affect the firm.
Training should cover:
- Which tools are approved
- What information cannot be entered
- How to spot hallucinations
- When attorney review is required
- How to verify citations
- How to handle confidential information
- How to report a mistake
- What uses are prohibited
Do not assume staff understand the risks. Most people see polished AI output and assume it must be accurate. That assumption has already cost firms tens of thousands of dollars in court sanctions.
6. Create a Review Process for Court Filings
Any filing that involved AI should go through a clear review process before submission.
Before hitting file, the responsible attorney should confirm:
- Every case exists
- Every citation is accurate
- Every quoted passage is correct
- Every legal proposition is supported
- Every statute or rule is current
- No confidential information was improperly entered into an AI tool
- The final document reflects the attorney's own judgment
Build this into the filing checklist. Not as an informal "someone should probably look at this" step. As an actual gate that stops a document from going to the court without sign-off.
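For firms that use checklist software or document-management workflows, the gate above can be modeled as a hard stop that blocks filing until every item has an explicit sign-off. This is a minimal sketch under stated assumptions: the checklist item wording is paraphrased from this article, and a real firm would adapt it to its own docketing system.

```python
# Illustrative sketch only: the pre-filing checklist expressed as a hard gate.
# Item wording is paraphrased from the checklist above; adapt as needed.
PRE_FILING_CHECKLIST = [
    "every cited case exists and was read by a human",
    "every citation is accurate",
    "every quoted passage matches the source",
    "every legal proposition is supported by its authority",
    "every statute or rule is current",
    "no confidential information was entered into an unapproved AI tool",
    "the final document reflects the signing attorney's own judgment",
]

def ready_to_file(signed_off: dict[str, bool]) -> bool:
    """Return True only if every checklist item has an explicit sign-off.
    A missing item counts as a failure, not a pass."""
    return all(signed_off.get(item, False) for item in PRE_FILING_CHECKLIST)

# A single unchecked item blocks filing:
partial = {item: True for item in PRE_FILING_CHECKLIST[:-1]}
print(ready_to_file(partial))  # prints False
partial[PRE_FILING_CHECKLIST[-1]] = True
print(ready_to_file(partial))  # prints True
```

The key behavior is that silence is failure: an item nobody signed off on is treated the same as an item that failed review, so the document cannot slip through an informal "someone probably checked it" gap.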
Need Help Building Your Firm's AI Policy?
Iron Noodle builds AI-powered systems for law firms with compliance, confidentiality, and human review built into every workflow.
Talk to Our Team
Should Lawyers Disclose AI Use to Clients?
Sometimes, yes.
Not every use of AI requires a separate client disclosure. For example, using AI to clean up an internal checklist or summarize non-confidential administrative notes may not require a specific conversation with the client.
But disclosure may be appropriate when AI use could materially affect:
- The scope of representation
- The cost of the work
- The confidentiality risk
- The method of providing legal services
- The client's decision-making
- The client's expectations
A practical approach is to include AI language in the engagement agreement or firm technology policy.
For example:
Our firm may use secure technology tools, including artificial intelligence tools, to assist with drafting, document review, workflow management, research support, and administrative tasks. Attorney judgment, review, and supervision remain part of the legal services provided. We do not use public AI tools with confidential client information unless appropriate safeguards are in place.
That language gives the firm flexibility while making clear that AI is a support tool, not a replacement for legal judgment.
A Simple AI Policy Framework for Law Firms
A practical law firm AI policy should answer these questions:
- What AI tools are approved?
- What AI tools are prohibited?
- What information may never be entered into AI tools?
- Who is allowed to use AI for legal work?
- What AI-assisted work requires attorney review?
- How are citations and legal authorities verified?
- When must AI use be disclosed to clients?
- How are staff trained?
- Who enforces the policy?
- What happens if an AI error is found?
This does not need to be a 50-page document.
But it does need to be written down.
An unwritten policy is not a policy. It is just a hope that everyone uses good judgment.
The Bottom Line
AI can help law firms move faster.
It can reduce administrative work, improve drafting speed, summarize information, and support better workflows.
But AI also creates risk when lawyers use it without review.
Banning AI entirely is not realistic. Ignoring it is not an option. Controlling it is.
Law firms should approve the right tools, protect client information, train their teams, verify legal output, and require attorney judgment before anything goes to a client, opposing counsel, or a court.
AI can assist legal work.
It cannot replace the lawyer's duty to get it right.
References
- ABA Formal Opinion 512 / ABA coverage of generative AI guidance for lawyers (Reuters)
- Noland v. Land of the Free, L.P., California Court of Appeal, sanctions for fabricated AI-generated citations (Justia)
- Chicago Sun-Times coverage of Jordan v. Chicago Housing Authority AI-related sanctions
- Reuters coverage of Lexos Media IP LLC v. Overstock.com Inc., District of Kansas sanctions
- Bloomberg Law coverage of Texas attorney Lawrence Chang sanction
- Damien Charlotin AI Hallucination Cases Database
- State Bar of California Generative AI Practical Guidance
- New York State Bar Association Task Force on Artificial Intelligence report
- Maryland State Bar Association AI guidance coverage (Maryland Daily Record)
- Legal Newsline coverage of Jordan v. Chicago Housing Authority sanctions order
- California State Bar Ethics & Technology Resources
- Reuters coverage of NYSBA AI guidance