Ethical Use of AI in Law Firms: How Lawyers Can Use AI Without Creating Ethics Problems

Published April 25, 2026 | By Iron Noodle Team | 15 min read

AI is already part of legal work.

Lawyers are using tools like ChatGPT, Gemini, Copilot, and legal-specific AI platforms to draft first versions of documents, summarize records, organize facts, prepare client communications, review discovery, and improve internal workflows.

That does not mean AI is replacing lawyers.

It means lawyers now need to understand how to use these tools safely.

Using AI is not the risk. Using AI output without checking it is. Courts have already sanctioned attorneys for filing documents that included fabricated cases, false quotations, and legal citations nobody bothered to verify. Bar associations are making the same point: AI can assist with legal work, but it does not erase a lawyer's professional duties. (Reuters)

For law firms, the question is no longer whether AI will be used. It already is.

What matters now is whether your firm has clear rules before someone makes an expensive mistake.

The Real Problem: AI Can Sound Correct Even When It Is Wrong

Generative AI tools are very good at producing polished, confident writing.

Useful for first drafts, summaries, outlines, checklists, and internal brainstorming. Dangerous when the tool is asked to provide case law, quote legal authority, or support a legal argument without human review.

AI tools can produce "hallucinations." In legal work, that can mean:

  - Citations to cases that do not exist
  - Quotations no court ever wrote
  - Statutes or holdings described inaccurately
  - Arguments built on authority that collapses under checking

And the output looks professional even when it is dead wrong.

Lawyers cannot rely on AI-generated legal research without independently verifying it. In Noland v. Land of the Free, L.P., the California Court of Appeal sanctioned counsel $10,000 after appellate briefs included fabricated AI-generated citations. (Justia)

Simple rule: if a lawyer signs it, files it, sends it, or relies on it, the lawyer owns it.

Courts Are Already Sanctioning Lawyers for AI Mistakes

Recent cases show how quickly AI misuse can turn into a professional responsibility issue.

In Illinois, a Cook County judge sanctioned a lawyer and law firm nearly $60,000 after a court filing included a fake case generated by ChatGPT: $49,500 against the firm and $10,000 against the attorney. (Chicago Sun-Times)

In Kansas, a federal judge fined five lawyers a total of $12,000 after filings in a patent case included AI-generated false quotations and citations. The court emphasized that the attorneys who signed the filings had a responsibility to verify them, even though only one lawyer used ChatGPT to prepare the material. (Reuters)

In Texas, attorney Lawrence Chang was ordered to pay $2,000 in attorneys' fees and expenses and write a letter explaining where his AI use went wrong and how he would prevent the issue in the future. (Bloomberg Law)

In California, the appellate court in Noland imposed a $10,000 sanction for fabricated AI-generated citations. (Justia)

And this problem is still growing. Public tracking efforts now list many legal matters involving alleged or confirmed AI-generated hallucinations in court filings. Damien Charlotin's AI Hallucination Cases Database tracks matters where courts or tribunals addressed AI hallucinations in more than a passing way. (Damien Charlotin)

Same pattern every time.

Nobody got sanctioned for using AI. They got sanctioned for not checking the output before filing it.

Is Your Firm Using AI Safely?

We help law firms build AI workflows with human review, confidentiality controls, and compliance built in from day one.

Book a Free Consultation

The ABA's Guidance: Existing Ethics Rules Still Apply

ABA Formal Opinion 512, issued July 29, 2024, does not invent new ethics rules for AI. It explains how the duties lawyers already have apply when they use generative AI tools. (Reuters)

Here is what it covers:

Competence

Lawyers must understand the benefits and risks of the technology they use. That does not mean every lawyer has to become a programmer. But it does mean lawyers should understand that generative AI can make mistakes, fabricate sources, mishandle confidential information, or produce incomplete work.

Confidentiality

Lawyers must protect client information. Before entering client facts, documents, names, financial details, medical records, discovery, or privileged communications into an AI tool, a firm should know how that tool stores, uses, and protects the data.

Communication

In some situations, lawyers may need to tell clients that AI is being used. Especially when AI materially affects the representation, the cost, the confidentiality risk, or how the client makes decisions about their case.

Candor to the Court

A lawyer cannot file false information with a court. That includes fake citations, inaccurate quotations, and legal arguments based on authorities the lawyer has not personally reviewed.

Supervision

Partners and supervising attorneys must make sure associates, paralegals, assistants, contractors, and vendors use AI in a way that complies with the lawyer's professional obligations.

In plain English: AI can assist the lawyer, but it cannot become the lawyer.

State Bars Are Moving in the Same Direction

State bars are also issuing guidance.

The State Bar of California has published practical guidance on the use of generative AI in legal practice. That guidance addresses issues such as competence, confidentiality, communication, candor, supervision, billing, and discrimination. It also warns lawyers to review AI outputs carefully and avoid entering confidential information into tools without proper safeguards. (The State Bar of California)

The New York State Bar Association released a report and recommendations on AI and the legal profession in 2024. Reuters reported that the NYSBA guidance encouraged lawyers to protect client confidentiality, avoid complete reliance on AI-generated information, consider client disclosure, and continue AI education. (New York State Bar Association)

Maryland guidance has also stressed the need for human fact-checking, confidentiality protection, and lawyer responsibility when using generative AI. (Maryland Daily Record)

The direction is consistent across jurisdictions: use AI if you want, but you own the result.

Practical AI Rules Every Law Firm Should Adopt

A law firm does not need a complicated AI policy to start.

But it does need a written policy.

At minimum, every law firm should have clear rules in these areas.

1. Decide Which AI Tools Are Approved

Do not leave every employee to pick whatever AI tool they find on Google.

Decide which tools are approved, which are allowed only for non-confidential work, and which are off-limits entirely.

Before approving any tool, ask:

  - Does the vendor use your inputs to train its models?
  - How is data stored, and who can access it?
  - Do the vendor's terms address confidentiality and security?
  - Is the tool appropriate for confidential client work, or only for general tasks?

Legal-specific AI platforms may offer stronger privacy, security, or workflow controls than consumer tools, but firms still need to review the actual vendor terms. Do not assume a tool is safe just because it is marketed to lawyers.

2. Ban Unverified AI Citations

This should be a bright-line rule.

No AI-generated case citation, statute, quotation, or legal proposition should be used unless a lawyer verifies it against a reliable source.

That means the lawyer or assigned reviewer must actually check the authority.

Asking the AI whether the case is real does not count.

Asking the AI to "double-check" does not count.

A clean-looking citation format does not mean the case exists.

If the authority is going into a court filing, demand letter, opinion letter, settlement analysis, or client advice, it needs human verification.

3. Keep Confidential Client Information Out of Public AI Tools

Unless the firm has confirmed that an AI tool is approved for confidential work, staff should not enter client information into it.

That includes:

  - Client names and identifying details
  - Financial details and medical records
  - Discovery materials
  - Privileged or confidential communications

A safer rule is this:

Use public AI tools only with general, anonymized, or fictional information.

If the firm wants to use AI with real client files, it should use approved tools with reviewed privacy terms, access controls, and security protections.

4. Use AI for Drafting, Not Final Judgment

AI is useful for creating a starting point.

It can help draft:

  - First versions of letters and documents
  - Summaries of records
  - Outlines and checklists
  - Internal notes and workflow documents

But the final judgment must come from a lawyer.

A lawyer decides what is accurate, what is relevant, what is privileged, what is strategic, and what gets cut.

AI can help organize legal work.

It cannot exercise legal judgment.

5. Train the Entire Team

AI risk is not limited to attorneys.

Paralegals, intake staff, marketing staff, legal assistants, contract workers, and outside vendors may all use AI in ways that affect the firm.

Training should cover:

  - Which AI tools are approved and which are off-limits
  - What client information may never be entered into AI tools
  - How AI-generated citations and facts must be verified
  - Who to ask when a use case is unclear

Do not assume staff understand the risks. Most people see polished AI output and assume it must be accurate. That assumption has already cost firms tens of thousands of dollars in sanctions.

6. Create a Review Process for Court Filings

Any filing that involved AI should go through a clear review process before submission.

Before hitting file, the responsible attorney should confirm:

  - Every citation has been checked against a reliable source
  - Every quotation matches the actual authority
  - The factual statements are accurate
  - A lawyer has reviewed and signed off on the final document

Build this into the filing checklist. Not as an informal "someone should probably look at this" step. As an actual gate that stops a document from going to the court without sign-off.

Need Help Building Your Firm's AI Policy?

Iron Noodle builds AI-powered systems for law firms with compliance, confidentiality, and human review built into every workflow.

Talk to Our Team

Should Lawyers Disclose AI Use to Clients?

Sometimes, yes.

Not every use of AI requires a separate client disclosure. For example, using AI to clean up an internal checklist or summarize non-confidential administrative notes may not require a specific conversation with the client.

But disclosure may be appropriate when AI use could materially affect:

  - The representation itself
  - The cost of the work
  - The confidentiality of client information
  - How the client makes decisions about the case

A practical approach is to include AI language in the engagement agreement or firm technology policy.

For example:

Our firm may use secure technology tools, including artificial intelligence tools, to assist with drafting, document review, workflow management, research support, and administrative tasks. Attorney judgment, review, and supervision remain part of the legal services provided. We do not use public AI tools with confidential client information unless appropriate safeguards are in place.

That language gives the firm flexibility while making clear that AI is a support tool, not a replacement for legal judgment.

A Simple AI Policy Framework for Law Firms

A practical law firm AI policy should answer these questions:

  1. What AI tools are approved?
  2. What AI tools are prohibited?
  3. What information may never be entered into AI tools?
  4. Who is allowed to use AI for legal work?
  5. What AI-assisted work requires attorney review?
  6. How are citations and legal authorities verified?
  7. When must AI use be disclosed to clients?
  8. How are staff trained?
  9. Who enforces the policy?
  10. What happens if an AI error is found?

This does not need to be a 50-page document.

But it does need to be written down.

An unwritten policy is not a policy. It is just a hope that everyone uses good judgment.

The Bottom Line

AI can help law firms move faster.

It can reduce administrative work, improve drafting speed, summarize information, and support better workflows.

But AI also creates risk when lawyers use it without review.

Banning AI entirely is not realistic. Ignoring it is not an option. Controlling it is.

Law firms should approve the right tools, protect client information, train their teams, verify legal output, and require attorney judgment before anything goes to a client, opposing counsel, or a court.

AI can assist legal work.

It cannot replace the lawyer's duty to get it right.

References

  1. ABA Formal Opinion 512 / ABA coverage of generative AI guidance for lawyers (Reuters)
  2. Noland v. Land of the Free, L.P., California Court of Appeal, sanctions for fabricated AI-generated citations (Justia)
  3. Chicago Sun-Times coverage of Jordan v. Chicago Housing Authority AI-related sanctions
  4. Reuters coverage of Lexos Media IP LLC v. Overstock.com Inc., District of Kansas sanctions
  5. Bloomberg Law coverage of Texas attorney Lawrence Chang sanction
  6. Damien Charlotin AI Hallucination Cases Database
  7. State Bar of California Generative AI Practical Guidance
  8. New York State Bar Association Task Force on Artificial Intelligence report
  9. Maryland State Bar Association AI guidance coverage (Maryland Daily Record)
  10. Legal Newsline coverage of Jordan v. Chicago Housing Authority sanctions order
  11. California State Bar Ethics & Technology Resources
  12. Reuters coverage of NYSBA AI guidance

Want to See How AI Works for Your Firm?

Book a free consultation and we will show you exactly where your firm is leaving money on the table.

Book Consultation