
When Must Lawyers Disclose AI Use? Court Requirements for Legal Work

Last updated: Mar 10, 2026
Written by
Niko Pajkovic

The Mata v. Avianca case in June 2023 changed everything for lawyers using technology. A federal court sanctioned attorneys who submitted a brief containing fabricated case citations generated by an AI tool.

The lawyers failed to verify the authorities before filing and later admitted they had relied on AI-generated research that turned out to be entirely fictional. The incident quickly became a national flashpoint for judicial concern about AI in legal practice.

In the months that followed, judges and bar associations across the United States issued standing orders, local rules, and guidance addressing when (and how) lawyers must disclose the use of AI in court filings. But the rules vary dramatically across jurisdictions, leaving many practitioners uncertain about their obligations. 

This guide provides a practical overview of AI disclosure requirements for lawyers. We explain where mandatory disclosure rules apply, and offer clarity for transactional lawyers and in-house counsel using AI-assisted tools.

Key Takeaways

  • Courts handle AI disclosure differently. Some require specific certifications, while others have yet to mandate any new rules.
  • Required disclosures typically include naming the AI tool, noting which sections it helped with, and confirming you reviewed and verified its output.
  • Attorneys always remain fully responsible for all AI-generated content, regardless of disclosure requirements.


What Must Be Disclosed: Core Compliance Requirements

Courts that require AI disclosure tend to focus on a few practical details rather than broad explanations of how the technology works. The goal is to understand how AI was used and to preserve human accountability for the final work product.

Identifying the AI Tool Used

Several courts require lawyers to disclose which AI tool was used. Lawyers must specify "ChatGPT-4," "Claude," or "Spellbook" rather than generic references to "AI software." 

This requirement emphasizes transparency about the technology’s capabilities and limitations. Naming the tool helps the court assess the risk profile of the assistance used to prepare the work product.

Describing Which Sections Were AI-Generated

Some standing orders require lawyers to document which portions of a filing were drafted or assisted by AI. This can be as specific as "Sections II and III" or as general as "portions of legal research and analysis." It is a good practice to maintain internal attribution documentation in case the court requests clarification after filing.

Certifying Human Review and Verification

Some districts require attorneys to certify that they have verified every statement and citation in the filing, regardless of whether AI was used. Lawyers must review citations, quotations, and legal conclusions to validate the accuracy of AI output.

Timeline and Format Requirements

Disclosure must be transparent, documented, and conspicuous, not buried in footnotes. Some courts require certification at the time of filing, while others allow a concurrent notice or separate certification.

Why Courts Require AI Disclosure in Legal Filings

Judges did not begin issuing AI disclosure rules in a vacuum. These requirements emerged in response to real filing errors and growing concerns about accuracy and accountability.

The Mata v. Avianca Case and Its Aftermath

In Mata v. Avianca, an attorney filed a 10-page brief containing fake cases with invented citations and quotes, all generated by ChatGPT. When the court questioned the authorities, the attorney asked ChatGPT whether the cases were real, the tool falsely confirmed that they were, and the attorney stood by the filing.

Manhattan federal Judge P. Kevin Castel imposed a $5,000 fine on attorneys Steven Schwartz and Peter LoDuca, along with their firm Levidow, Levidow & Oberman. The case highlighted the dangers of unverified AI output and sparked a nationwide wave of judicial scrutiny.

AI "Hallucinations" and the Reliability Problem

AI systems can generate convincing but completely incorrect information, a phenomenon often called hallucination. Hallucinations can take the form of fabricated authorities, incorrect case summaries, or even fictional procedural details.

Judges have repeatedly emphasized that AI should only support lawyers, not replace their professional judgment. No matter how advanced they are, these tools aren't built to handle the precision and nuance that legal work demands.

Candor to the Tribunal Under Rule 11

Rule 11 requires attorneys to certify that filings have evidentiary support and are legally grounded, and courts, including the Fifth Circuit, have made it clear that "I used AI" is not a defense to a Rule 11 violation. If you practice in a jurisdiction with a standing order on AI (common in many Texas districts and the Northern District of Illinois), you must inform the court of any AI use, typically through a signed AI certification.

If there is no specific local AI rule, you typically do not have to affirmatively state "I used AI." However, you must be able to show a "reasonable inquiry" into the law. As of 2026, failing to check an AI-generated citation is almost universally treated as a failure of reasonable inquiry.

Which Courts Require AI Disclosure? A Jurisdiction-by-Jurisdiction Guide

Disclosure rules are not uniform; they often vary by court, or even by individual judge, rather than following a nationwide standard. Some judges require explicit certification when AI is used, while others rely on existing procedural and ethical obligations. Below, we explain how courts across the United States are approaching AI disclosure today.

Federal Courts With Mandatory Disclosure Rules

Texas (Northern District)

Judge Brantley Starr requires lawyers to certify, at the time they appear before the court, whether generative AI was used in preparing filings. If AI tools were used, lawyers must confirm that a human verified all statements and citations. The court’s concern is accuracy and that the responsibility remains with counsel. 

Texas (Eastern District)

Rather than imposing a strict certification rule, the Eastern District of Texas amended its local rules to remind lawyers that AI use does not change their Rule 11 obligations. The court emphasizes a "trust, but verify" approach, especially when technology is involved. 

Pennsylvania (Eastern District)

Judge Michael Baylson requires lawyers to affirmatively disclose any use of generative AI in preparing court filings. The disclosure must clearly explain that AI was involved and confirm that all citations and legal authorities have been verified as accurate.

New Jersey (District of New Jersey)

In the District of New Jersey, Judge Evelyn Padin requires disclosure whenever AI is used in connection with court submissions. Lawyers must identify the specific tool used, describe which parts of the filing were affected, and certify that a human attorney reviewed the content.

North Carolina (Western District & State Courts)

The Charlotte Division's June 2024 standing order requires certification that either no generative AI was used (with exceptions for standard legal research platforms like Westlaw and Lexis) or that every statement and citation was verified by a human. This effectively restricts the use of ChatGPT-type tools without prior court approval. These rules apply to both lawyers and pro se litigants.

Illinois (Federal Courts vs. State Courts)

Judges in the Northern District of Illinois have adopted differing approaches. Some magistrate judges require disclosure if AI was used for legal research or drafting, while others emphasize that reliance on AI does not excuse errors under Rule 11. 

At the same time, the Illinois Supreme Court has discouraged state judges from imposing blanket disclosure requirements, signaling a more permissive stance at the state level.

California (Northern District)

Judges in the Northern District of California generally allow AI use but emphasize careful documentation and review. Magistrate Judge Kang requires clear identification of AI-assisted documents through notation in the title, preliminary table, or separate concurrent notice. Judge Rita F. Lin similarly allows AI use but emphasizes that attorneys "alone bear ethical responsibility" for all filing statements.

Michigan (Eastern District - Proposed)

The Eastern District of Michigan has proposed a local rule requiring lawyers to disclose their use of generative AI in drafting court filings. Under the proposal, attorneys would need to certify that all citations were verified and that AI-generated language was reviewed for accuracy. 

Courts With General Guidance (No Specific Disclosure Mandate)

Some courts, including the Fifth Circuit, have considered AI disclosure rules but declined to adopt them. These courts generally rely on existing ethical duties and procedural rules to address AI-related risks. 

Their position is that AI use does not reduce or alter those obligations in any meaningful way, as lawyers are already required to ensure accuracy and candor.


Sample Disclosure Language and Certification Templates

Courts that require AI disclosure generally focus on whether lawyers clearly explain AI involvement and confirm human oversight. Please note that the samples below are for general guidance only; adapt them to match local rules, standing orders, or judge-specific requirements.

Standard Certification Template (Texas-Style)

"This filing was prepared with the assistance of an artificial intelligence-based tool, [AI Tool Name]. The undersigned attorney certifies that all content, including factual statements and legal citations, was independently reviewed and verified by a human attorney prior to filing. The attorney remains fully responsible for the accuracy and substance of this submission."

Judge-Specific Templates

"Pursuant to [Judge Name]’s standing order, the undersigned certifies that generative AI was [used / not used] in preparing this filing. If used, all AI-assisted content was reviewed, edited, and verified for accuracy by a licensed attorney prior to submission."

Internal Firm Templates

Many firms also maintain internal records to support compliance if disclosure is later requested. These templates are often embedded in filing checklists or maintained alongside drafting tools, such as a firm’s clause library, and include:

  • Internal AI Use Record (Not Filed Unless Required)
  • AI Tool Used: [Tool Name]
  • Date of Use: [Date]
  • Sections Assisted: [Sections or Description]
  • Verifying Attorney: [Name]

Ethical Obligations Beyond Court Requirements

Even when courts do not impose specific AI disclosure rules, lawyers must still adhere to professional standards. Disclosure alone isn't enough to replace legal judgment and careful supervision.

Model Rule 1.1: Duty of Competence

Lawyers must understand the tools and AI systems they rely on, including their limitations and potential risks. Using AI without knowing how it works or how to check its output can raise ethical concerns. 

Model Rule 1.6: Confidentiality Obligations

A frequent worry is whether these AI tools protect confidential information. Jumping into using AI tools without understanding potential privacy issues can expose sensitive client details to third parties. Check how an AI tool stores and uses client data. Does that information get saved or shared? Who can access it?

Model Rule 5.1: Supervisory Responsibilities

Supervisory responsibilities apply as well. Partners and managers must ensure that AI-assisted work is properly reviewed and that junior lawyers are trained on safe use. This means establishing clear guidelines and training requirements.

Model Rule 3.3: Candor to the Tribunal

Lawyers must not submit false or misleading information to a court, even unintentionally. If AI-assisted content contains errors, lawyers have a duty to correct them as soon as possible. "Knowingly" includes willful blindness. You cannot ignore red flags in AI output.

Consequences of Non-Disclosure or Inadequate Disclosure

Whether or not a court formally requires disclosure, undisclosed or poorly managed AI use can create real problems, both immediately and down the road, including:

Discovery of Undisclosed AI Use

Opposing counsel may uncover AI involvement through document metadata, drafting patterns, or internal records. When AI use surfaces unexpectedly, courts may question the lawyer’s transparency, even if the filing itself was accurate.

Rule 11 Sanctions and Monetary Penalties

Courts may strike filings, impose monetary sanctions, or order attorneys to pay opposing counsel’s fees. Judges may also refer lawyers to disciplinary authorities.

Malpractice and Professional Discipline Risks

Failing to maintain proper oversight of AI tools can lead to malpractice lawsuits or complaints to your state bar. Complying with disclosure regimes reduces professional liability risk. Clients may also lose confidence in you if they discover you used AI without proper checks and supervision.

Case-Specific Consequences

In certain situations, judges may view your arguments more skeptically, limit what you can argue, or watch your future filings more closely. These consequences can hurt both your present case and your professional standing going forward.

How Spellbook Helps Lawyers Meet Disclosure Requirements

For transactional lawyers and in-house counsel, Spellbook offers the same quality controls and professional standards that apply to any legal drafting process. You just work faster and more efficiently. Spellbook is:

Built for compliance

  • Embedded directly in Word, with no external interfaces
  • Work product stays under attorney control at all times.

Clear documentation

  • Audit trails support the maintenance of internal records.
  • AI use can be documented when certification is required.

Attorney-controlled output

  • Spellbook suggests. You review, edit, and decide.
  • Legal judgment and responsibility remain with you.

Enterprise-grade security

  • Zero data retention policies and legal-grade data security measures
  • Protects confidentiality and attorney-client privilege

Learn how Spellbook simplifies AI disclosure compliance.


Frequently Asked Questions

Do I Need to Disclose AI Use if I Used It for Research, Not Drafting?

Yes, sometimes. Several courts treat AI-assisted research the same as drafting and require disclosure if AI influenced the filing. When guidance is unclear, disclosing substantive AI involvement is generally the safest approach.

What if My Jurisdiction Hasn't Issued Any AI Disclosure Rules?

Even without specific rules, lawyers must still comply with ethical rules such as Rule 11. Courts expect accuracy, verification, and candor regardless of AI use. Lawyers should monitor judge-specific orders and be prepared to disclose if circumstances change. Some insurance companies are now making AI-related malpractice coverage contingent on the firm having a written "AI Use Policy," regardless of whether the state bar requires one.

Can I Use a General Disclaimer Instead of a Case-Specific Certification?

Typically, no. Most courts that require AI disclosure expect certification tied to the specific filing. General firm policies or engagement-letter language typically do not satisfy disclosure requirements.

Are AI-Enabled Research Platforms Like Westlaw Subject to Disclosure?

It depends on how a court defines “generative AI.” Some judges exclude established legal research platforms, while others focus on whether the tool generates text or analysis. When a rule is ambiguous, disclosure or clarification is recommended.

What Happens if I Discover AI-Generated Errors After Filing?

Lawyers must promptly correct the record. Courts expect immediate notice, explanation of the error, and submission of corrected information. Early, voluntary correction can reduce the risk of sanctions or adverse credibility findings.

Do Disclosure Requirements Apply to Pro Se Litigants?

Often, yes. Courts have explicitly applied AI disclosure and verification requirements to pro se parties. While judges may allow some flexibility, accuracy and honesty remain mandatory, and AI use does not excuse false or unsupported filings.
