The Mata v. Avianca case in June 2023 changed everything for lawyers using technology. A federal court sanctioned attorneys who submitted a brief containing fabricated case citations generated by an AI tool.
The lawyers failed to verify the authorities before filing and later admitted they had relied on AI-generated research that turned out to be entirely fictional. The incident quickly became a national flashpoint for judicial concern about AI in legal practice.
In the months that followed, judges and bar associations across the United States issued standing orders, local rules, and guidance addressing when (and how) lawyers must disclose the use of AI in court filings. But the rules vary dramatically across jurisdictions, leaving many practitioners uncertain about their obligations.
This guide provides a practical overview of AI disclosure requirements for lawyers, explaining where mandatory disclosure rules apply and offering clarity for transactional lawyers and in-house counsel using AI-assisted tools.
[cta-1]
Courts that require AI disclosure tend to focus on a few practical details rather than broad explanations of how the technology works. The goal is to understand how AI was used and to preserve human accountability for the final work product.
Several courts require lawyers to disclose which AI tool was used. Lawyers must specify "ChatGPT-4," "Claude," or "Spellbook" rather than generic references to "AI software."
This requirement emphasizes transparency about the technology’s capabilities and limitations. Naming the tool helps the court assess the risk profile of the assistance used to prepare the work product.
Some standing orders require lawyers to document which portions of a filing were drafted or assisted by AI. This can be as specific as "Sections II and III" or as general as "portions of legal research and analysis." It is a good practice to maintain internal attribution documentation in case the court requests clarification after filing.
Some districts require attorneys to certify that they have verified every statement and citation in a filing, regardless of whether AI was used. Lawyers must review citations, quotations, and legal conclusions to confirm the accuracy of any AI output.
Disclosure must be transparent, documented, and conspicuous, not buried in footnotes. Some courts require certification at the time of filing, while others allow a concurrent notice or separate certification.
Judges did not begin issuing AI disclosure rules in a vacuum. These requirements emerged in response to real filing errors and growing concerns about accuracy and accountability.
In Mata v. Avianca, an attorney’s 10-page, ChatGPT-assisted brief cited fake cases with invented citations and quotes. When the attorney asked ChatGPT to confirm the authorities, the tool falsely assured him the cases were real, and he filed the brief without independently checking them.
Manhattan federal Judge P. Kevin Castel imposed a $5,000 fine on attorneys Steven Schwartz and Peter LoDuca, along with their firm Levidow, Levidow & Oberman. The case highlighted the dangers of unverified AI output and sparked a nationwide wave of judicial scrutiny.
AI systems can generate convincing but completely incorrect information, a phenomenon often called hallucination. These errors can take the form of fabricated authorities, incorrect case summaries, or even fictional procedural details.
Judges have repeatedly emphasized that AI should only support lawyers, not replace their professional judgment. No matter how advanced they are, these tools aren't built to handle the precision and nuance that legal work demands.
Rule 11 requires attorneys to certify that filings have evidentiary support and are legally grounded. Courts, including the Fifth Circuit, have made it clear that "I used AI" is not a defense to a Rule 11 violation. If you practice in a jurisdiction with a standing order on AI (common in many Texas districts and the Northern District of Illinois), you must inform the court, typically through a signed AI certification.
If there is no specific local AI rule, you typically do not have to affirmatively state that you used AI. However, you must be able to demonstrate a "reasonable inquiry" into the law, and courts now widely treat failing to check an AI-generated citation as a failure of reasonable inquiry.
State bar rules are not uniform and often vary by court or judge rather than by a nationwide standard. Some judges require explicit certification when AI is used, while others rely on existing procedural and ethical obligations. Below, we explain how courts across the United States are approaching AI disclosure today.
Texas (Northern District)
Judge Brantley Starr requires lawyers to certify, at the time they appear before the court, whether generative AI was used in preparing filings. If AI tools were used, lawyers must confirm that a human verified all statements and citations. The court’s concern is accuracy and that the responsibility remains with counsel.
Texas (Eastern District)
Rather than imposing a strict certification rule, the Eastern District of Texas amended its local rules to remind lawyers that AI use does not change their Rule 11 obligations. The court emphasizes a "trust, but verify" approach, especially when technology is involved.
Pennsylvania (Eastern District)
Judge Michael Baylson requires lawyers to affirmatively disclose any use of generative AI in preparing court filings. The disclosure must clearly explain that AI was involved and confirm that all citations and legal authorities have been verified as accurate.
New Jersey (District of New Jersey)
In the District of New Jersey, Judge Evelyn Padin requires disclosure whenever AI is used in connection with court submissions. Lawyers must identify the specific tool used, describe which parts of the filing were affected, and certify that a human attorney reviewed the content.
North Carolina (Western District & State Courts)
The Charlotte Division's June 2024 standing order requires certification that either no generative AI was used (with exceptions for standard legal research platforms like Westlaw and Lexis) or that every statement and citation was verified by a human. This effectively restricts the use of ChatGPT-type tools without prior court approval. These rules apply to both lawyers and pro se litigants.
Illinois (Federal Courts vs. State Courts)
Judges in the Northern District of Illinois have adopted differing approaches. Some magistrate judges require disclosure if AI was used for legal research or drafting, while others emphasize that reliance on AI does not excuse errors under Rule 11.
At the same time, the Illinois Supreme Court has discouraged state judges from imposing blanket disclosure requirements, signaling a more permissive stance at the state level.
California (Northern District)
Judges in the Northern District of California generally allow AI use but emphasize careful documentation and review. Magistrate Judge Kang requires clear identification of AI-assisted documents through notation in the title, preliminary table, or separate concurrent notice. Judge Rita F. Lin similarly allows AI use but emphasizes that attorneys "alone bear ethical responsibility" for all filing statements.
Michigan (Eastern District - Proposed)
The Eastern District of Michigan has proposed a local rule requiring lawyers to disclose their use of generative AI in drafting court filings. Under the proposal, attorneys would need to certify that all citations were verified and that AI-generated language was reviewed for accuracy.
Some courts, including the Fifth Circuit, have considered AI disclosure rules but declined to adopt them. These courts generally rely on existing ethical duties and procedural rules to address AI-related risks.
Their position is that AI use does not reduce or alter those obligations in any meaningful way, as lawyers are already required to ensure accuracy and candor.
[cta-2]
Courts that require AI disclosure generally focus on whether lawyers clearly explain AI involvement and confirm human oversight. The samples below are for general guidance only; adapt them to match local rules, standing orders, or judge-specific requirements.
"This filing was prepared with the assistance of an artificial intelligence-based tool, [AI Tool Name]. The undersigned attorney certifies that all content, including factual statements and legal citations, was independently reviewed and verified by a human attorney prior to filing. The attorney remains fully responsible for the accuracy and substance of this submission."
"Pursuant to [Judge Name]’s standing order, the undersigned certifies that generative AI was [used / not used] in preparing this filing. If used, all AI-assisted content was reviewed, edited, and verified for accuracy by a licensed attorney prior to submission."
Many firms also maintain internal records to support compliance if disclosure is later requested. These templates are often embedded in filing checklists or maintained alongside drafting tools, such as a firm's clause library.
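As a rough sketch of what such an internal record might look like, the snippet below models a per-filing AI-use entry and serializes it to CSV for a filing checklist. The field names are illustrative assumptions, not drawn from any court rule or vendor product.

```python
# Hypothetical per-filing AI-use record a firm might keep for compliance;
# all field names are illustrative, not prescribed by any court.
import csv
import io
from dataclasses import dataclass, asdict, fields

@dataclass
class AIUseRecord:
    filing: str    # case caption or docket entry
    tool: str      # the specific tool used, e.g. "Spellbook"
    sections: str  # which portions of the filing were AI-assisted
    reviewer: str  # attorney who verified statements and citations
    verified: bool # True once every citation and quotation is checked

def to_csv(records: list[AIUseRecord]) -> str:
    """Serialize records to CSV text for inclusion in a filing checklist."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(AIUseRecord)])
    writer.writeheader()
    for record in records:
        writer.writerow(asdict(record))
    return buf.getvalue()
```

A plain CSV (or a shared spreadsheet) keeps the record auditable without special software, which matters if a court later asks for attribution details.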
Ethical Obligations Beyond Court Requirements
Even when courts do not impose specific AI disclosure rules, lawyers must still adhere to professional standards. Disclosure alone isn't enough to replace legal judgment and careful supervision.
Lawyers must understand the tools and AI systems they rely on, including their limitations and potential risks. Using AI without knowing how it works or how to check its output can raise ethical concerns.
A frequent worry is whether AI tools protect confidential information. Adopting a tool without understanding its privacy implications can expose sensitive client details to third parties. Check how the tool stores and uses client data: does that information get saved or shared, and who can access it?
Supervisory responsibilities apply as well. Partners and managers must ensure that AI-assisted work is properly reviewed and that junior lawyers are trained on safe use. This means establishing clear guidelines and training requirements.
Lawyers must not submit false or misleading information to a court, even unintentionally. If AI-assisted content contains errors, lawyers have a duty to correct them as soon as possible. "Knowingly" includes willful blindness. You cannot ignore red flags in AI output.
Consequences of Non-Disclosure or Inadequate Disclosure
Whether or not a court formally requires disclosure of AI use, undisclosed or poorly managed AI use can create real problems, both immediately and down the road, including:
Opposing counsel may uncover AI involvement through document metadata, drafting patterns, or internal records. When AI use surfaces unexpectedly, courts may question the lawyer’s transparency, even if the filing itself was accurate.
Courts may strike filings, impose monetary sanctions, or order attorneys to pay opposing counsel’s fees. Judges may also refer lawyers to disciplinary authorities.
Failing to maintain proper oversight of AI tools can lead to malpractice lawsuits or complaints to your state bar; following disclosure requirements and documenting your review reduces that professional liability risk. Clients may also lose confidence in you if they find out you used AI without proper checks and supervision.
In certain situations, judges may view your arguments more skeptically, limit what you can argue, or watch your future filings more closely. These consequences can hurt both your present case and your professional standing going forward.
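To make the metadata point above concrete: a .docx file is an ordinary ZIP archive, and its docProps/core.xml part records fields such as the author, the last editor, and the revision count, which is one way drafting history can surface unexpectedly. A minimal standard-library sketch (the file name is hypothetical) that reads those fields:

```python
# Minimal sketch: a .docx file is a ZIP archive whose docProps/core.xml
# part stores author, last-editor, and revision metadata.
import zipfile
import xml.etree.ElementTree as ET

def docx_core_properties(path: str) -> dict:
    """Return the core metadata fields stored inside a .docx file."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    # Strip XML namespaces so keys read as plain field names
    return {elem.tag.split("}")[-1]: (elem.text or "") for elem in root}

# Example (hypothetical file name):
# props = docx_core_properties("brief_draft.docx")
# props.get("creator"), props.get("lastModifiedBy"), props.get("revision")
```

Reviewing these fields before filing is a cheap sanity check on what a document discloses about its own drafting process.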
For transactional lawyers and in-house counsel, Spellbook offers the same quality controls and professional standards that apply to any legal drafting process. You just work faster and more efficiently. Spellbook is:
Built for compliance
Clear documentation
Attorney-controlled output
Enterprise-grade security
Learn how Spellbook simplifies AI disclosure compliance.
[cta-3]
Yes, sometimes. Several courts treat AI-assisted research the same as drafting and require disclosure if AI influenced the filing. When guidance is unclear, disclosing substantive AI involvement is generally the safest approach.
Even without specific rules, lawyers must still comply with ethical rules such as Rule 11. Courts expect accuracy, verification, and candor regardless of AI use. Lawyers should monitor judge-specific orders and be prepared to disclose if circumstances change. Some insurance companies are now making AI-related malpractice coverage contingent on the firm having a written "AI Use Policy," regardless of whether the state bar requires one.
Typically, no. Most courts that require AI disclosure expect certification tied to the specific filing. General firm policies or engagement-letter language typically do not satisfy disclosure requirements.
It depends on how a court defines “generative AI.” Some judges exclude established legal research platforms, while others focus on whether the tool generates text or analysis. When a rule is ambiguous, disclosure or clarification is recommended.
Lawyers must promptly correct the record. Courts expect immediate notice, explanation of the error, and submission of corrected information. Early, voluntary correction can reduce the risk of sanctions or adverse credibility findings.
Often, yes. Courts have explicitly applied AI disclosure and verification requirements to pro se parties. While judges may allow some flexibility, accuracy and honesty remain mandatory, and AI use does not excuse false or unsupported filings.