Mandatory AI Disclosures: Enforcing A Uniform Standard

By Taylor Hella

            As the use of generative artificial intelligence (AI) expands, society will experience both significant benefits and serious repercussions, underscoring the need for legal mandates that strike a workable balance. In 2023, an attorney submitted a brief filled with fabricated case law generated by ChatGPT.[1] Judge Brantley Starr of the Northern District of Texas issued an order requiring attorneys to certify either that no portion of their filing relied on generative AI or that any AI-drafted language had been independently verified with traditional legal research tools.[2] Scholars observe that such orders reflect the growing unease with unchecked AI in legal practice.[3] This reaction marked the beginning of a broader trend: courts are moving toward mandatory disclosure of AI use to preserve the legitimacy of filings.[4] That approach should not remain confined to scattered local rules. The legal profession now needs a uniform standard. Only a consistent, nationwide disclosure regime will ensure accuracy, accountability, and transparency as AI reshapes the practice of law.

            The Model Rules of Professional Conduct impose a longstanding duty on lawyers to disclose adverse authority, correct material misstatements, and avoid misleading courts through omission.[5] Mandatory AI disclosure is not a departure from these duties but their logical extension. Just as courts historically demanded clarity about sources of law, they now have reason to demand clarity about whether non-human systems contributed to the drafting of legal work. James Coben’s research on mediation confidentiality shows how disclosure frameworks balance transparency with competing values, and AI regulation will require similar calibration.[6]

            Today, the framework governing AI remains fragmented among courts and legislatures. The Northern District of Texas requires a statement on the first page of any AI-assisted filing.[7] Missouri’s 20th Judicial Circuit demands disclosure of the specific AI tool used.[8] Washington’s Clallam County District Court insists that attorneys certify the role of AI in filings.[9] Legislatures are now following suit. California requires providers to tell patients when AI generates clinical communications.[10] Starting in February 2026, Colorado will mandate disclosure whenever consumers interact with AI, unless the fact is obvious to a reasonable person.[11] Utah imposes disclosure requirements on both law enforcement and regulated services.[12] These rules show that disclosure is no longer experimental, but they also reveal inconsistency. A lawyer practicing across states faces a compliance maze, undermining predictability and frustrating the very goals of regulation.

            Judges justify disclosure rules on three grounds. First, accuracy: human review of submissions remains essential to keep AI-generated hallucinations from reaching the courts.[13] Second, accountability: disclosure clarifies that responsibility rests with the attorney, not the software.[14] Third, transparency: clients and courts deserve to know when AI has shaped filings.[15]

            These rationales mirror ethical frameworks beyond law. The Belmont Report identifies respect for persons, beneficence, and justice as guiding principles for the protection of human subjects in research.[16] Respect for persons demands transparency, beneficence ensures technology serves human well-being, and justice requires risks and benefits to be fairly distributed.[17] Likewise, scholars argue that disclosure promotes accountability and institutional legitimacy by revealing the reasoning and sources underlying decisions made with AI.[18]

            The argument for a uniform standard also gains strength by analogy. Campaign finance law requires disclosures on political advertising to reveal its source.[19] Consumer protection statutes require labels on food, drugs, and financial products to prevent deception.[20] In each case, transparency sustains public trust. The same principle applies here: without disclosure, confidence in legal institutions erodes.

            Still, disclosure has detractors. Poorly tailored rules can sweep too broadly. For example, treating Grammarly the same as ChatGPT risks stigmatizing low-risk tools and trivializing disclosure, making compliance more performative than meaningful. Scholars such as Kaminski and Malgieri warn that regimes narrowly focused on ex post ‘explanations’ risk devolving into checkbox compliance rather than meaningful accountability; as they note, the “current focus on the right to explanation is far too narrow.”[21]

            Inconsistency adds further strain. Courts and legislatures impose divergent standards, creating uncertainty for attorneys and developers alike. California’s AI Training Data Transparency Act, for example, requires publication of training data summaries, a measure critics say jeopardizes proprietary information without enhancing consumer understanding.[22] Without nuance, disclosure rules chill innovation: lawyers and companies may avoid AI altogether to dodge compliance risks.

            Despite these challenges, the solution is not retreat but reform. Disclosure must become mandatory nationwide. Congress should adopt a uniform standard for legal filings and regulated services.[23] A national framework would displace the patchwork of state rules, eliminate compliance traps, and ensure consistent enforcement.[24] Dispute resolution principles emphasize uniform procedures that reduce inequities among litigants; AI disclosure demands the same harmonization.[25]

            That uniform standard must also be calibrated. Editing tools such as spell-check should not trigger disclosure.[26] Instead, disclosure should focus on high-risk uses: generating legal arguments, creating factual claims, or producing communications a reasonable person could mistake for human-authored work. This targeted approach avoids burdening attorneys and businesses with trivial certifications while still capturing uses that implicate accuracy, accountability, and trust.

            Mandatory disclosure, if enacted as a uniform standard, will not stifle responsible AI adoption.[27] Lawyers will remain free to use AI tools, but disclosure will ensure they remain accountable for results.[28] Developers will continue to innovate, but disclosure will reassure regulators and the public that AI is not unchecked.[29] Consumers will benefit from honesty that enables informed choices.[30] Properly designed, a uniform standard strengthens both innovation and integrity.

            The need for uniformity grows stronger as AI use accelerates. Courts, legislatures, and agencies already recognize the dangers of undisclosed AI.[31] Without disclosure, litigants risk fabricated citations, courts risk distorted evidence, and consumers risk deception.[32] With disclosure, all parties gain transparency and responsibility. The choice is not innovation versus integrity but fragmentation versus coherence. Only a uniform standard offers both innovation and integrity.

            Disclosure must function as a uniform standard: clear, consistent, and nuanced. A federal rule balancing transparency with practicality can achieve that goal. Anything less leaves the profession vulnerable to inconsistency and eroded public trust. Mandatory AI disclosures should not be optional or situational; they must be universal. Only then will disclosure fulfill its promise: not as a bureaucratic formality, but as a uniform standard worthy of the institutions it seeks to protect.

[1] See Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023).

[2] Judge Brantley Starr, Judge Specific Requirements, Mandatory Certification Regarding Generative Artificial Intelligence (May 30, 2023), https://www.txnd.uscourts.gov/sites/default/files/documents/CertReStarrJSR.doc.

[3] Juan Perla, Aubre Dean & Steven Thomas, Federal Courts and AI Standing Orders: Safety or Overkill?, Law360 (Jan. 16, 2024, 5:40 PM), https://www.law360.com/tax-authority/articles/1785579.

[4] Id.

[5] Model Rules of Pro. Conduct r. 3.3 (Am. Bar Ass’n 2020).

[6] James R. Coben, My Change of Mind on the Uniform Mediation Act, 23 Disp. Resol. Mag. 6, 8–10 (Winter 2017).

[7] N.D. Tex. Civ. R. 7.2(f).

[8] Mo. 20th Jud. Cir. R. 17.

[9] Wash. Clallam Dist. Ct. II LARLJ 49.

[10] Cal. Health & Safety Code Ann. § 1339.75(a) (West).

[11] Colo. Rev. Stat. Ann. § 6-1-1704 (West).

[12] Utah Code Ann. § 13-77-103 (West).

[13] Perla, Dean & Thomas, supra note 3.

[14] Id.

[15] Id.

[16] 45 C.F.R. § 46 (2023).

[17] Id.

[18] Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 UC Davis L. Rev. 399, 429–31 (2017).

[19] 52 U.S.C. § 30120(a).

[20] See 21 U.S.C. § 343 (explaining instances where food may be deemed ‘misbranded’).

[21] Margot E. Kaminski & Gianclaudio Malgieri, Algorithmic Impact Assessments Under the GDPR: Producing Multi-Layered Explanations, 11 Int’l Data Priv. L. 125, 134 (2021) (noting that the existing “right to explanation” framework is too narrow and calling for more expansive, multi-layered disclosure regimes).

[22] Cal. Civ. Code Ann. § 3111 (West).

[23] Perla, Dean & Thomas, supra note 3.

[24] Id.

[25] Carrie Menkel-Meadow et al., Dispute Resolution: Beyond the Adversarial Model 291–93 (4th ed. 2025).

[26] N.D. Cent. Code Ann. § 16.1-10-04.2 (West) (providing that “artificial intelligence” excludes tools explicitly programmed for grammar, spelling, or word-suggestion assistance, clarifying that basic editing software is not subject to disclosure).

[27] Perla, Dean & Thomas, supra note 3.

[28] Id.

[29] Id.

[30] Id.

[31] Id.

[32] Mata, 678 F. Supp. 3d at 448.
