
THE TECHNOLOGY BLIND SPOT
In July 2025, a federal judge in Alabama disqualified two attorneys and referred them to bar regulators. Their offense: submitting a brief containing AI-generated citations to cases that did not exist. The attorneys told the court they had relied on an AI tool to draft the filing. They never verified the output. The judge was unsparing: the attorneys had “abandoned their professional responsibilities.”
Earlier that same month, a pair of lawyers representing MyPillow in a Colorado defamation case faced sanctions for a filing riddled with 24 errors, including fabricated case citations and hallucinated judicial holdings. Each attorney received a $3,000 fine. The court noted they had treated AI output as a finished product rather than a starting point for legal research.
These cases are not outliers. Federal courts have now documented over 600 instances of AI-generated hallucinations in legal filings nationwide, with the pace accelerating to roughly two or three new cases every day in 2025. Judges have imposed fines, disqualified counsel, referred attorneys to disciplinary authorities, and in one remarkable California case, sanctioned opposing counsel for failing to detect and report fabricated citations submitted by the other side.
The reaction from parts of the bar has been predictable: ban it. Prohibit attorneys from using AI tools entirely. Treat generative AI the way a previous generation treated the internet: as something too dangerous for lawyers to touch. This impulse is understandable. It is also exactly wrong.
The Direct Answer
The attorneys sanctioned in Alabama, Colorado, California, and hundreds of other courtrooms did not fail because they used AI. They failed because they used AI without competence. Banning AI makes the same fundamental error as using it uncritically. Both approaches dodge the actual obligation: understanding the technology well enough to use it responsibly.
Nobody bans cars because drivers cause accidents. Society requires training and licensure. Nobody lets untrained pilots fly commercial aircraft because autopilot exists. Aviation requires demonstrated competence before anyone touches the controls. ABA Model Rule 1.1, Comment 8 demands the same principle for legal technology: attorneys must understand “the benefits and risks associated with relevant technology.” Not avoid it. Understand it.
The hallucination epidemic is real and accelerating. But it is an argument for competence, not prohibition. An attorney who submits unverified AI citations commits the same supervisory failure as one who never checks a paralegal’s research. The tool did not fail the client. The attorney did.
The Ethics Framework: What the Rules Actually Require
ABA Formal Opinion 512, issued in July 2024, provides the profession’s first comprehensive ethics guidance on generative AI. The opinion does not prohibit AI use. It requires competent AI use. The distinction matters.
Opinion 512 addresses six areas of professional responsibility as applied to generative AI. On competence, the opinion reaffirms that Model Rule 1.1 requires attorneys to “understand the capabilities, limitations, and risks” of AI tools before deploying them in client matters. On confidentiality, it warns that inputting client information into AI systems that retain or learn from user data may violate Model Rule 1.6(c)’s requirement of “reasonable efforts to prevent the inadvertent or unauthorized disclosure” of client information. On communication, it requires attorneys to discuss AI use with clients when it materially affects the representation. On candor, it reminds attorneys that obligations to tribunals under Rules 3.1 and 3.3 are not diminished because AI generated the content. On supervision, it extends Rules 5.1 and 5.3 to require that attorneys supervise AI output the same way they supervise work product from associates and paralegals. On fees, it states that attorneys “who bill clients on an hourly basis must bill for actual time spent working” and may not charge for hours AI eliminated.
Read that framework carefully. Every obligation points in the same direction: learn the technology, implement safeguards, supervise output, take responsibility. None points toward prohibition.
Forty states, the District of Columbia, and Puerto Rico have now adopted Model Rule 1.1’s technology competence requirement in some form. The trend line is clear. The profession expects attorneys to engage with technology, not retreat from it. As the prior posts in this series have documented, from email encryption gaps to phone call recording risks to password security failures, the technology blind spot grows most dangerous when attorneys assume avoidance equals safety.
The Hallucination Epidemic: A Supervision Problem in Disguise
The cascade of sanctions for AI hallucinations makes compelling headlines. It does not make a compelling case for prohibition. Examine what actually happened in the landmark cases.
In Mata v. Avianca (S.D.N.Y. 2023), the case that launched national attention, attorney Steven Schwartz used ChatGPT to research a personal injury filing and submitted six citations to cases that did not exist. When the court flagged the issue, Schwartz asked ChatGPT to confirm the cases were real. It confirmed they were. He never checked a legal database. The court imposed a $5,000 fine, not for using AI, but for failing to verify the output and for affirmatively misleading the court about the citations’ authenticity.
In Noland v. Land of the Free (Cal. 2025), the court discovered that 21 of 23 quotations in a filing were fabricated, complete with invented page numbers and fictional holdings. The fine: $10,000. But the court went further, sanctioning opposing counsel $2,000 for failing to identify and report the fabricated citations. The message: every attorney in the courtroom has an obligation to catch this.
In Buchanan v. Vuori (C.D. Cal., Dec. 2025), AI-generated errors delayed a settlement and prompted the court to refer the matter to the Standing Committee on Professional Conduct. The settlement delay alone caused measurable harm to the client, independent of any fine.
Every one of these cases shares a common thread: the attorney treated AI output as a finished product. No independent verification. No cross-referencing against legal databases. No critical review of whether cited cases existed, whether quoted language matched actual holdings, whether the legal reasoning held together. This is not an AI problem. This is a supervision problem. If a first-year associate submitted a brief with six fabricated citations, no managing partner would blame the associate’s law school. They would blame the supervising attorney who signed the filing without reading it.
The same standard applies to AI. Formal Opinion 512 makes this explicit: attorneys must “independently verify the accuracy and adequacy of any AI-generated output before relying upon or submitting it.” The tool is the tool. The lawyer is the lawyer. When the lawyer stops being the lawyer, sanctions follow.
The Intelligence Paradox: When Expertise Becomes the Blind Spot
Attorneys are, by training and selection, among the most intellectually capable professionals in any room. Law school rewards analytical reasoning, pattern recognition, and the ability to master complex material quickly. These skills create a dangerous assumption when applied to technology: I’m smart enough to figure this out without training.
This is the intelligence paradox. The same cognitive abilities that make attorneys exceptional at legal analysis make them overconfident in domains where their expertise does not transfer. An attorney who would never advise a client on a patent prosecution without understanding the underlying technology will open ChatGPT, paste in a client’s case facts, and submit the output to a federal court without understanding the first thing about how large language models generate text, why they hallucinate, or what guardrails exist to prevent fabrication.
The Dunning-Kruger effect describes the phenomenon precisely: people with limited knowledge in a domain overestimate their competence in that domain. Attorneys typing prompts into a chat window feel competent because the interface is simple. The simplicity is deceptive. Behind that chat window sits a probabilistic text generation system with hundreds of billions of parameters, trained to predict the most likely next token in a sequence. It does not “know” law. It does not “research” cases. It generates statistically probable text that looks like legal research. The distinction between looking like legal research and being legal research is the distinction between competent representation and a $10,000 sanction.
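To make that abstraction concrete, here is a deliberately oversimplified sketch of what “predicting the next most likely token” means. Everything in it is invented for illustration: the function name, the candidate strings, and the probabilities do not come from any real model, and production systems are vastly more sophisticated. What the sketch preserves is the key point: the selection step optimizes for statistical plausibility, and nothing in it checks whether a citation it produces actually exists.

```python
import random

def sample_next_token(candidates: dict[str, float]) -> str:
    """Pick the next chunk of text by sampling from a probability distribution."""
    tokens = list(candidates)
    weights = list(candidates.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after the prompt "See Smith v."
# Every candidate looks like a citation; none comes tagged as real or fabricated.
candidates = {
    "Jones, 574 U.S. 237 (2015)": 0.41,
    "Allied Freight Corp., 812 F.3d 119 (9th Cir. 2016)": 0.33,
    "Doe, No. 23-cv-1047 (S.D.N.Y. 2024)": 0.26,
}

print("See Smith v. " + sample_next_token(candidates))
```

Fluency and confidence fall out of this process automatically; accuracy does not. That is why verification has to be supplied by the lawyer, not expected from the tool.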
Ego compounds the problem. Attorneys who have spent decades developing expertise resist the notion that they need training on a tool that a teenager can operate. The senior partner who refuses to attend a CLE on AI because “I already know how to use it” is the same partner who approved the filing with fabricated citations. Knowing how to type a prompt is not knowing how to use AI. Knowing how to use AI means understanding what the technology can do, what it cannot do, where it fails predictably, and what verification protocols prevent those failures from reaching a client or a court.
Consider the analogy to financial markets. Intelligent, accomplished professionals lose money in markets every day because they confuse intelligence in their own domain with competence in investing. Warren Buffett has observed that Wall Street is the only place where people who arrive in Rolls Royces take advice from people who arrive by subway. The legal profession’s relationship with AI exhibits the same dynamic: attorneys with decades of legal expertise taking output from a system they do not understand and staking their professional reputation on its accuracy.
The fix is not complicated, but it requires something attorneys rarely volunteer: admitting what they do not know. AI competence starts with acknowledging that a J.D. and thirty years of trial experience confer zero expertise in machine learning, natural language processing, or the architectural limitations of transformer models. That acknowledgment is not weakness. It is the first step toward the technological competence that Rule 1.1 requires.
The Beta Problem: You Are Using an Unfinished Product
Here is a fact that should recalibrate every attorney’s confidence in generative AI: Microsoft Word has existed for over 40 years. It has had four decades of development, billions of users providing feedback, and the resources of one of the world’s largest technology companies behind it. It still suggests incorrect word replacements. Routinely. Open any Word document, right-click a flagged word, and watch the software confidently recommend a replacement that changes the meaning of your sentence. A tool with 40 years of refinement cannot reliably handle basic English vocabulary.
Now do the math. If a mature, narrowly focused product cannot master everyday language after four decades, what should you expect from a generative AI system that has existed for roughly three years and attempts to handle every domain of human knowledge simultaneously? Legal language operates at a level of precision that makes ordinary English look forgiving. In a contract, “shall” imposes an obligation while “may” grants discretion. “Material” versus “substantial” can determine whether a breach triggers termination. A “notwithstanding” clause overrides every conflicting provision that precedes it. A single misplaced comma in a list of conditions has generated litigation worth millions of dollars.
Generative AI does not understand these distinctions. It predicts them statistically. When the training data contains enough examples of correct legal usage, the prediction is often right. When the training data is sparse, conflicting, or absent, the prediction fails. And it fails silently, with the same confidence it displays when it is correct. A large language model does not flag uncertainty the way a cautious associate might write “need to verify” in a margin note. It produces fabricated case citations in the same authoritative tone it uses to cite real ones.
The technology industry has a term for this stage of product development: beta. Beta software is functional enough to use but incomplete enough that failures should be expected and planned for. Every generative AI product on the market today, regardless of the marketing language surrounding it, operates in what any honest engineer would describe as an extended beta. The models improve with each iteration. The hallucination rates decline. The accuracy increases. But no major AI developer claims their product is ready for unsupervised deployment in high-stakes professional environments. Read the fine print on any AI platform’s terms of service: you will find disclaimers about accuracy, recommendations to verify output, and limitations of liability that should tell every attorney exactly how much the developer trusts its own product.
Attorneys would never submit a contract drafted by a first-year associate without review. They would never file a brief prepared by a paralegal without verification. They would never rely on a Westlaw annotation without reading the underlying case. Yet in more than 600 documented cases, attorneys have submitted AI output to courts without performing the same basic quality control they apply to every other source of work product. The tool is not ready to operate unsupervised. The question is whether the attorney is competent enough to recognize that fact.
What Legal Actually Needs: AI Built for the Profession
The hallucination epidemic and the confidentiality exposure share a root cause: attorneys are using general-purpose AI tools for a specialized professional application. ChatGPT, Claude, Gemini, and their competitors were built to handle every topic from cooking recipes to quantum physics. Legal practice was not their design target. Legal accuracy is not their optimization metric. Attorney-client privilege is not baked into their architecture.
What the legal profession needs, and what the market has only begun to deliver, is AI developed with attorneys, for attorneys, that shares the liability when it gets the answer wrong.
That last element matters most. When a general-purpose AI tool generates a fabricated citation, the AI company bears no professional consequence. The attorney bears all of it: the sanction, the malpractice claim, the bar complaint, the reputational damage. This liability asymmetry creates perverse incentives. The AI developer optimizes for user engagement and subscription revenue. The attorney needs accuracy, privilege protection, and verifiable outputs. These objectives do not align, and no amount of prompt engineering fixes the misalignment.
A legal-specific AI platform should meet a different standard. First, it should be trained on and optimized for legal corpora: case law, statutes, regulations, secondary sources, and practice-specific materials. General-purpose models trained primarily on internet text inevitably reflect the internet’s casual relationship with legal precision. Second, it should enforce privilege-native architecture: data handling designed from the ground up to protect attorney-client confidentiality, with no training on client inputs, no data retention beyond the session, and processing that never routes through shared infrastructure accessible to other users. Third, and most critically, it should share the liability. A legal AI platform that stands behind its outputs, that accepts professional responsibility for accuracy rather than disclaiming it in paragraph 47 of its terms of service, changes the entire calculus. When the vendor has skin in the game, the engineering priorities shift from “generate plausible text” to “generate verifiable, accurate legal analysis.”
Humans programmed these systems. Every capability and every limitation reflects design choices made by engineers who, in most cases, have never practiced law, never managed privilege reviews, never faced a bar complaint. The systems reflect their creators’ priorities. General-purpose AI prioritizes breadth and fluency. Legal practice requires depth and precision. Until the tools reflect the profession’s requirements rather than the technology industry’s, attorneys who use them carry the full burden of bridging that gap through their own competence, verification, and professional judgment.
The Confidentiality Problem: Competence Before the First Prompt
Hallucinations capture headlines. The confidentiality exposure may prove more consequential.
When an attorney pastes client information into a consumer AI tool without understanding that tool’s data retention and training policies, the attorney has violated Model Rule 1.6(c) before the AI generates a single word. The confidentiality breach occurs at input, not output. This is not a technology failure. This is a competence failure.
Formal Opinion 512 addresses this directly, cautioning that attorneys must evaluate whether AI tools “retain or learn from the information inputted” and must obtain informed consent before entering confidential client data into systems that may use it for training. The opinion specifically identifies “self-learning” AI tools as requiring heightened attention because client data entered today may influence outputs generated for other users tomorrow.
Consider the practical implications. An employment attorney drafts a discrimination complaint using a consumer AI chatbot, pasting in client interview notes containing medical information, salary data, and the names of witnesses. The AI provider’s terms of service permit using input data to improve the model. That client’s sensitive information now resides on servers the attorney does not control, subject to data practices the attorney did not evaluate, potentially influencing outputs for other users. The attorney has created exactly the kind of unauthorized disclosure that Rule 1.6(c) prohibits.
The solution is not to avoid AI. The solution is to understand the data policies of the tools you use before you use them. Enterprise-grade AI platforms with appropriate data handling agreements differ fundamentally from consumer chatbots. Attorneys who understand this distinction can use AI effectively while protecting client confidences. Attorneys who do not understand it should not use AI tools for client work until they do. That is not a ban. That is competence.
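For firms that want to make that evaluation systematic, the sketch below shows one way to record the answers before any client data goes into a tool. Every field name, threshold, and example value is an assumption invented for illustration, not a standard prescribed by Opinion 512 or any bar authority; the substance still comes from actually reading each vendor’s terms of service and data processing agreement.

```python
from dataclasses import dataclass

@dataclass
class AIToolDataPolicy:
    """One firm-maintained record per AI tool (illustrative fields only)."""
    name: str
    trains_on_user_inputs: bool        # does the vendor use prompts to train its models?
    retains_data_after_session: bool   # is input stored beyond the active session?
    has_data_processing_agreement: bool
    reviewed_by: str                   # who at the firm reviewed the terms, and when

def cleared_for_client_data(tool: AIToolDataPolicy) -> bool:
    """Conservative screen: training on inputs, post-session retention, or a
    missing data processing agreement means no client data until that changes."""
    return (
        not tool.trains_on_user_inputs
        and not tool.retains_data_after_session
        and tool.has_data_processing_agreement
    )

# Hypothetical example entry, not a description of any actual product.
consumer_chatbot = AIToolDataPolicy(
    name="Generic consumer chatbot",
    trains_on_user_inputs=True,
    retains_data_after_session=True,
    has_data_processing_agreement=False,
    reviewed_by="not yet reviewed",
)

print(consumer_chatbot.name, "cleared for client data:", cleared_for_client_data(consumer_chatbot))
```

The value of a record like this is not the code; it is that the firm answers the Rule 1.6(c) questions once, in writing, before the first prompt is ever typed.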
The Case for Prohibition, and Why It Fails
The prohibition argument deserves fair engagement. Its strongest form runs as follows: generative AI tools hallucinate at rates that create unacceptable risk to clients and the judicial system. The profession cannot ensure adequate training and supervision at scale. The consequences of AI failures fall disproportionately on clients, who did not choose the tool and cannot evaluate its reliability. A prophylactic ban eliminates these risks while sacrificing only efficiency gains that primarily benefit attorneys’ bottom lines, not client outcomes.
This argument captures legitimate concerns. Hallucination rates remain material. Training is uneven. Clients bear disproportionate risk. Any honest assessment must acknowledge these realities.
The argument fails on three grounds.
First, prohibition does not eliminate the risk; it drives it underground. Attorneys who want to use AI will use it regardless of firm policy, the same way associates have always conducted preliminary research through channels that never appear in billing records. A prohibition removes the institutional framework for training, oversight, and quality control. It replaces visible, supervised AI use with invisible, unsupervised AI use. The outcomes get worse, not better.
Second, prohibition creates its own competence gap. An attorney who has never used AI tools cannot evaluate opposing counsel’s AI-generated work product. In the Noland case, the court sanctioned opposing counsel for failing to identify fabricated citations. Competence in an AI-saturated environment requires understanding how AI tools work, what their failure modes look like, and how to detect AI-generated errors. Prohibition produces attorneys who lack exactly this knowledge.
Third, prohibition conflicts with the trajectory of ethical obligations. Forty-two jurisdictions have adopted technology competence requirements. The ABA’s highest ethics authority issued a formal opinion providing a framework for responsible AI use, not an opinion prohibiting it. State ethics opinions, including Florida’s Ethics Opinion 24-1, have followed the same path: regulate use, do not ban it. An attorney who refuses to learn AI technology in 2025 occupies an increasingly difficult position under Rule 1.1’s requirement to stay abreast of technological changes affecting practice.
Practice-Specific Implications
Litigation: The hallucination cases concentrate overwhelmingly in litigation practice, where attorneys face citation requirements, candor obligations, and judicial scrutiny of every filing. Litigators must implement verification protocols for any AI-assisted research. At minimum, this means independently confirming every case citation in a legal database, verifying quoted language against actual opinions, and reviewing AI-generated legal analysis for logical coherence. The standard is not perfection. The standard is the same diligence you would apply to work product from any other source.
Corporate and M&A: AI tools that analyze contracts, conduct due diligence, and summarize transaction documents create efficiency gains that clients increasingly expect. The confidentiality exposure is acute: deal terms, material nonpublic information, and strategic objectives entered into AI tools become potential disclosure vectors. Enterprise AI platforms with appropriate data processing agreements, air-gapped environments, and no-training commitments represent the minimum standard for transactional AI use.
Criminal Defense: Defense attorneys face perhaps the highest stakes for AI errors. A fabricated citation in a habeas brief could cost a client their freedom. A confidentiality breach could expose defense strategy to prosecutors. Criminal defense practitioners should implement the most rigorous verification and data protection protocols, recognizing that the consequences of failure are measured in years of human liberty, not dollars.
Employment Law: As noted in the email privacy series, employment practitioners already navigate the minefield of client communications on employer-controlled systems. AI adds another dimension: an attorney using consumer AI to draft a discrimination complaint may be creating discoverable records of client information on third-party servers. The same communication security principles apply: sensitive client data requires secure channels, whether those channels carry email or AI prompts.
What to Do Monday Morning
First, read ABA Formal Opinion 512. The full opinion runs 22 pages. It is the foundational document for AI ethics in legal practice. Every attorney using AI, or supervising attorneys who use AI, should read it in its entirety. Reading a summary is not sufficient.
Second, audit your AI tools’ data practices. Review the terms of service and data processing agreements for every AI platform your firm uses. Determine whether client data is retained, used for training, or accessible to the provider’s employees. If the answers are unsatisfactory, switch to enterprise platforms with appropriate safeguards or stop using the tool for client work.
Third, implement a verification protocol. Every AI-generated work product that reaches a client or a tribunal must pass through independent human verification. For legal research, this means confirming citations in Westlaw or Lexis. For contract analysis, this means reviewing flagged provisions against the actual document. For any substantive output, this means applying the same professional judgment you would apply to a junior associate’s draft. A brief illustrative sketch of a citation-verification checklist appears after the seventh step below.
Fourth, document your AI policies. Create written protocols for AI use that specify approved tools, prohibited uses, verification requirements, and supervisory responsibilities. Documentation demonstrates reasonable efforts under Rule 1.6(c) and provides a defensible record if your practices are questioned.
Fifth, train everyone. Partners, associates, paralegals, and administrative staff. Model Rule 5.3 extends supervisory responsibility to nonlawyer assistants. A firm AI policy means nothing if the staff member who drafts the first version of a discovery response uses a consumer chatbot because no one told them not to.
Sixth, talk to your clients. Formal Opinion 512 requires attorneys to communicate with clients about AI use when it materially affects the representation. This conversation should happen at intake and be documented in the engagement letter. Clients have a right to know how their information is being processed and what safeguards are in place.
Seventh, evaluate legal-specific AI platforms. General-purpose chatbots were not designed for legal work and their terms of service reflect that reality. Platforms built for the legal profession, with privilege-native architecture, legal-specific training data, and meaningful accountability for accuracy, reduce the competence burden on individual attorneys. The right tool does not eliminate the obligation to verify, but it changes the risk profile dramatically.
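To operationalize the verification step from the list above, a firm can at least automate assembly of the checklist a human reviewer must work through. The sketch below is illustrative only: the regular expression, sample text, and function name are assumptions, the pattern will miss citation formats it was not written for, and it confirms nothing about whether a cited case exists. Verification still happens in Westlaw or Lexis, by a lawyer.

```python
import re

# Rough approximation of a few common reporter formats; it will both miss and
# over-match, and it says nothing about whether a case is real.
REPORTER_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                                      # volume
    r"(?:U\.S\.|S\. ?Ct\.|F\. ?Supp\. ?(?:2d|3d)?|F\.(?:2d|3d|4th)|P\.(?:2d|3d)|N\.E\.(?:2d|3d))"
    r"\s+\d{1,4}\b"                                                      # first page
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return unique citation-like strings from a draft, in order of appearance."""
    seen: list[str] = []
    for match in REPORTER_PATTERN.finditer(draft_text):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

# Hypothetical draft excerpt used only to show the output format.
sample = (
    "Plaintiff relies on 574 U.S. 237 and 812 F.3d 119, "
    "as well as 455 F. Supp. 2d 330."
)
for cite in citation_checklist(sample):
    print(f"[ ] verify in Westlaw/Lexis: {cite}")
```

A script like this does not discharge the duty to verify; it only makes the checklist harder to skip. Each flagged citation still has to be pulled, read, and compared against the language quoted in the draft.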
The License, Not the Ban
Most attorneys using AI today resemble a teenager behind the wheel of a race car. They have access to extraordinary power and almost no training in how to control it. The cars are getting faster every quarter. The tracks are getting more complex. The consequences of a crash fall on the client in the passenger seat.
The answer is not banning race cars. It never has been. The answer is requiring a license before turning the key.
Formal Opinion 512 provides the framework. Forty-two jurisdictions have adopted the technology competence requirement. The sanctions docket grows by two or three cases every day. The profession has all the guidance it needs. What it lacks is the discipline to implement it, and the humility to admit that a law degree does not confer expertise in machine learning.
Microsoft Word has had 40 years to master English vocabulary and still suggests the wrong word. Generative AI has had three years to master the entire corpus of human knowledge. The attorneys who understand that gap, who treat AI as a powerful but unfinished tool requiring rigorous supervision, will thrive. The attorneys who confuse a simple interface with a simple technology will join the growing list of sanctions orders that law students study as cautionary tales.
The attorneys in Alabama, Colorado, and California did not need less AI. They needed more competence. The distinction between those two things will define the next decade of legal practice.
This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.
About the Author
JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.
Connect: LinkedIn | X | Bluesky
References
ABA Model Rules of Professional Conduct, Rule 1.1, Comment 8 (Technology Competence, 2012)
ABA Model Rules of Professional Conduct, Rule 1.6(c) (Confidentiality, Reasonable Efforts)
ABA Model Rules of Professional Conduct, Rules 3.1, 3.3 (Candor to the Tribunal)
ABA Model Rules of Professional Conduct, Rules 5.1, 5.3 (Supervisory Responsibilities)
ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512 (July 29, 2024): Generative Artificial Intelligence Tools
ABA Formal Opinion 477R (May 2017): Securing Communication of Protected Client Information
Florida Bar Ethics Opinion 24-1 (2024): Use of Generative Artificial Intelligence in the Practice of Law
Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. June 22, 2023) ($5,000 sanction for AI-fabricated citations)
Johnson v. Dunn, No. 2:23-cv-00628 (N.D. Ala. July 2025) (attorney disqualification for AI hallucinations)
In re MyPillow, Inc. Defamation Litigation (D. Colo. July 2025) ($3,000 sanctions per attorney, 24+ AI errors)
Noland v. Land of the Free, Inc. (Cal. Super. Ct. 2025) ($10,000 sanction; opposing counsel also sanctioned $2,000)
Buchanan v. Vuori, Inc. (C.D. Cal. Dec. 2025) (referral to Standing Committee on Professional Conduct)
Kruger, J. & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
LawNext, Technology Competence Adoption Tracker (40+ jurisdictions as of 2025)
Thomson Reuters, AI Hallucination Cases Tracker (600+ documented instances, 2023-2025)