
THE TECHNOLOGY BLIND SPOT
The room got quiet. Angeline Chen, General Counsel at Progress Federal Solutions, had just finished walking a conference audience through the regulatory implications of AI adoption in federal contracting. During the Q&A, an attendee asked what she would tell attorneys who were still on the sidelines. Chen’s answer landed like a verdict: “If you can imagine being a lawyer today without knowing how to use email, that’s what it’s going to be like not understanding AI.”
A few nervous laughs. Then silence. Because everyone in that room remembered a senior partner, a respected mentor, a brilliant legal mind who had refused to adopt email until the world moved on without them. The analogy hit close to home precisely because the profession had lived through it once already.
According to the 2024 ABA Legal Technology Survey, 79% of attorneys now report using AI tools in their practice. ChatGPT reached 100 million users within two months of launch, making it the fastest-growing consumer application in history at the time. Attorneys, researchers, and knowledge workers drove a disproportionate share of that adoption. The curve looks less like a gentle slope and more like a cliff: from experimental curiosity to everyday reliance in under two years.
The ethical rules have kept pace. The profession has not.
The Direct Answer
AI competence is no longer optional. It is an ethical obligation under the Model Rules of Professional Conduct, and attorneys who fail to develop baseline AI literacy face the same professional exposure as those who refused to learn email, electronic filing, or basic cybersecurity.
This is Part 1 of a two-part series. This installment makes the case for why AI literacy belongs in the competence conversation. Part 2 examines the ten most dangerous problems attorneys face when using AI tools and provides a practical field guide for avoiding them.
The Ethics Framework
The competence obligation did not appear overnight. It built through a decade of incremental guidance that most practitioners ignored until the consequences arrived.
The foundation is ABA Model Rule 1.1, Comment 8, which requires attorneys to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Forty states have adopted this language or equivalent provisions. The duty sits inside the competence rule, the most fundamental obligation in professional responsibility. Not aspirational. Not advisory. Mandatory.
Then came the AI-specific guidance. In July 2024, the ABA issued Formal Opinion 512, addressing generative AI directly for the first time. The opinion confirmed that attorneys “who bill clients on an hourly basis must bill for actual time spent working” and must “account for efficiencies when charging clients flat fees.” More critically, it established a verification mandate: AI is a tool, not a co-counsel. The human remains responsible for every output.
The Florida Bar sharpened the point further with Ethics Opinion 24-1: “A lawyer may not ethically engage in any billing practices that duplicate charges or that falsely inflate the lawyer’s billable hours. Though generative AI programs may make a lawyer’s work more efficient, this increase in efficiency must not result in falsely inflated claims of time.”
The regulatory trajectory is accelerating. California, New York, and Florida have issued guidance on AI supervision duties. More than 25 federal judges have issued standing orders requiring AI disclosure in court filings. The Illinois Supreme Court adopted a formal AI policy effective January 1, 2025. Each new opinion, each new standing order, tightens the standard. Attorneys reading these developments five years from now will wonder how anyone claimed ignorance in 2025.
The Sanctions Are Real
In June 2023, New York attorneys Steven Schwartz and Peter LoDuca filed a brief in Mata v. Avianca containing six fabricated case citations generated by ChatGPT. Not one of the cited cases existed. Federal Judge P. Kevin Castel imposed a $5,000 sanction on both attorneys and their firm, noting that the fabricated cases “caused opposing counsel and the Court to spend time on a wild goose chase.” The sanctions order identified both attorneys by name and circulated across the legal profession within hours. Within days, it appeared in CLE materials nationwide.
That case opened the floodgates. French researcher Damien Charlotin maintains a database of judicial decisions involving AI-generated hallucinations. His count reached 684 cases by December 2025, up from 380 just two months earlier. The acceleration tells the story: courts encounter AI errors at an increasing rate even as awareness of the problem grows.
The consequences have escalated well beyond fines. A Denver attorney who texted his paralegal admitting he had not checked ChatGPT’s work product accepted a 90-day suspension. In South Florida, a judge ordered an attorney to attach a copy of the sanctions order to every complaint filed for the next two years. In Maryland, the Appellate Court held that “the failure to use AI responsibly in legal research raises ethical issues and can result in sanctions when used improperly” and referred counsel to the Attorney Grievance Commission.
The pattern is consistent and the lesson is simple: courts treat AI errors identically to any other failure of professional competence. The tool that generated the error does not reduce the attorney’s responsibility. It amplifies it.
What the Data Actually Shows
The promise of AI in legal practice collides with uncomfortable performance data.
Stanford RegLab tested the major legal AI platforms in 2024 with over 200 legal queries. Lexis+ AI produced incorrect information more than 17% of the time. Westlaw’s AI-Assisted Research hallucinated at 33%. Thomson Reuters’ Ask Practical Law AI provided accurate responses only 18% of the time. General-purpose tools fared worse: GPT-4 hallucinated between 58% and 88% of the time on legal queries. These are not edge cases from adversarial testing. These are baseline performance numbers from the best available platforms.
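A back-of-the-envelope calculation (my illustration, not a figure from the Stanford study) shows why even the best of those rates is alarming at the scale of a filing. If each citation-generating query fails independently about 17% of the time, the probability that a brief built on six such queries contains at least one error is

    P(at least one error) = 1 − (1 − 0.17)^6 = 1 − 0.83^6 ≈ 0.67

Roughly two chances in three, and that assumes the lowest measured error rate and independent failures. The arithmetic goes a long way toward explaining why multi-citation briefs keep landing in Charlotin’s database.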
An NBER working paper published in May 2025 studied AI chatbot adoption across 7,000 workplaces in Denmark and found “no significant impact on earnings or recorded hours in any occupation,” including legal professionals. The average time savings across all professions came to roughly 3%. Many users reinvested that time correcting AI-generated errors, producing a net productivity gain near zero.
These numbers do not mean AI is useless. They mean the current generation of tools requires substantial human oversight to deliver value. The attorney who treats AI as a research assistant requiring supervision will outperform the one who treats it as a finished-product generator. That skill gap will widen as the tools mature. The question is which side of the gap you occupy.
The Counter Argument
The skeptic’s position carries genuine weight. AI tools remain unreliable for legal work: hallucination rates above 17% on the best legal-specific platforms, inconsistent outputs from identical prompts, context window limitations that cause tools to lose track of case details across long documents, and confidentiality risks when uploading client information to third-party platforms. An attorney who avoids AI until these tools mature may be exercising sound judgment, not technological ignorance.
This objection survives scrutiny under Comment 8 itself. The duty requires awareness of both benefits and risks. An attorney who understands why ChatGPT hallucinates, who can explain the privilege implications of uploading client data to a public AI platform, who knows the difference between a context window and a training data cutoff, and who has made a deliberate decision to limit AI use based on current capabilities has satisfied the competence obligation. Blanket adoption is not the standard. Informed professional judgment is.
The attorney who fails the competence test is not the one who declines to use AI. It is the one who cannot explain what these tools do, how they fail, or why the distinction matters. Ignorance, not caution, is the violation.
Practice-Specific Implications
Litigation: Every citation AI generates requires independent verification through Shepard’s or KeyCite. AI does not check whether cases remain good law, and it fabricates holdings from real cases as readily as it invents fictional ones. The verification burden falls entirely on the attorney who signs the filing; a minimal sketch of what that verification workflow can look like follows these practice notes. Nick Tiger, Associate General Counsel at Pearl.com, captured the client perspective: “When I see cases of outside counsel phoning it in, it’s very concerning. What else is being phoned in using AI?”
Corporate and M&A: AI tools can accelerate due diligence document review, but context window limitations cause them to lose track of deal terms across long document sets. Harvey AI’s prompt limit drops from 100,000 to 4,000 characters upon document upload, a 96% reduction that forces attorneys to break complex queries into segments that may miss cross-document connections. Material nonpublic information uploaded to a public AI platform creates securities law exposure on top of the confidentiality breach.
Criminal Defense: Uploading client communications to a public AI platform like ChatGPT may constitute third-party disclosure, potentially destroying privilege. Sam Altman acknowledged in July 2025 that OpenAI has not “figured out” how to handle legal privilege and confidentiality within ChatGPT. Enterprise AI agreements with no-training clauses offer stronger protection, but attorneys must verify the specific terms of each platform before trusting them with client information.
Family Law and Estate Planning: AI-generated documents require line-by-line review for jurisdictional accuracy. AI models train on legal text from every state simultaneously and frequently muddy distinctions between state-specific rules, generating arguments appropriate for the wrong jurisdiction. An estate plan drafted with AI assistance that applies another state’s formalities creates liability that surfaces years later, when the testator can no longer clarify intent.
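Returning to the litigation point above: here is a minimal sketch, in Python, of what a citation-verification checklist could look like. Everything in it is illustrative. The regex is deliberately simplistic and will miss many Bluebook formats, and the output is a prompt for human review, not a substitute for running each case through Shepard’s or KeyCite.

    import re

    # Rough pattern for "Party v. Party" case names with an optional
    # reporter citation (e.g., "678 F. Supp. 3d 443"). Deliberately
    # simplistic: real citation grammar is far more varied than this.
    CITATION = re.compile(
        r"([A-Z][A-Za-z.'&-]*(?:\s[A-Z][A-Za-z.'&-]*)*)"  # first party
        r"\s+v\.\s+"
        r"([A-Z][A-Za-z.'&-]*(?:\s[A-Z][A-Za-z.'&-]*)*)"  # second party
        r"(?:,\s*(\d+\s+[A-Za-z0-9.\s]+\d+))?"            # optional reporter cite
    )

    def verification_checklist(draft: str) -> list[str]:
        """List every apparent citation in an AI-generated draft
        for manual review in Shepard's or KeyCite."""
        items = []
        for first, second, cite in CITATION.findall(draft):
            label = f"{first} v. {second}" + (f", {cite}" if cite else "")
            items.append(f"[ ] Confirm existence and current validity: {label}")
        return items

    if __name__ == "__main__":
        sample = (
            "Plaintiff relies on Mata v. Avianca, 678 F. Supp. 3d 443, "
            "and the reasoning of Smith v. Jones."
        )
        print("\n".join(verification_checklist(sample)))

The point is not automation. A script like this guarantees only that no citation escapes the checklist; a human still has to clear every box.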
Practical Steps
First, learn what AI can and cannot do. You do not need to become a data scientist. You need to understand hallucination risk, context window limits, training data cutoffs, and the confidentiality implications of uploading client information to different platforms. (A short illustration of the training-cutoff problem follows these steps.) Part 2 of this series provides the specific field guide.
Second, develop a firm AI use policy. Only 10% of firms that use AI have adopted formal policies governing its use, according to the 2024 ABA survey. That number should alarm every managing partner. Your policy should address which tools attorneys may use, what client information may be uploaded, verification requirements for AI-generated work product, and disclosure obligations to clients and courts.
Third, obtain client consent before using AI on their matters. ABA Formal Opinion 512 emphasizes that attorneys should inform clients about AI use, particularly when it involves processing client information through third-party platforms. Include AI use provisions in engagement letters. The conversation may feel awkward now. It will feel far more awkward during a malpractice deposition.
Fourth, document your technology decisions. Whether you adopt AI tools or decline to use them, the key is demonstrating informed professional judgment. If your security assessment concludes that current AI tools pose unacceptable risks for certain matter types, document that analysis. The documentation itself satisfies much of the competence obligation. An undocumented default to whatever is convenient satisfies none of it.
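To make the training-cutoff point from the first step concrete, here is a minimal sketch. The cutoff date is assumed for this illustration; substitute the documented cutoff of whatever model you actually use.

    from datetime import date

    # Hypothetical training cutoff, assumed for this illustration.
    # Check the vendor's documentation for the real cutoff of any
    # model you actually rely on.
    MODEL_CUTOFF = date(2023, 4, 30)

    def flag_post_cutoff(authorities: dict[str, date]) -> list[str]:
        """Flag authorities decided after the model's training cutoff:
        the model cannot have seen them and can only guess."""
        return [
            f"{name} (decided {decided:%B %Y}) post-dates the model's "
            f"training data; it may be invented or misstated"
            for name, decided in authorities.items()
            if decided > MODEL_CUTOFF
        ]

    print(flag_post_cutoff({"Mata v. Avianca": date(2023, 6, 22)}))

A model cannot know an authority decided after its cutoff. If it cites one anyway, that citation came from somewhere other than knowledge.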
The Competence Conversation Has Moved
Chen’s email analogy resonates because the profession lived through that exact transition. Attorneys who refused to adopt email in 2005 did not face immediate sanctions. They faced something quieter and more consequential: a growing inability to serve clients effectively, communicate with courts efficiently, or maintain competitive practices. The ethical obligation formalized what the market had already decided.
AI is following the same trajectory, compressed into a fraction of the time. The competence question is no longer whether to use AI. It is whether you understand it well enough to make informed decisions about when to use it, when to avoid it, and how to verify its output when you do.
Part 2 of this series examines the ten most dangerous problems attorneys encounter with AI tools. The competence obligation requires understanding both the promise and the peril.
Your clients expect you to understand the tools reshaping your profession. The ethical rules now require it. The courts will enforce it. The only remaining question is whether you develop that understanding on your own terms or on a disciplinary committee’s timeline.
This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.
About the Author
JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.
Connect: LinkedIn | X | Bluesky
References
ABA Model Rules of Professional Conduct, Rule 1.1, Comment 8 (Technology Competence)
ABA Model Rules of Professional Conduct, Rule 1.6(c) (Reasonable Efforts)
ABA Formal Opinion 512 (July 2024): Generative Artificial Intelligence Tools
ABA 2024 Legal Technology Survey Report
Florida Bar Ethics Opinion 24-1: Use of Generative Artificial Intelligence in the Practice of Law (2024)
Illinois Supreme Court Policy on Use of Artificial Intelligence (effective January 1, 2025)
Stanford RegLab / HAI, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries” (2024)
Humlum, A. & Vestergaard, E., “Large Language Models, Small Labor Market Effects,” NBER Working Paper No. 33777 (May 2025)
Charlotin, D., AI Hallucinations in Judicial Decisions Database (December 2025): 684 documented cases
Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023): $5,000 sanctions for fabricated AI citations
Chukwuemeka Mezu v. Kristen Mezu, Appellate Court of Maryland (2025): AI misuse referral to Attorney Grievance Commission
Justia/Verdict, “AI’s Limitations in the Practice of Law” (August 2025): Context window analysis
Chen, Angeline, General Counsel, Progress Federal Solutions (2025 remarks on AI competence)
Tiger, Nick, Associate General Counsel, Pearl.com (2025 remarks on AI verification)
Altman, Sam (July 2025): Remarks on AI privilege and confidentiality limitations
Prior Blog: “Your Password Is the Weakest Link in Your Security Chain” (Morris Legal Technology Blog)
Prior Blog: “Why Hackers Target Law Firms: Where All the Secrets Are Buried” (Morris Legal Technology Blog)
Prior Blog: “The FBI Says Stop Texting. Here’s the Privilege Problem Nobody’s Discussing.” (Morris Legal Technology Blog)
Prior Blog: “The Email Privacy Illusion” Parts 1-3 (Morris Legal Technology Blog)