
Technology Blind Spot Series
On January 15, 2026, Sen. John Curtis pressed four expert witnesses at the Senate Commerce Committee hearing on youth and technology with a rapid-fire sequence of questions. Are social media algorithms designed to be addictive? Do companies track user behavior to adjust those algorithms? Do current business models prioritize profit over wellbeing? Each witness answered yes. Curtis turned to the committee: “I’ve actually compared this to the tobacco hearings, where executives were brought up and they had data, they had research that it was harmful, and tried to tell us it was.”
Three weeks later, on February 10, the Harvard Business Review published the findings of an eight-month UC Berkeley study tracking how generative AI changed work habits at a 200-person technology company. The researchers expected to document time savings. They found the opposite. Employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day. AI did not reduce work. It intensified it.
These two events appear unrelated. They are not. The pattern Congress documented over eight years of social media hearings is now replicating in legal technology adoption: technology promising efficiency delivers intensification, regulatory response trails the harm by half a decade, and the companies building the tools bear no liability for the consequences. The legal profession is watching the same movie, and most attorneys still believe the ending will be different.
The Direct Answer
AI will not give attorneys their hours back. It will convert those hours into more iterations, thicker documents, and higher client expectations, while creating an entirely new category of supervisory labor that no one is budgeting for. The profession that fails to recognize this pattern will repeat the social media regulatory playbook: a decade of hearings, a growing docket of harms, and structural reform that arrives too late to prevent the damage.
The NBER working paper studying 7,000 workplaces found that AI chatbots produced no statistically significant impact on hours or wages in the legal profession. Average time savings: roughly 3%. Many users spent the saved time correcting errors, netting close to zero productivity gain. The HBR study documented the mechanism: AI accelerated certain tasks, which raised expectations for speed, which increased AI reliance, which widened the scope of what workers attempted, which expanded total workload. The researchers called it a self-reinforcing cycle. Economists have a more precise name: the Jevons Paradox.
The Paradox That Explains Everything
In 1865, William Stanley Jevons observed that James Watt’s improvements to the steam engine did not reduce England’s coal consumption. They increased it. More efficient engines made coal-powered production cheaper, which made more production economically viable, which consumed more coal than the less efficient engines ever had. The resource that became cheaper to use did not become less used. It became more used.
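The mechanism can be reduced to simple arithmetic. The sketch below uses invented illustrative numbers, not Jevons's actual data, to show how an efficiency gain raises total resource use whenever the induced demand grows faster than the efficiency does.

```python
# Illustrative numbers only (not from Jevons's data): a sketch of how a
# 2x efficiency gain can increase total coal consumption when cheaper
# output induces more than 2x the demand.

def total_coal(efficiency, demand):
    """Coal consumed = units of output demanded / output per ton of coal."""
    return demand / efficiency

# Before Watt's improvements: 1 unit of output per ton of coal, 100 units demanded.
before = total_coal(efficiency=1.0, demand=100)  # 100 tons

# After: engines are twice as efficient, so output costs half as much.
# If cheaper production makes 300 units economically viable (demand more
# than doubles), total coal use rises despite the efficiency gain.
after = total_coal(efficiency=2.0, demand=300)   # 150 tons

assert after > before
```

The same arithmetic applies to attorney hours: if AI halves the cost of producing a contract iteration, and both sides respond by demanding more than twice as many iterations, total hours rise.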
The pattern has replicated in every efficiency revolution since. Better roads did not reduce driving. They induced demand for more driving. Faster internet did not reduce time online. Americans now spend an average of seven hours per day on screens. More efficient social media algorithms did not reduce scrolling. They optimized for engagement until teens reported spending eight hours per day on platforms they described as addictive, platforms whose own internal research documented the harm.
AI in legal practice follows the identical trajectory. Reid Hoffman, who co-founded LinkedIn and invested in multiple AI companies, articulated the legal-specific version on his Possible podcast in February 2026: “Both sides are going to be using AI to generate, analyze, suggest clauses, read clauses. And it’s going to be just a lot thicker. And so it isn’t like, lawyers’ jobs are going away.” When both parties in a negotiation deploy AI to identify risks, propose clauses, and stress-test provisions, neither side accepts a thinner contract. Both sides accept a more comprehensive one. The time that AI saves on drafting converts directly into more iterations of review.
Hoffman’s observation aligns with what the Leverage Trap series documented: when a firm charges by the hour, efficiency improvements do not automatically translate to fewer billable hours. An Association of Corporate Counsel survey found that nearly 60% of in-house counsel have seen no noticeable savings from outside counsel’s AI use. Only 13% reported fewer billable hours. The saved time gets reallocated, not returned.
The Social Media Playbook: Four Patterns the Profession Should Recognize
Pattern 1: Extraction and Escalation. Social media promised efficient connection. It delivered algorithmic addiction that extracted increasing amounts of time from users while forcing competitors to match the escalation or lose market share. Meta’s own internal research found teens felt addicted to platforms and knew the content harmed their mental health but could not stop. Internal documents produced in the February 2026 LA addiction trial showed Meta engineers joking that they functioned as “drug pushers,” comparing their product to gambling and Big Tobacco. The January 2026 hearing witnesses confirmed unanimously that the addictive design was intentional. Each platform optimized for engagement, which forced every competitor to optimize harder, which extracted more time from users at every cycle.
AI tools in legal practice replicate both the extraction and the escalation simultaneously. Tool vendors optimize for adoption metrics, subscription renewals, and feature engagement. No major AI vendor optimizes for whether attorneys are practicing more competently or clients are receiving better outcomes. The HBR researchers found that workers “did more because AI made ‘doing more’ feel possible, accessible, and in many cases intrinsically rewarding.” Meanwhile, ABA Formal Opinion 512 established that attorneys must understand AI tool capabilities and limitations. When one side deploys AI for contract analysis, opposing counsel cannot ignore the capability gap without risking competence questions under Model Rule 1.1. The competitive pressure ratchets in one direction. The Leverage Trap series documented this in the billing context: firms that adopt AI honestly cannibalize their own revenue under hourly models, firms that resist lose competitive position, and firms that bill for phantom hours risk professional discipline. Each path intensifies rather than reduces the pressure.
Pattern 2: The Verification Tax Nobody Budgets For. Social media platforms promised self-regulation through content moderation. That moderation created an entirely new cost center (human reviewers, AI classifiers, appeals processes) that the platforms systematically underfunded. The testimony at the January 2026 hearing confirmed that companies suppressed their own research documenting harm, precisely because addressing the findings would have required investment that reduced engagement metrics.
AI in legal practice creates the same hidden labor layer. Every AI-generated draft requires verification. Every research memo requires citation checking. Every contract analysis requires human review against the actual document. The HBR researchers identified this as supervisory labor “almost entirely absent from corporate AI strategies.” In the legal context, the prior blog in this series on AI competence identified the same gap: the tool saves time on the first draft and costs time on verification. Attorneys who skip verification save time and assume all the risk. The net productivity gain after honest verification approaches zero, exactly what the NBER study measured.
Pattern 3: The Liability Asymmetry. Social media platforms operated under Section 230 immunity for years, bearing no liability for harms their algorithms amplified. Sen. Curtis’s bill now targets this directly: making platforms liable when algorithms knowingly amplify harmful content. The bill proposes that platforms must “own” the content their algorithms surface, allowing individuals to sue directly.
AI legal tools operate under a nearly identical liability structure. When ChatGPT generates a fabricated citation, OpenAI bears no professional consequence. The attorney bears all of it: the sanction, the malpractice claim, the bar complaint, the reputational damage. The sanctions docket now exceeds 684 documented cases of AI hallucinations in court proceedings as of December 2025. In Mata v. Avianca, the attorneys paid $5,000 in sanctions. In Noland v. Land of the Free, the sanction reached $10,000. In Buchanan v. Vuori, the court referred the attorney to the Standing Committee on Professional Conduct. The AI vendors in every case faced zero professional consequences.
Pattern 4: Regulatory Lag as Structural Feature. Congress held its first substantive social media hearing in 2018. KOSA passed the Senate 91-3 in 2024 and never reached the House. The first bellwether addiction trial began in February 2026. Eight years from hearings to trial, zero comprehensive federal legislation.
The legal profession’s AI regulatory timeline is tracking the same curve. The ABA issued Formal Opinion 512 in July 2024. Forty-two jurisdictions adopted technology competence requirements. State bars continue issuing advisory opinions. Enforcement remains inconsistent, implementation lags behind adoption, and disciplinary proceedings for AI misuse remain rare despite a sanctions docket adding new cases weekly. The profession is in the “identified the harm but hasn’t done anything structural” phase, the same position Congress occupied with social media between 2020 and 2022.
The Strongest Counterargument: Intensification as Feature
The steelmanned defense of AI-driven work intensification in legal practice runs as follows: if AI enables more thorough contract review, more comprehensive discovery analysis, faster iteration on client strategy, and broader identification of risks, then clients receive better representation. Busier attorneys producing higher-quality work is a feature, not a bug. The competitive dynamics that Hoffman described produce better outcomes for the parties that matter most: the clients.
This argument has merit. A contract that covers more corner cases protects the client better than a thinner one. Discovery review that catches more responsive documents serves justice more effectively. Research that identifies more relevant authority strengthens the brief. If AI shifts the profession from “adequate” to “comprehensive” while maintaining the same cost to the client, that is genuine progress.
The flaw lies in the assumption that intensification produces quality rather than volume. The social media version of this argument, that more engagement means better connection, turned out to be false. The engagement was addictive rather than beneficial. The HBR researchers flagged the identical risk for AI: “What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain.” Over time, overwork impairs judgment, increases error rates, and makes it harder to distinguish genuine productivity gains from unsustainable intensity. For attorneys, impaired judgment in a professional context does not produce better representation. It produces malpractice risk.
The profession’s current incentive structures do not distinguish between comprehensive analysis and productivity theater. A partner who sees associates producing more drafts at faster speeds has no metric for whether the ninth iteration improved the contract or merely added clauses no one will read. A firm that reports higher utilization rates to its management committee cannot distinguish AI-assisted quality from AI-assisted churn. Until the profession builds measurement systems that separate value from volume, the intensification will produce the same diminishing returns the social media platforms documented in their own suppressed research.
Practice-Specific Implications
Corporate and M&A Practices: The contract arms race Hoffman describes will hit transactional practices first. Due diligence review already faces commoditization pressure from AI tools. The Jevons Paradox predicts that AI-assisted due diligence will not produce faster closings. It will produce more exhaustive diligence at the same pace, with the additional verification burden falling on associates who must now confirm AI-flagged issues against source documents. Firms billing hourly for this work face the ethical bind documented in Escaping the Leverage Trap: bill for the original pace and risk overbilling, or bill for the actual reduced time and cannibalize revenue.
Litigation Practices: Discovery review has already experienced one round of technology-driven intensification through technology-assisted review. AI adds a second layer. The competitive pressure to deploy AI for brief writing, motion drafting, and case strategy analysis will mirror the contract dynamic: both sides escalate capabilities, neither side reduces effort, and the verification tax on AI-generated legal research adds a new labor category that did not exist two years ago.
Small and Mid-Size Firms: The firms documented in Parts 5 and 6 of the Leverage Trap series face a compounding problem. They lack the infrastructure to manage the verification burden at scale. A five-attorney firm that deploys AI for research and drafting must still allocate attorney time to verify every output. Without the associate leverage of larger firms, the verification tax falls on the same attorneys producing the work, eliminating the theoretical efficiency gain entirely. The platforms keep growing: LegalZoom has helped form over 4 million businesses, Rocket Lawyer reports nearly 30 million registered customers, and both continue expanding AI-powered features that absorb standardized work.
What to Do Tomorrow Morning
First, audit your actual time savings. Track the total time from AI prompt to verified final product, not just the drafting phase. Include verification, revision, and error correction. The net figure, not the gross, represents your actual productivity gain. If verification consumes the drafting savings, you know the Jevons Paradox is operating in your practice.
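This audit is a subtraction exercise. The sketch below uses hypothetical figures (all numbers invented for illustration) to show what an honest net-savings calculation looks like for one AI-assisted matter.

```python
# Hypothetical audit of one AI-assisted research memo. All figures are
# invented for illustration; the point is the accounting, not the numbers.

def net_savings_hours(baseline, drafting, verification, correction):
    """Hours gained vs. the pre-AI baseline after counting ALL AI-related labor,
    not just the drafting phase."""
    return baseline - (drafting + verification + correction)

# Pre-AI baseline: the memo took 6.0 hours start to finish.
# With AI: 1.5 hours prompting and drafting, 2.8 hours verifying citations
# against sources, 1.5 hours correcting errors the tool introduced.
net = net_savings_hours(baseline=6.0, drafting=1.5, verification=2.8, correction=1.5)

# Gross savings looked like 4.5 hours; the net figure is a thin margin,
# in line with the near-zero gains the NBER study measured.
print(f"Net hours saved: {net:.1f}")
```

If the net figure hovers near zero across matters while gross drafting savings look large, the verification tax is consuming the gain, and the Jevons Paradox is operating in your practice.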
Second, separate quality from volume. Establish metrics that distinguish between more output and better output. A contract with 40 additional clauses is not inherently superior to one with 25 well-drafted provisions. A brief citing 30 authorities is not stronger than one citing the 12 most relevant. One concrete measure: track the ratio of AI-generated content that survives partner review unchanged versus content that requires substantive revision. If 60% or more of AI-assisted output needs reworking before it reaches the client, the tool is generating volume, not quality. Build review processes that evaluate whether AI-assisted work product serves the client’s interests or merely fills pages.
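The survival-ratio metric described above reduces to a single division. This sketch applies it to hypothetical review data (the sample counts are invented) using the 60%-rework threshold suggested in the text.

```python
# Hypothetical tracking of AI-assisted drafts through partner review.
# Sample counts are invented for illustration.

def survival_ratio(sections_unchanged, sections_total):
    """Share of AI-generated sections that passed review without substantive revision."""
    return sections_unchanged / sections_total

# Say 14 of 40 AI-drafted contract sections survived partner review unchanged.
ratio = survival_ratio(14, 40)        # 0.35 survived
rework_share = 1 - ratio              # 0.65 required substantive revision

# Applying the 60% threshold from the text: if 60% or more of AI-assisted
# output needs reworking before it reaches the client, the tool is
# generating volume, not quality.
if rework_share >= 0.60:
    print("Volume, not quality: most AI output required substantive revision.")
```

Tracked over a quarter rather than a single matter, the same ratio gives a management committee a number that utilization rates cannot provide: whether AI-assisted work is improving or merely multiplying.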
Third, budget for the verification tax. Every AI adoption plan should include a line item for supervisory labor. If your firm projects 20% time savings from AI-assisted drafting, allocate at least half that savings to verification. This is not pessimism. It is what the NBER data, the HBR study, and 684 documented hallucination cases predict.
Fourth, learn from social media’s regulatory failure. Do not wait for bar associations to mandate specific AI safeguards. By the time comprehensive regulation arrives, the profession will already have a decade of harms on the record. Implement verification protocols, document AI policies, and train every attorney now. Formal Opinion 512 provides the framework. The discipline to implement it is the firm’s responsibility.
The Pattern Recognition Test
Sen. Curtis told the Commerce Committee he believes a future hearing will force social media executives to admit they knew their products caused harm and suppressed the evidence, just as tobacco executives did in the 1990s. The internal documents from the LA addiction trial, where Meta engineers joked about being “drug pushers,” suggest that hearing may arrive sooner than the senator expects.
The legal profession does not need to wait for its own version of that hearing. The attorneys who recognize the pattern have a window to build practices that treat AI as a tool with a verification cost, not an efficiency miracle with no strings attached. The attorneys who mistake intensification for productivity will discover what social media users, congressional witnesses, and the 1,600 plaintiffs in the LA Superior Court already know: the hours do not come back.
This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.
About the Author
JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.
Connect: LinkedIn | X | Bluesky
References
ABA Model Rules of Professional Conduct, Rule 1.1, Comment 8 (Technology Competence)
ABA Formal Opinion 512 (July 2024): Generative Artificial Intelligence Tools
Association of Corporate Counsel, 2025 Chief Legal Officers Survey (AI adoption and outside counsel savings data)
Buchanan v. Vuori, Inc. (C.D. Cal. Dec. 2025) (referral to Standing Committee on Professional Conduct)
Charlotin, D., AI Hallucinations in Judicial Decisions Database (December 2025): 684 documented cases
Curtis, Sen. John, Remarks at Senate Commerce Committee Hearing, “Plugged Out: Examining the Impact of Technology on America’s Youth” (January 15, 2026)
Hoffman, Reid and Aria Finger, “Does AI Really Save Time?” Possible Podcast, Riff 045 (February 2026)
Humlum, A. & Vestergaard, E., “Large Language Models, Small Labor Market Effects,” NBER Working Paper No. 33777 (May 2025)
Jevons, William Stanley, The Coal Question (1865)
K.G.M. v. Meta Platforms, Inc. et al., Los Angeles Superior Court (February 2026) (first social media addiction bellwether trial)
Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023): $5,000 sanctions for fabricated AI citations
Meta Platforms, Inc., Internal research documents produced in discovery, Knight-Georgetown Institute analysis (2026)
Morris, JD. “AI Won’t Take Your Job. The Attorney Who Uses It Better Will.” Parts 1-2 (Morris Legal Technology Blog)
Morris, JD. “Escaping the Leverage Trap” Parts 1-6 (Morris Legal Technology Blog)
Morris, JD. “The Two Disruptions Nobody Is Discussing” (Morris Legal Technology Blog)
Noland v. Land of the Free, Inc. (Cal. Super. Ct. 2025) ($10,000 sanction for AI-generated errors)
Ranganathan, Aruna and Xingqi Maggie Ye, “AI Doesn’t Reduce Work — It Intensifies It,” Harvard Business Review (February 10, 2026)
S. 626, 119th Congress (2025-2026): SOCIAL MEDIA Act
Sokolove Law, “Social Media Addiction Statistics 2026” (citing Pew Research, Gallup, and National Library of Medicine data)
U.S. Senate Committee on Commerce, Science, and Transportation, “Plugged Out: Examining the Impact of Technology on America’s Youth” hearing record (January 15, 2026)