
THE TECHNOLOGY BLIND SPOT
In 2007, a close friend and former Intel process engineer got a call from his manager. Intel was building Fab 68 in Dalian, China. They needed someone who knew the fabrication process at the cellular level, someone who had spent years at the Santa Clara facility learning the intricate choreography of a 300-millimeter wafer line. He was that person. The trip would be temporary. He packed for six months.
He stayed through 2009. He trained and managed the Chinese operation, transferring two decades of accumulated process judgment, the kind that lives in hands and habits and the instinct to catch a problem before it becomes a yield loss, into a workforce earning roughly one-tenth of what he made in California. When he returned to the United States, Intel laid him off. The knowledge had arrived. He had not.
The company eventually rehired him as a contractor at less than half his prior rate. The expertise that made him valuable enough to send to China made him replaceable once China no longer needed a teacher. His marriage did not survive the financial and professional disruption that followed. The semiconductor brain drain he had personally carried across the Pacific rippled outward for years: the Intel Dalian facility grew into a $2.5 billion operation, the U.S. defense supply chain became dependent on Taiwan, South Korea, and China for the chips that run everything from smartphones to missile guidance systems, and a generation of American process engineers who built that knowledge base found themselves obsolete in the country that produced them.
That was 2007. The mechanism running in 2026 is faster, more precisely targeted, and operating inside the legal profession right now.
Katya’s story runs the same sequence on a shorter timeline. Years in content marketing ended when automation compressed the field faster than she could move through it. A LinkedIn post promised copywriting work at $45 an hour. She clicked, found herself at a page for a company called Mercor, and received instructions to interview on camera with an AI named Melvin. What Mercor wanted was not her copywriting skill. It wanted her expertise to teach a language model how to replicate that skill well enough that nobody would need to hire Katya again. She told investigators from The Verge and New York Magazine, whose joint investigation was published this week, that the whole thing seemed like the sketchiest job offer she had ever seen. She took it anyway. She needed the work.
Lawyers occupy the same queue. Mercor pays them $110 to $130 an hour to draft legal questions, evaluate AI-generated legal responses, and structure domain-specific problems for model training. Physicians earn $130 to $170 an hour reviewing datasets for AI-assisted primary care tools. Scientists, historians, and bankers fill adjacent positions. The company is not a marginal player. Mercor raised $350 million in October 2025 at a $10 billion valuation, a fivefold increase since its prior raise in February of that year. Its 30,000 contractors collectively earn more than $1.5 million per day. The attorneys doing this work are not footnotes in a labor transition story. They are the infrastructure funding the next generation of legal AI.
The gig labor story is disturbing and visible. The part of the story that should concern managing partners is neither visible nor limited to displaced workers. Every law firm deploying AI today confronts the same extraction mechanic, running not through a gig platform but through the tools already on firm computers. The mine does not require a catastrophe to activate. It runs through routine practice.
The Market Behind the Mirror
Machine learning models require labeled, structured, human-generated data to improve. Early data labeling used crowdsourced workers to identify objects in images and categorize text. That approach cannot produce legal reasoning. The gap between what frontier AI legal tools can currently do and what attorneys can do reflects, in part, the gap between generic training data and expert professional judgment. Mercor’s business model closes that gap by recruiting the professionals AI is simultaneously displacing to supply the judgment the model lacks.
Mercor co-founder Adarsh Hiremath described the competitive opening that drove his company’s 2025 growth: a competitor, Scale AI, became compromised when Meta paid $14.3 billion for a 49% stake and its founder departed for the social media company. AI labs including Google and OpenAI reportedly cut ties with Scale AI over neutrality concerns. Mercor stepped into the void. It now counts the world’s top five AI labs among its clients and pays out more than $1.5 million daily for work that would have been ordinary professional practice five years ago. The attorneys who work for it do not call this a conflict of interest. They call it supplemental income.
Anthropic researchers published a working paper in early March 2026 that named the scenario candidly: a “Great Recession for white-collar workers.” The paper noted that a doubling of unemployment in the top quartile of AI-exposed occupations, from 3% to 6%, would be clearly detectable. It has not happened yet. The researchers were careful to say it absolutely could. Nobel laureates Daron Acemoglu and Simon Johnson, writing with MIT economist David Autor, warned in a concurrent Hamilton Project paper that pure automation technologies “commodify human expertise, rendering it less valuable and potentially superfluous.” The legal profession is not exempt from that arithmetic. And the attorneys currently teaching AI to reason about law are not exempt from the logic my friend lived through in Dalian.
The Worker in the Middle
One worker quoted in the New York Magazine investigation described the experience directly: her job had been eliminated by AI, and she was then invited to train the model to perform a worse version of it. “It’s like being asked to dig your own grave,” a screenwriter in the same investigation said. The legal rubrics attorneys produce for Mercor, the structured chains of reasoning they encode, the case analysis frameworks they articulate for model evaluation, all become training signal. The model learns what good legal reasoning looks like by watching someone who spent years developing that judgment demonstrate it, one task at a time, at $120 an hour.
There is a professional responsibility dimension that the New York Magazine investigation did not explore and the profession has not resolved. ABA Model Rule 1.6 protects information relating to the representation of a client, and that protection does not expire when the representation ends or when the representing attorney loses her position. An attorney who encoded analysis from prior representations into an AI training rubric, even without naming the client or matter, may be disclosing information that Rule 1.6(c) requires reasonable efforts to protect. The ABA has not issued a formal opinion on AI training gig work by displaced attorneys. It should. The question is not whether the work pays. The question is what it discloses in the process.
My friend from Intel was not given a formal ethics opinion when his manager asked him to transfer two decades of process knowledge to a lower-cost workforce. He was given a plane ticket. The profession does not usually offer warnings before the mine opens. It offers retrospective analyses after the damage is measurable.
The Institutional Layer
Attorneys working gig queues represent the visible end of the knowledge transfer. The invisible end runs through active practice. When an attorney pastes a client intake summary into an AI drafting tool, the model processes more than a document. It processes the firm’s intake logic: how the firm frames cases, what risk language it applies, how it structures facts for maximum persuasive effect. Many AI vendors retain this input data. Some use it to improve their models. The terms governing this practice sit in vendor service agreements that most law firms signed without review by anyone who understood the implications.
Three categories of institutional knowledge face the clearest exposure. Client intake patterns capture which matters a firm accepts and how it assesses risk, information that competitive intelligence operations would pay for directly. Case strategy templates encode the legal arguments a firm returns to repeatedly across a practice area, the framing choices that distinguish its approach from competitors. Fee negotiation history, processed through AI billing tools, exposes the number below which the firm has historically agreed to go. None of this data carries a trade secret designation in the vendor agreement. All of it trains the next version of the tool.
[See “Your AI Tool Doesn’t Keep Secrets,” Morris Legal Technology Blog; “17 Subprocessors: The Hidden Risk in Your AI Vendor Stack,” Morris Legal Technology Blog.]
This mechanism runs parallel to the one private equity deployed when it began acquiring law firm operational infrastructure through the managed services organization structure. As documented on this blog in “The Private Equity Playbook” and “The Gathering Storm,” PE firms do not need to own a law firm to extract value from it. They need only own everything surrounding the practice: the marketing platform, the billing infrastructure, the technology stack. The MSO captures the operational margin while the law firm retains nominal ownership of its work.
AI training captures the cognitive layer through the same logic. The vendor does not need to own the firm’s legal judgment. It needs only to observe that judgment, repeatedly, at sufficient scale, to encode it. The firm retains its attorneys. The vendor retains the training data those attorneys generate. Over time, the distinction between those two assets narrows in ways most law firm partners have not yet considered. [See “The Private Equity Playbook: What Happened to Physicians Is Coming for Lawyers,” Morris Legal Technology Blog, November 2025.]
The Counterargument
A strong counterargument deserves direct engagement. AI training labor markets create income for professionals navigating a difficult market transition, at rates that frequently exceed first-year associate salaries, without the billable hour pressure. If AI tools genuinely improve legal reasoning, the profession gains a productivity multiplier that could expand total demand for legal services, a pattern documented in the Jevons Paradox analysis elsewhere on this blog. And the claim that AI vendors universally train on client data overstates what can be demonstrated. Enterprise-grade vendors with contractual data isolation commitments represent a materially different risk profile than consumer-grade tools with permissive training data clauses. Categorical alarm about all AI vendor relationships is not analysis.
The distinction that matters: most law firms today deploy both categories of tool without systematically distinguishing between them. The risk concentrates in that gap. The question is not whether all AI vendors extract institutional knowledge. The question is whether your firm knows which of its vendors do, and whether the attorneys using those tools have been told the difference.
Where the Argument Breaks Down
The causal chain from “attorney uses AI tool” to “firm’s strategy ends up shaping a competitor’s model” requires several steps that do not always connect. Vendors operating under genuine data isolation contracts, particularly those with attorney-specific confidentiality terms, interrupt the chain at the source. Even where training does occur on input data, demonstrating a specific competitive disadvantage attributable to that training requires a precision most firms cannot achieve. This argument is strongest as a risk management posture, not a proven mechanism of specific harm. Treat it accordingly: as a reason to review vendor agreements carefully, not as evidence that every AI interaction is already compromised.
What Changes by Thursday
Pull the data retention and training provisions from every AI vendor agreement currently in use at the firm. A standard review takes less than two hours for an associate familiar with commercial contracts. The questions are concrete. Does the vendor retain input data after the session ends? Under what terms? Does the vendor use customer data to train or improve its models? What controls exist on subprocessor access? If the agreement is silent, the answer defaults to permissive. Silence is not neutrality in vendor contract drafting. It is a choice that favors the vendor.
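The review logic above reduces to a simple decision rule. The sketch below is purely illustrative: the field names, risk labels, and example vendors are hypothetical inventions, not terms from any real agreement. What it encodes is the one rule the paragraph insists on, that a silent agreement defaults to the vendor-favorable answer.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema mirroring the four review questions in the text.
# None means the agreement is silent on that point.
@dataclass
class VendorAgreement:
    vendor: str
    retains_input_after_session: Optional[bool]
    trains_on_customer_data: Optional[bool]
    limits_subprocessor_access: Optional[bool]

def effective_risk(a: VendorAgreement) -> str:
    """Classify an agreement, treating silence as permissive."""
    # Silence defaults to the vendor-favorable answer in each case.
    retains = True if a.retains_input_after_session is None else a.retains_input_after_session
    trains = True if a.trains_on_customer_data is None else a.trains_on_customer_data
    limits = False if a.limits_subprocessor_access is None else a.limits_subprocessor_access
    if trains:
        return "high"       # customer input feeds the next model version
    if retains or not limits:
        return "elevated"   # data persists, or flows to subprocessors
    return "contained"

# An enterprise tool with explicit isolation terms versus a consumer
# tool whose terms are silent on every question.
enterprise = VendorAgreement("EnterpriseTool", False, False, True)
consumer = VendorAgreement("ConsumerTool", None, None, None)
print(effective_risk(enterprise))  # contained
print(effective_risk(consumer))    # high
```

The point of the sketch is the default: the consumer-grade tool answers none of the four questions, and under the silence-is-permissive rule it lands in the highest risk tier without a single explicit clause.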
Firms that complete this review consistently find the same two things: their enterprise-grade tools carry better protections than assumed, and their attorneys’ consumer-grade AI usage is far more permissive than the firm realized. The risk lives in the gap between those two categories. That gap is where Mercor’s training data ultimately originates, not from gig workers who lost their jobs, but from the professionals still at their desks, using tools without reading what happens to the session after it closes.
I worked for Pat Gelsinger. His intelligence was genuine. His grasp of what the semiconductor industry had surrendered over two decades was precise and unsentimental. He lobbied harder than any CEO in America for the CHIPS Act, secured nearly $8 billion in federal grants for domestic manufacturing, and laid out a multi-year plan to rebuild what companies like Intel had spent a decade offshoring. His board forced him out on December 1, 2024, before the project could finish, concluding that his turnaround was not moving fast enough for investors who had grown accustomed to the returns that offshoring produced. The man who tried to seal the mine got extracted himself. Not because he was wrong about the problem. Because twenty years of extraction had created a market structure that one determined person could identify, could analyze, could fight to reverse, and still could not move alone.
Katya still works for Mercor. My friend from Intel reinvented himself in an industry that had no further use for what he spent his career building. The question managing partners should ask is not whether displaced professionals are teaching AI to replace them. That answer is running at $1.5 million per day. The question is whether your firm’s institutional knowledge is enrolled in the same process, one chat session at a time, through tools your attorneys use without thinking about what happens when the window closes.
The mine does not announce itself. By the time someone tries to reverse it, the knowledge is already gone.
About the Author
JD Morris is Co-Founder and COO of LexAxiom, an AI platform for the business of law. He holds a Master of Legal Studies from Texas A&M University School of Law, a Master of Engineering from George Washington University, and dual MBAs from Columbia Business School and UC Berkeley Haas. He writes the Morris Legal Technology Blog under the series banner “The Technology Blind Spot.” Connect with him on LinkedIn at www.linkedin.com/in/jdavidmorris, on X at @JDMorris_LTech, or on Bluesky at @JDMorris-ltech.bsky.social.
References
[1] Casey, Nora, and Nilay Patel. “You Could Be Next.” New York Magazine / The Verge. March 9, 2026.
[2] Mercor. “Unlocking Human Potential in the AI Economy.” Mercor Blog (Series C Announcement). October 27, 2025. mercor.com/blog/series-c/
[3] Temkin, Marina. “Mercor Quintuples Valuation to $10B with $350M Series C.” TechCrunch. October 27, 2025.
[4] Feiner, Lauren. “AI Startup Mercor Now Valued at $10 Billion with New $350 Million Funding Round.” CNBC. October 27, 2025.
[5] “Get Paid to Train AI: The New Side Hustle for Professionals.” Built In. (Attorney pay rates $110-$130/hour verified against Mercor rate disclosures.)
[6] Massenkoff, Maxim, and Peter McCrory. “Labor Market Impacts of AI: A New Measure and Early Evidence.” Anthropic Working Paper. March 2026. Cited in: Telford, Taylor. “Anthropic Just Mapped Out Which Jobs AI Could Potentially Replace.” Fortune. March 6, 2026.
[7] Acemoglu, Daron, Simon Johnson, and David Autor. “Building Pro-Worker Artificial Intelligence.” Hamilton Project Paper. March 2026.
[8] Intel Corporation. “Intel to Build 300mm Wafer Fabrication Facility in China.” Press Release. March 26, 2007. (Fab 68 Dalian: $2.5 billion investment; construction began September 2007; production commenced October 2010.)
[9] Al Jazeera. “Intel CEO Pat Gelsinger Forced Out in Surprise Move.” December 2, 2024.
[10] Fortune. “Intel CEO Forced Out After Board Grew Frustrated with Progress.” December 2, 2024. (CHIPS Act federal grant: $7.86 billion.)
[11] ABA Model Rules of Professional Conduct, Rule 1.6(c) (Confidentiality; Reasonable Efforts to Prevent Disclosure).
[12] ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512 (July 29, 2024): Generative Artificial Intelligence Tools.
[13] Morris, JD. “Your AI Tool Doesn’t Keep Secrets.” Morris Legal Technology Blog.
[14] Morris, JD. “17 Subprocessors: The Hidden Risk in Your AI Vendor Stack.” Morris Legal Technology Blog.
[15] Morris, JD. “The Private Equity Playbook: What Happened to Physicians Is Coming for Lawyers.” Morris Legal Technology Blog. November 2025.
[16] Morris, JD. “The Gathering Storm.” Morris Legal Technology Blog.
[17] Morris, JD. “Jevons Paradox and Legal AI.” Morris Legal Technology Blog.