
How Deepfake Wire Fraud Turns Trust Accounts Into the Legal Profession’s Biggest Liability
THE TECHNOLOGY BLIND SPOT
In March 2025, a Pennsylvania estate lost $442,600 in a single afternoon. The money, proceeds from the sale of a late sheriff’s deputy’s house in Florida, sat in a law firm’s trust account awaiting distribution to the family. A cybercriminal spoofed the email of an attorney at one of the firms handling the transaction and sent revised wiring instructions to the attorney at the other firm. That attorney transferred the funds. By the time anyone checked with the real sender, the money had vanished into accounts controlled by the attacker.
The family filed suit against both law firms. DeLuca et al. v. SutterWilliams LLC et al. landed in court as a negligence and legal malpractice claim. The allegation: the attorney who executed the wire failed to verify the instructions through an independent channel before releasing client funds. The firms now face liability not because their legal work failed, but because their cybersecurity practices did.
Deepfake voice cloning and AI-generated video have rendered the primary defense attorneys rely on for wire verification, the confirmation phone call, fundamentally unreliable. Law firms that continue to treat voice-based callback procedures as sufficient protection for trust account disbursements face malpractice exposure, bar discipline under Model Rules 1.1, 1.6, and 1.15, and insurance gaps that leave them personally liable for client losses.
The Confirmation Call Is Dead
For two decades, the standard defense against fraudulent wire instructions has been a phone call. Receive wiring details by email, pick up the phone, call the sender at a known number, and confirm. Bar associations recommended it. Malpractice insurers required it. Title companies mandated it.
That defense assumed the human voice could not be faked in real time. That assumption collapsed in 2024.
In February of that year, a finance worker at Arup, a multinational engineering firm, joined a video call with the company’s CFO and several senior executives to discuss a confidential transaction. Every face on the screen matched a real person. Every voice sounded exactly right. The finance worker authorized 15 transfers totaling $25 million to accounts in Hong Kong. None of the people on the call were real. Arup’s Chief Information Officer later confirmed that no systems had suffered a compromise and no data loss occurred. The firm’s employees had simply believed they were conducting genuine transactions with colleagues they recognized by face and voice.
By March 2025, the pattern repeated in Singapore. A finance director joined what appeared to be a routine Zoom call with the CFO and other executives. After discussing an urgent acquisition, the director authorized a $499,000 transfer. Every participant on the call turned out to be a deepfake.
The technology that enabled these attacks now costs less than a streaming subscription. A 2024 McAfee study found that 1 in 4 adults had already encountered an AI voice scam. Voice cloning requires as little as three seconds of audio from an earnings call, a conference panel, or a podcast interview. A 2025 study by Barrington and Farid published in Scientific Reports found that listeners correctly identified AI-generated voices only about 60% of the time, barely better than a coin flip.
The Numbers Behind the Threat
The FBI’s Internet Crime Complaint Center reported $2.77 billion in business email compromise losses across 21,442 complaints in 2024, making BEC the second most financially devastating category of cybercrime. Total reported losses reached $16.6 billion, a 33% increase over 2023. Between 2022 and 2024, BEC alone accounted for $8.5 billion in cumulative losses.
Real estate transactions, the single largest source of law firm wire transfers, bore a disproportionate share. CertifID’s 2025 State of Wire Fraud report estimated $500 million in real estate wire fraud losses for 2024. Seventeen percent of title companies surveyed reported sending client money to fraudulent accounts. Only 19% of firms that suffered losses recovered all of the stolen funds.
Deepfake attacks represent the newest and fastest-growing vector within these losses. Global losses from deepfake-enabled fraud exceeded $200 million in the first quarter of 2025 alone. A Deloitte survey found that more than 1 in 4 executives reported their organizations had experienced at least one deepfake incident in 2024, with 50% expecting attacks to increase. CFO Magazine reported that 92% of companies had experienced financial loss attributable to a deepfake.
The Ethics Framework: Three Rules, One Obligation
Model Rule 1.15 (Safekeeping Property) requires attorneys to hold client funds separate from personal funds and to maintain complete records of all trust account transactions. Comment 1 specifies that a lawyer “must hold that property separate from the lawyer’s own property” and “should maintain on a current basis books and records in accordance with generally accepted accounting practice.” The rule imposes strict liability for mishandling client funds. Courts and bar disciplinary authorities have consistently treated trust account violations as among the most serious offenses an attorney can commit, frequently resulting in suspension or disbarment.
A wire transfer to a fraudulent account constitutes a disbursement of client property to an unauthorized recipient. The fact that the attorney acted in good faith after being deceived does not eliminate the ethical violation. As DeLuca demonstrates, courts may impose liability even when the attorney followed standard procedures, if those procedures failed to meet the standard of reasonable care.
Model Rule 1.1, Comment 8 requires attorneys to “keep abreast of the benefits and risks associated with relevant technology.” When deepfake voice cloning can defeat a callback verification procedure, continuing to rely on that procedure without additional safeguards fails the competence standard. The technology threat has evolved. The defensive response must evolve with it.
Model Rule 1.6(c) requires “reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” ABA Formal Opinion 477R (2017) extended this obligation to electronic communications, requiring attorneys to assess the sensitivity of information and select security measures proportionate to the risk. ABA Formal Opinion 483 (2018) further specified that attorneys must monitor for cybersecurity incidents and respond promptly when breaches occur.
Trust account wire transfers involve the most sensitive category of client property. The combination of Rules 1.15, 1.1, and 1.6(c) creates a clear obligation: attorneys handling wire disbursements must implement verification procedures that account for the current threat environment, including the demonstrated capacity of attackers to clone voices and generate convincing video in real time.
The Counterargument: “We Already Verify”
The strongest version of the opposing position runs like this: law firms already follow recommended verification protocols. They call back on known numbers. They confirm with clients directly. They train staff to spot phishing. Requiring additional safeguards imposes unreasonable cost on routine transactions and creates operational friction that slows closings, delays settlements, and frustrates clients who expect efficient fund transfers.
This argument has genuine merit on its operational premise. Verification friction does slow transactions. Clients do expect speed. And the overwhelming majority of wire transfers execute without incident. A 2025 Qualia survey found that 66% of title and escrow professionals encountered seller impersonation attempts, meaning a third did not. The base rate for successful fraud remains low relative to total transaction volume.
The flaw is in the risk calculus. A single successful deepfake wire fraud can eliminate an entire trust account, exposing every client whose funds resided in that account. Unlike a data breach, where the harm is informational and the remediation is containable, a trust account loss is immediate, total, and frequently unrecoverable. Only 19% of firms in the CertifID survey recovered all lost funds. Malpractice insurers increasingly exclude social engineering losses from standard policies, and many policies do not cover IOLTA accounts at all. The Integreon analysis of the DeLuca litigation noted that “not all cyber insurance policies provide coverage for money held on behalf of others such as IOLTA escrow accounts used for transactional purposes.” An attorney who loses trust account funds to a deepfake may discover that no insurance policy covers the loss, leaving personal assets exposed.
Practice-Specific Implications
Real estate practitioners face the most concentrated exposure. Residential closings involve six-figure wire transfers between multiple parties, often coordinated through email chains that attackers can monitor for weeks before striking. FBI data shows real estate wire fraud growing from under $9 million in 2015 to $446 million in 2022 and an estimated $500 million in 2024. CertifID found that 1 in 4 parties in a real estate transaction reported being targeted by fraud.
Litigation attorneys handling settlement disbursements confront similar risk. A $2 million mediated settlement transferred from a trust account on fraudulent instructions exposes the attorney to the full amount. Opposing counsel relationships, where professional courtesy and established trust expedite communications, create exactly the conditions deepfake attackers exploit.
Corporate and M&A practitioners managing escrow accounts for acquisitions handle transactions where a single wire can exceed $10 million. The Arup case demonstrated that attackers will invest significant resources in deepfake production when the potential payout justifies the effort. An escrow account holding acquisition proceeds represents the kind of high-value target that motivates sophisticated, multi-participant deepfake video calls.
What to Do Before Friday
Replace voice-only verification with multi-channel authentication. Confirm wire instructions through at least two independent channels. A phone call plus an encrypted email or secure portal confirmation. If the client or counterparty initiated the instructions by email, verify through a different medium entirely. Never treat a single callback as sufficient for disbursements above your firm’s risk threshold.
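For firms that track verification steps in software, the multi-channel rule above can be reduced to a simple policy check. The following Python sketch is illustrative only: the channel names, the dollar threshold, and the `Confirmation` record are assumptions, not a real firm's policy engine.

```python
from dataclasses import dataclass

# Hypothetical policy sketch: a disbursement may proceed only when it has
# been confirmed through channels independent of the one the instructions
# arrived on. Threshold and channel labels are example values.

RISK_THRESHOLD = 25_000  # firm-defined high-value cutoff (example figure)

@dataclass
class Confirmation:
    channel: str      # e.g. "phone", "secure_portal", "encrypted_email"
    verified_by: str  # staff member who performed the check

def instructions_verified(amount: float,
                          arrival_channel: str,
                          confirmations: list[Confirmation]) -> bool:
    """Return True only if the wire may be released under the policy."""
    if amount < RISK_THRESHOLD:
        # Lower-value wires still require one independent confirmation.
        return any(c.channel != arrival_channel for c in confirmations)
    # High-value wires: at least two DISTINCT channels, none matching
    # the channel the instructions originally came in on.
    independent = {c.channel for c in confirmations
                   if c.channel != arrival_channel}
    return len(independent) >= 2
```

The key design point is that a callback alone never clears a high-value wire: the set of independent channels must contain at least two distinct entries, so a second call on the same channel does not count.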
Establish pre-agreed authentication codes with repeat transaction parties. Before any engagement involving anticipated wire transfers, establish a shared passphrase or rotating code with the client. Document the code in the engagement letter. Require the code for any change to previously provided wiring instructions.
Implement mandatory waiting periods for changed wiring instructions. Any request to modify previously confirmed wire routing, regardless of source, triggers a mandatory 24-hour hold and independent verification through the original contact information on file. Urgency is the primary tool of social engineering. The waiting period neutralizes it.
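The hold rule is mechanical enough to encode directly: any routing change resets a timestamp and clears a re-verification flag, and the wire cannot release until both conditions are satisfied again. The sketch below assumes hypothetical field names and a 24-hour hold; it is a model of the policy, not a production disbursement system.

```python
from datetime import datetime, timedelta

HOLD = timedelta(hours=24)  # mandatory waiting period after any change

class WireRequest:
    """Tracks whether a wire may be released under the 24-hour hold rule."""

    def __init__(self, routing: str, confirmed_at: datetime):
        self.routing = routing
        self.last_change = confirmed_at
        self.reverified = True  # original instructions were verified

    def change_routing(self, new_routing: str, when: datetime) -> None:
        """Any modification, regardless of source, restarts the clock
        and forces re-verification via the original contact on file."""
        self.routing = new_routing
        self.last_change = when
        self.reverified = False

    def mark_reverified(self) -> None:
        self.reverified = True

    def may_release(self, now: datetime) -> bool:
        # Both conditions must hold: independent re-verification logged
        # AND the full waiting period elapsed since the last change.
        return self.reverified and (now - self.last_change) >= HOLD
```

Because urgency is the attacker's lever, the check is deliberately unconditional: even a re-verified change cannot release before the hold expires.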
Audit your malpractice and cyber insurance coverage for trust account exclusions. Request written confirmation from your carrier that IOLTA and escrow account losses from social engineering attacks, including deepfake impersonation, fall within your policy’s coverage. If they do not, you are personally liable for any loss. As I discussed in a prior post on cyber insurance compliance, the gap between what attorneys assume their policies cover and what insurers actually pay remains one of the profession’s most dangerous blind spots.
Train every person in the firm who touches wire transfers. The finance clerk who executes a wire and the partner who authorizes it both need to understand that a convincing voice on the phone or a familiar face on a video call no longer constitutes reliable identification. Demonstrate deepfake audio and video examples in training. When people hear a cloned voice for the first time, the abstract threat becomes visceral.
The Sheriff’s Deputy’s Family
The family in the DeLuca case did everything right. They hired attorneys to handle the estate. They trusted the legal system to protect the proceeds of their father’s home. They followed the process.
The process failed them because the attorneys on both sides of the transaction relied on verification methods that the threat environment had already outpaced. The spoofed email looked authentic. The wiring instructions appeared routine. The $442,600 moved in seconds and disappeared.
Somewhere in Pennsylvania, the family of a sheriff’s deputy is sitting in a courtroom trying to recover his estate from the attorneys he trusted to protect it. The $442,600 from the sale of his Florida home is gone. The independent verification that should have stopped the transfer never happened. And two law firms that built their practices on client trust now face the reckoning every firm handling client money will eventually confront: the technology changed, and they didn’t.
This blog provides general information for educational purposes only and does not constitute legal advice. Consult qualified counsel for advice on specific situations.
About the Author
JD Morris is Co-Founder and COO of LexAxiom. With over 20 years of enterprise technology experience and credentials including an MLS from Texas A&M, MEng from George Washington University, and dual MBAs from Columbia Business School and Berkeley Haas, JD focuses on the intersection of legal technology, cybersecurity, and professional responsibility.
LinkedIn: www.linkedin.com/in/jdavidmorris | X: @JDMorris_LTech | Bluesky: @JDMorris-ltech.bsky.social
References
ABA Model Rule 1.1, Comment 8 (Technology Competence) (2012)
ABA Model Rule 1.6(c) (Confidentiality – Reasonable Efforts)
ABA Model Rule 1.15 (Safekeeping Property)
ABA Formal Opinion 477R (May 2017) – Securing Communication of Protected Client Information
ABA Formal Opinion 483 (October 2018) – Lawyers’ Obligations After Electronic Data Breach or Cyberattack
DeLuca et al. v. SutterWilliams LLC et al. (March 2025) – Negligence and Malpractice Suit, Law360
FBI Internet Crime Complaint Center (IC3) – 2024 Internet Crime Report (April 2025)
Arup Deepfake Fraud – CNN Business / Financial Times / The Guardian (February–May 2024)
CertifID – 2025 State of Wire Fraud Report
Qualia – 2025 Special Report: Real Estate Wire Fraud Trends
Deloitte – Generative AI and the Fight for Trust Survey (2024)
CFO Magazine – Deepfake Financial Loss Survey (November 2024)
McAfee – AI Voice Scam Survey (2024)
Barrington, S. & Farid, H. – “People Are Poorly Equipped to Detect AI-Powered Voice Clones” – Scientific Reports, Nature (2025)
Integreon – Cybersecurity, Wire Fraud, and Attorney Liability (April 2025)
Morris, JD – “Why Hackers Target Law Firms: Where All the Secrets Are Buried” – Morris Legal Technology Blog
Morris, JD – “I Was Inside EMC When Hackers Stole the Keys to 40 Million Doors” – Morris Legal Technology Blog
Morris, JD – “Your Cyber Insurance Policy Has a Hole in It” – Morris Legal Technology Blog