Why this matters now
Privilege is now an architectural question.
In November 2025, the UK Upper Tribunal handed down Munir v Secretary of State for the Home Department — the first English judgment to address whether public AI services destroy legal privilege. The ruling was unequivocal:
Case · UK
Munir v SSHD [2026] UKUT 81
"To put client letters and decision letters from the Home Office into a public AI tool, … is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege."
The Tribunal also drew an explicit line between public AI platforms and "closed-source AI tools which do not place information in the public domain" — describing the latter as acceptable for exactly these tasks. Privilege is not suspended. It is gone. Permanently. The supervising solicitor was referred to the SRA. The firm was referred to the ICO.
Self-hosted, audit-chained, never-leaves-the-firm is no longer a sales pitch. It is a practising-certificate question. DONNA is built for that question.
And the law is converging at three layers — the bench, the contracts behind the tooling, and the regulatory pipeline. Each points in the same direction.
Commentary · US
Colorado: the wrapper-contract trap
"If your legal tech product is a wrapper, it may not be all it's wrapped up to be."
Carolyn Elefant — lawyer, AI-skills trainer, founder of MyShingle.com — flagged the structural risk in a recent LinkedIn analysis. A federal district court in Colorado ruled that AI tools may be used on documents under protective order only if contractual privacy protections exist: no training on the data, no third-party disclosure, deletion on request.
The trap for legal-tech: most products are wrappers on a foundation model owned by someone else. The Colorado standard requires the contractual protections to run at both layers — between the firm and the legal-tech vendor, AND between the vendor and the foundation-model provider behind the curtain.
"A robust no-training clause in your contract with the legal tech company does not help if the contract between the legal tech company and the foundational model lacks the same commitment."
DONNA's answer: eliminate the second layer. Self-hosted on the firm's infrastructure, pointed at whatever model the firm hosts alongside it — an architectural fix for a contractual problem.
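The "no second layer" property is ultimately a deployment check: every model endpoint the system talks to must resolve to infrastructure the firm controls. A minimal sketch of such a guard, with a hypothetical allowlist (these hostnames are illustrative, not DONNA's configuration):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of firm-controlled model hosts -- illustrative only.
FIRM_HOSTS = {"localhost", "127.0.0.1", "llm.internal.example-firm.com"}

def is_self_hosted(endpoint: str) -> bool:
    """True only if the model endpoint resolves to firm infrastructure."""
    host = urlparse(endpoint).hostname
    return host in FIRM_HOSTS

print(is_self_hosted("http://llm.internal.example-firm.com/v1/chat"))  # True
print(is_self_hosted("https://api.openai.com/v1/chat"))                # False
```

A guard like this makes the contractual question moot: if the second layer cannot be reached, no contract with it is needed.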
Regulation · EU + US
The regulatory pipeline is arriving
"Show your work" — the rule, not the aspiration.
Three pieces are landing on the same architectural answer in 2026.
EU AI Act — high-risk-system obligations bind from 2 August 2026. Article 13 requires transparency about how the system reaches its outputs. Article 12 mandates automatic event logging. Article 14 requires meaningful human oversight, not nominal. Each is satisfied automatically by an audit chain that signs every delegated decision and can be replayed end-to-end.
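The logging-and-replay obligation can be illustrated with a minimal hash-chained event log — a sketch under stated assumptions (HMAC-SHA256 over a firm-held key, JSON-serialised events), not DONNA's actual audit-chain format:

```python
import hashlib
import hmac
import json

def append_event(chain, key, event):
    """Append an event whose signature covers the previous signature,
    so altering any past entry breaks every downstream link."""
    prev = chain[-1]["sig"] if chain else "genesis"
    payload = json.dumps(event, sort_keys=True)
    sig = hmac.new(key, (prev + payload).encode(), hashlib.sha256).hexdigest()
    chain.append({"event": event, "sig": sig})
    return chain

def replay(chain, key):
    """Re-derive every signature from the start; True only if nothing was altered."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hmac.new(key, (prev + payload).encode(), hashlib.sha256).hexdigest()
        if entry["sig"] != expected:
            return False
        prev = entry["sig"]
    return True

key = b"firm-held-signing-key"  # hypothetical key material
log = append_event([], key, {"actor": "assistant", "action": "draft", "doc": "letter-17"})
append_event(log, key, {"actor": "solicitor", "action": "approve", "doc": "letter-17"})
print(replay(log, key))            # True: chain verifies end-to-end
log[0]["event"]["action"] = "send"  # tamper with history
print(replay(log, key))            # False: replay detects the alteration
```

The replay function is the point: the same mechanism that satisfies Article 12's logging requirement yields the end-to-end reconstruction Article 13 asks for.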
California Rules of Professional Conduct — the State Bar has proposed amendments to six rules requiring attorneys to document analysis, review, and transparency when using AI in legal work. The proposed rules don't ask lawyers to think differently about AI — they ask for the artefacts (analysis + review + transparency) that an IDR audit chain produces as a side effect.
Personal-sanctions precedent already exists. A US Magistrate Judge sanctioned a lawyer for AI-fabricated citations in May 2026 — a marker that supervisory failure on AI is now a discipline matter, not an abstract guideline.
DONNA is built for these. The IDR primitive is the proof the firm issues to itself, automatically.
Sources:
EU AI Act (Reg. 2024/1689) · State Bar of California (proposed RPC amendments, May 2026) · This Week in Legal AI (Veronica Lopez, 6 May 2026)