EU AI Act
Article 13
Audit chain
On 2 August 2026, the EU AI Act's high-risk obligations begin to apply. Article 13's transparency duty is widely paraphrased as a "show your work" requirement. The phrase is correct, but the architectural answer is more specific than most law firms have noticed: a signed, replayable record per AI-assisted decision, plus a verifier the regulator can run independently. A policy document does not satisfy this, and the gap matters.
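To make the shape of that requirement concrete, here is a minimal sketch of a signed, replayable decision record with an independent verifier. Everything in it is an illustrative assumption: the field names, the record schema, and the use of HMAC-SHA256 (a real system would likely use asymmetric signatures such as Ed25519, so the verifier needs no shared secret) are stand-ins, not anything the Act or any named product prescribes.

```python
"""Hypothetical sketch: one signed, replayable record per AI-assisted
decision, and a verifier a regulator could run independently."""
import hashlib
import hmac
import json

# Assumption: a shared signing key. A production design would use an
# asymmetric keypair so the verifier holds only the public key.
SIGNING_KEY = b"demo-key-do-not-use-in-production"


def sign_record(record: dict, key: bytes = SIGNING_KEY) -> dict:
    """Serialise the record canonically and attach an integrity tag."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": tag}


def verify_record(signed: dict, key: bytes = SIGNING_KEY) -> bool:
    """Regulator-side replay: recompute the tag, compare in constant time."""
    payload = json.dumps(
        signed["record"], sort_keys=True, separators=(",", ":")
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])


# A decision record captures who decided what, on what evidence.
record = {
    "decision_id": "2026-08-02/0001",
    "model": "example-model",
    "prompt_hash": hashlib.sha256(b"...").hexdigest(),
    "evidence_ids": ["doc-17", "doc-42"],
    "output": "claim allowed",
}
signed = sign_record(record)
assert verify_record(signed)                 # untampered record replays cleanly
signed["record"]["output"] = "claim denied"  # alter the decision after the fact
assert not verify_record(signed)             # the verifier catches the change
```

The point of the canonical serialisation (`sort_keys`, fixed separators) is that the regulator's re-serialisation byte-matches the signer's, so verification needs no trust in the original system.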
Open source
Stack
Convergence
In May 2026, the open-source legal-AI stack stopped being a list of products and started being a layered architecture. Mike for documents. vLLM for inference. CloseVector for retrieval audit. DONNA for delegation audit. happi.md for the protocol. Five layers, one architectural question — who decided what, on what evidence, from what corpus — and an answer that is converging in real time.
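One way to see the convergence is to sketch the audit event that would flow through those five layers. This is purely illustrative: none of the named projects publishes this schema, and every field name and layer mapping below is an assumption about how the pieces could fit together.

```python
"""Hypothetical sketch of a single audit event spanning the five layers.
The field-to-project mapping is an assumption, not a published schema."""
from dataclasses import dataclass, asdict


@dataclass
class AuditEvent:
    document_id: str        # document layer (e.g. Mike)
    inference_backend: str  # inference layer (e.g. vLLM)
    retrieved_chunks: list  # retrieval-audit layer (e.g. CloseVector)
    delegated_by: str       # delegation-audit layer (e.g. DONNA)
    protocol_version: str   # protocol layer (e.g. happi.md)


event = AuditEvent(
    document_id="contract-2026-051",
    inference_backend="vllm/0.x",
    retrieved_chunks=["corpus/clause-7", "corpus/clause-12"],
    delegated_by="associate:jdoe",
    protocol_version="happi/1.0",
)

# The single architectural question maps onto the fields directly:
#   who decided       -> delegated_by + inference_backend
#   what              -> document_id
#   on what evidence  -> retrieved_chunks
#   from what corpus  -> the corpus prefixes inside retrieved_chunks
assert asdict(event)["delegated_by"] == "associate:jdoe"
```

Each layer answers one clause of the question; the protocol layer's job is just to keep the five answers attached to the same event.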
Munir
Privilege
Architecture
Some readers of Munir v SSHD [2026] UKUT 81 have moved from "public AI services destroy privilege" to "open-source AI destroys privilege" in one step. This is a category error driven by an ambiguity in the phrase "open source". The judgment is about where data goes, not what licence the code carries. The architectural answer to Munir is open code, self-hosted inference, no public egress. DONNA is built for that answer.