
August 2026: what Article 13 actually asks of legal AI.

On 2 August 2026, the EU AI Act's obligations on high-risk AI systems take effect. Article 13 — the transparency duty — is widely paraphrased as a "show your work" requirement. The phrase is correct, but the architectural answer is more specific than most law firms have noticed. The instrument that satisfies Article 13 is not a policy document. It is a signed, replayable record per AI-assisted decision.

Most of the commentary I've seen on Article 13 over the past month treats it as a documentation problem — write longer instructions for use, publish a model card, attach a risk register to the procurement file. Those things are necessary. They are not sufficient. The text of the Article, read against the logging duty in Article 12 and the post-market monitoring duty in Article 72, asks for something that no policy document on its own can produce: an artefact, per use, that lets a regulator reconstruct what the system did and why.

What Article 13 actually says

Article 13(1) requires that high-risk AI systems be "designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately." Article 13(3) then enumerates instructions for use that must travel with the system: provider details, intended purpose, capabilities and limitations, accuracy metrics, foreseeable risks, human-oversight measures, expected lifetime, and the data logs required to interpret outputs.

The clause that is doing the most work — and is the most often skimmed — is the last one: "the data logs required to interpret outputs." Read together with Article 12 ("Record-keeping"), which requires that high-risk systems "technically allow for the automatic recording of events ('logs') over the lifetime of the system", and Article 72, which puts a continuing post-market monitoring duty on providers, the regulatory ask becomes structural rather than narrative.

EU AI Act · Reg. 2024/1689 · Art. 12 + 13

The combined obligation

A high-risk system must, by design, produce records that allow a deployer (and, in due course, a competent authority) to reconstruct an AI-assisted decision after the fact — including the input it received, the output it produced, the configuration under which it was operating, and the human-oversight signal applied at the point of use.

Penalty band: non-compliance with obligations such as Article 13 carries fines of up to €15M or 3% of global annual turnover, whichever is higher; the Act's top band, up to €35M or 7%, is reserved for prohibited practices (Art. 99).

Why a policy document does not satisfy this

A written AI-use policy describes what the firm intended. Article 13 asks what the system did. These are not the same artefact, and the gap between them is where regulator-side disputes will be decided over the next two to five years. A policy that says "we never upload privileged material to public AI" does nothing to demonstrate, on a given Tuesday afternoon at 14:37, that the AI assistant actually routed a particular client question to a self-hosted model rather than to a third-party API.

The artefact that closes that gap is, in plain terms, a per-decision record. It records the input, the model called, the parameters in force, the human who reviewed it, and a cryptographic chain back to the previous record so that nothing can be silently inserted, deleted, or reordered. The academic literature calls this a tamper-evident audit log; the recent ISO and NIST drafts call it model provenance; the engineering literature calls it a hash chain. They are the same construct.
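
To make the construct concrete, here is a minimal hash-chain sketch in Python. The field names are mine and purely illustrative, not drawn from any published schema; the point is only that each record's digest incorporates the previous record's digest, so nothing earlier can change without breaking everything later.

    import hashlib
    import json

    GENESIS = "0" * 64  # fixed digest marking the start of the chain

    def chain_digest(record: dict, prev_digest: str) -> str:
        # Digest over the record's canonical JSON plus the previous digest.
        # Editing any earlier record changes its digest, which invalidates
        # every digest after it: the tamper-evident property.
        payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256((prev_digest + payload).encode()).hexdigest()

    r1 = {"input": "client question", "model": "self-hosted-llm", "reviewer": "associate-7"}
    d1 = chain_digest(r1, GENESIS)
    r2 = {"input": "follow-up", "model": "self-hosted-llm", "reviewer": "associate-7"}
    d2 = chain_digest(r2, d1)

    # A silent edit to the first record breaks its digest and, with it, d2.
    r1["model"] = "third-party-api"
    assert chain_digest(r1, GENESIS) != d1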

The architectural answer: a signed decision record

DONNA's name for this artefact is the IDR — Intent Decision Record. Each delegated decision the assistant takes produces an IDR: a small structured object containing the input, the chosen action, the model used, the human-oversight verdict, a timestamp, and an HMAC-SHA256 signature that incorporates the previous IDR's signature. The records form a chain. Tamper with any earlier record and every later signature breaks. A regulator with the firm's verifier key can replay the chain and confirm — or deny — that the decisions occurred as the firm describes.
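
What follows is a minimal sketch of the signing and replay logic, assuming JSON canonicalisation and hypothetical field names. It illustrates the HMAC-chained construction described above, not DONNA's actual implementation.

    import hashlib
    import hmac
    import json

    def sign_idr(key: bytes, idr: dict, prev_sig: str) -> str:
        # HMAC-SHA256 tag over the record's canonical JSON and the
        # previous tag, so the records form a chain.
        payload = json.dumps(idr, sort_keys=True, separators=(",", ":"))
        return hmac.new(key, (prev_sig + payload).encode(), hashlib.sha256).hexdigest()

    def verify_chain(key: bytes, records: list, genesis: str) -> bool:
        # Replay: recompute each tag from the stored record and the previous
        # tag, compare in constant time. Tampering fails from that point on.
        prev = genesis
        for idr, stored_sig in records:
            if not hmac.compare_digest(sign_idr(key, idr, prev), stored_sig):
                return False
            prev = stored_sig
        return True

    KEY = b"firm-held key"  # NB: HMAC is symmetric; see the note below
    GENESIS = "0" * 64

    idr = {
        "input": "summarise the disclosure bundle",
        "action": "draft_summary",
        "model": "self-hosted-llm-v3",
        "oversight": "approved:associate-7",
        "ts": "2026-05-09T14:37:00Z",
    }
    sig = sign_idr(KEY, idr, GENESIS)
    assert verify_chain(KEY, [(idr, sig)], GENESIS)

One caveat worth stating plainly: HMAC is a symmetric primitive, so a "verifier key" in this construction is the signing key, and anyone who can verify can also forge. That is an acceptable trade-off for a trusted auditor; a deployment that wants a regulator's verification to carry independent weight would swap the HMAC for an asymmetric signature, leaving the chain structure unchanged.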

This is not a DONNA invention. The HMAC-chained log construction is at least three decades old in the security literature, and it is increasingly the default in regulated AI workflows. A recent arXiv preprint (Nov 2025) generalises the construction as "constant-size cryptographic evidence structures" for AI workflows. VeritasChain argues, more pointedly, that AI accountability without cryptographic verification is "the security equivalent of an honour system." California's AI Transparency Act, also effective August 2026, points at the same primitive from the consumer-protection direction.

The convergence is not coincidental. When two unrelated regulators reach for the same cryptographic instrument in the same year, the instrument is no longer optional infrastructure for the regulated industry. It is the surface the regulator will inspect.

What this means for a law firm in May 2026

Three honest questions for the AI-procurement file, with twelve weeks to go:

  1. Per decision, can you produce a record? Not a screenshot, not a policy reference — a structured object that names the input, the model, the parameters, the human reviewer, and a signed link to the previous decision. If the answer is "the vendor probably keeps logs somewhere", that is a procurement gap.
  2. Can a third party verify the record without your firm's cooperation? If verification depends on a closed-source vendor confirming that the log is authentic, the regulator has been asked to trust the firm and the vendor. Article 13's transparency duty is undermined by that arrangement, even if the underlying records are honest.
  3. Does the record survive a vendor change? A firm that switches vendors in 2027 should not lose its 2026 audit chain. Open-protocol records — like the one DONNA writes, specified in happi.md v1.1 — are portable across implementations; a sketch of what such a portable record can look like follows this list.
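
For the third question, the shape of the record matters more than the vendor's UI. The sketch below is a purely hypothetical serialisation — the field names are mine, not the happi.md v1.1 wire format — but it shows the property being probed: everything a third party needs is in the bytes themselves plus a verifier key, with no vendor API in the loop.

    import json

    # Hypothetical portable decision record. Field names are illustrative;
    # the actual happi.md v1.1 format may differ.
    record = {
        "v": 1,
        "input": "may we rely on the 2019 award?",
        "model": "self-hosted-llm-v3",
        "params": {"temperature": 0.2},
        "reviewer": "partner-3",
        "ts": "2026-05-09T14:37:00Z",
        "prev_sig": "…",  # signed link back to the previous decision
        "sig": "…",
    }

    # Portability test: a round-trip through plain JSON loses nothing, so
    # the record survives a vendor change and can be verified offline.
    wire = json.dumps(record, sort_keys=True)
    assert json.loads(wire) == record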

The shape of the August deadline

August 2026 is being talked about, in much of the legal-tech press, as a date by which firms must "have an AI policy." That framing is incomplete. The instrument the Act actually asks high-risk providers and deployers to put in front of a regulator is not a policy. It is evidence, per decision, that the policy was followed. Firms that get the policy in place but cannot produce the per-decision artefact will discover, in the first contested matter or first regulator inquiry under the new regime, that the gap matters.

The narrower point is this: the "show your work" framing of Article 13 is correct, and it is a stronger requirement than most readings suggest. The architectural answer is well understood: signed records, chained, replayable, with a verifier the regulator can run independently. It is also the instrument Munir implicitly asks for — disclosure to a tribunal of what the AI actually did, not assurances about what it usually does. The regulatory and the forensic demands converge on the same primitive.

If your firm is preparing for August. The honest test of any candidate AI tool, in the next twelve weeks, is whether it can hand you — today, on demand — a signed record of a single delegated decision with a chain back to the previous one. If it can, the rest of the procurement file falls into place around it. If it cannot, the policy document is doing work the architecture should be doing.

Donna probat.
Craig Miller · 9 May 2026 · Cape Town · Zurich