Verifiable AI actions: the proof layer
What it is: A verifiable AI actions proof layer is cryptographic evidence—signatures, commitments, freshness guarantees—that lets an independent verifier confirm a specific payment attempt matched policy, not that a model “sounded confident.”
Below: commitments, signatures, freshness, and what a dispute reviewer can actually recompute—without trusting chat transcripts.
“The model said yes” is not evidence. Neither is a server log line that an action occurred. Proof, in this sense, means a verifier who was not in the room can recompute: these bytes were signed under these keys, this attempt matched these constraints, this nonce had not been spent before. That is the bar for agentic payments when humans are out of the loop.
What gets signed, what a verifier recomputes
- Canonicalise intent — Normalise merchant, amount, currency, and line items into a canonical byte sequence.
- Bind policy — Attach policy version and delegation identifier.
- Sign — Issuer or delegate signer produces a signature over the commitment.
- Transmit — Agent presents proof bundle + attempt to merchant and network.
- Verify — Independent verifier repeats canonicalisation and signature checks with issuer keys.
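The pipeline above can be sketched in a few lines. This is a minimal illustration, not a production design: the canonical form here is sorted-key JSON, and an HMAC stands in for the issuer's asymmetric signature; the key, intent fields, and policy identifiers are all hypothetical.

```python
import hashlib
import hmac
import json

def canonicalise(intent: dict) -> bytes:
    """Deterministic byte sequence: sorted keys, no extraneous whitespace."""
    return json.dumps(intent, sort_keys=True, separators=(",", ":")).encode()

def commit(intent: dict, policy_version: str, delegation_id: str) -> bytes:
    """Bind the canonical intent to a policy version and delegation identifier."""
    payload = canonicalise(intent) + b"|" + policy_version.encode() + b"|" + delegation_id.encode()
    return hashlib.sha256(payload).digest()

def sign(key: bytes, commitment: bytes) -> bytes:
    # HMAC is a stand-in here; a real issuer would use an asymmetric scheme.
    return hmac.new(key, commitment, hashlib.sha256).digest()

def verify(key: bytes, intent: dict, policy_version: str, delegation_id: str, sig: bytes) -> bool:
    """The independent verifier repeats canonicalisation and checks the signature."""
    expected = sign(key, commit(intent, policy_version, delegation_id))
    return hmac.compare_digest(expected, sig)

key = b"issuer-demo-key"
intent = {"merchant": "acme", "amount": 1999, "currency": "EUR"}
sig = sign(key, commit(intent, "policy-v3", "dlg-42"))
assert verify(key, intent, "policy-v3", "dlg-42", sig)
# A different cart under the same signature fails verification.
assert not verify(key, {**intent, "amount": 99999}, "policy-v3", "dlg-42", sig)
```

Note that the verifier needs only the key material and the bytes the network saw; nothing depends on the agent operator's logs.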
Where current systems fail
Logs of model outputs are not proof. Screenshots are not proof. Without cryptographic binding, “the agent did it” is not reconstructable in arbitration.
Risks and attack surfaces
- Mutable logs — If proofs are not tamper-evident, disputes revert to narrative.
- Replay — Same proof reused for a different cart.
- Key compromise — Signing keys must support rotation and revocation.
If your “proof” is exportable only through a vendor dashboard, you have operations tooling, not a proof layer. The test is whether someone with keys and public parameters can disagree with your narrative using the same bytes the network saw.
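The replay risk above comes down to a freshness check: a nonce bound into the proof must be accepted at most once. A minimal sketch, assuming an in-memory registry (a real deployment would need durable, shared state):

```python
import secrets

class NonceRegistry:
    """Tracks spent nonces so a proof bundle cannot authorise a second cart."""

    def __init__(self) -> None:
        self._spent: set[str] = set()

    def accept(self, nonce: str) -> bool:
        if nonce in self._spent:
            return False  # replay: this proof was already consumed
        self._spent.add(nonce)
        return True

registry = NonceRegistry()
nonce = secrets.token_hex(16)
assert registry.accept(nonce) is True   # first presentation succeeds
assert registry.accept(nonce) is False  # replay of the same proof is rejected
```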
How verification or authorization is enforced
Authorization decisions consume verified proofs. The proof layer does not replace issuer policy—it supplies evidence that the attempted action matches the signed constraints.
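A hypothetical issuer-side decision function makes the division of labour concrete: the proof layer establishes *what was attempted*, and policy still decides the outcome. The function name, limits, and return strings below are illustrative, not from any specific system.

```python
def authorize(proof_verified: bool, amount: int, signed_limit: int) -> str:
    """Consume a verified proof; policy, not the proof, makes the decision."""
    if not proof_verified:
        return "decline: proof failed verification"
    if amount > signed_limit:
        return "decline: exceeds signed constraint"
    return "approve"

assert authorize(True, 1999, 5000) == "approve"
assert authorize(True, 9999, 5000).startswith("decline")
assert authorize(False, 10, 5000).startswith("decline")
```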
Where stateless verification applies
Verification nodes recompute validity from keys, proofs, and public parameters—no private user database required at the verifier.
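Statelessness here means the verifier is a pure function of public inputs. A sketch, again with HMAC standing in for a published issuer key: any node given the same key, commitment, and signature reaches the same verdict, with no per-user database lookup.

```python
import hashlib
import hmac

def stateless_verify(issuer_key: bytes, commitment: bytes, signature: bytes) -> bool:
    """Pure function of public inputs: no session state, no user records."""
    expected = hmac.new(issuer_key, commitment, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"published-issuer-key"
commitment = hashlib.sha256(b"canonical-intent-bytes").digest()
sig = hmac.new(key, commitment, hashlib.sha256).digest()
# Two independent verifier "nodes" evaluate the same inputs and agree.
assert stateless_verify(key, commitment, sig)
assert stateless_verify(key, commitment, sig)
```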
How AffixIO approaches this
AffixIO focuses on proof bundles that third parties can verify with published keys and clear canonicalisation rules. The emphasis is on reducing “trust us” surfaces: if a dispute arrives, the artefacts should speak without calling the vendor’s internal database.
- Signature-first — Commitments and policy references are part of what gets signed, not optional metadata.
- Minimal disclosure paths — Merchants see what they must; verifiers do not need full user profiles to validate structure.
- Replay-aware by default — Freshness and nonce checks are part of the proof story, not a separate microservice bolt-on.
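One way to picture the three bullets together is the shape of the bundle itself. The structure below is a hypothetical sketch (field names are illustrative, not an AffixIO schema): everything a third party needs travels with the bundle, and nothing requires a call back to the vendor.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProofBundle:
    """Illustrative proof bundle: only what an independent verifier needs."""
    commitment: bytes    # hash of the canonical intent bytes
    policy_ref: str      # policy version bound into the signature
    delegation_id: str   # which delegation authorised this attempt
    nonce: str           # freshness / replay protection
    signature: bytes     # issuer or delegate signature over the fields above

bundle = ProofBundle(
    commitment=b"\x11" * 32,
    policy_ref="policy-v3",
    delegation_id="dlg-42",
    nonce="a1b2c3",
    signature=b"\x22" * 32,
)
```

Because the dataclass is frozen, a bundle cannot be mutated after construction, mirroring the tamper-evidence requirement.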
Where this fits in agentic commerce
- Issuer: signs or co-signs high-value commitments; publishes keys.
- Merchant: validates proof before fulfilment.
- APIs: machine-to-machine presentation of proof bundles.
- Edge / offline: deferred verification with bounded risk windows.
What this system does not solve
Proofs do not prove the model was “correct,” only that outputs were bound to policy. They do not stop social engineering of the user at delegation time.
Frequently asked questions
What counts as a proof layer?
The set of cryptographic commitments, signatures, freshness checks, and policy references that let an independent party verify an action without trusting the agent’s operator.
Can verification preserve user privacy?
Yes. Disclosure can be minimal: commitments and selective disclosure designs can limit what merchants see while preserving verifier checks.
What should audit logs store?
Audit logs should store references to verifier inputs and outputs, not narrative text alone. See audit trails for decision systems.
Further reading
Implement stateless verification
Request a technical walkthrough or integration review.