Article · 3 May 2026

Inside the Trust Agent certificate. What a 27-check AI-agent audit actually checks.

Trust Agent ships every certified agent with a cryptographically verifiable certificate. The certificate is independently auditable: nothing about it depends on trusting Trust Agent itself. This is the format, the 27 checks the agent had to pass, and the verification routine you can run on a Trust Agent certificate from the command line, without an internet connection.

Author: Micky Irons
Published: 3 May 2026
Tags: Trust Agent · AI Security · Sovereign AI · Cryptographic Audit · MCP

If a marketplace says its agents are audited, and you cannot verify the audit yourself, the marketplace is the only thing you are trusting. Trust Agent is built so the marketplace is not the trusted party. Every certified agent ships with a certificate, and the certificate is structured so that any user, on any machine, with no internet connection, can verify the chain of evidence end to end. This article is the format and the verification recipe.

Why a certificate at all

Public MCP marketplaces in 2026 ship agents with three claims: a name, a description, and a star rating. None of those claims is signed. None is bound to the actual code that ran through the marketplace's review queue. None survives a silent update to the agent's binary. The user who installed agent v1.4.2 on Tuesday cannot prove on Friday that what they installed is what is now running. The marketplace can rotate the binary under the same name, the same description, and the same rating, with zero notification. The previous Mickai article (mickai.co.uk/articles/mcp-marketplaces-shipped-lolbas-malware) walked the LOLBAS malware case caught inside that exact failure mode. The certificate is the structural answer.

A Trust Agent certificate is a small JSON document, byte-stable, signed under an Ed25519 key, content-addressed, and chained into a public hash-linked ledger. It binds the agent's commit hash to the audit-run identifier, the 27-check result vector, the timestamp, the auditor identity, and the prior ledger record. It is small enough to ship inside the agent's package metadata, and structured enough that the verification routine fits in roughly fifty lines of Python or Rust. The user runs the routine; the marketplace plays no role in the verification.

The certificate, field by field

  • agent_id: stable canonical identifier for the agent. Slug-style, namespaced. Example: trust-agent.law-uk.paralegal-property-conveyancing.
  • agent_version: semver-compatible version string. Bound to the next field, never trusted on its own.
  • agent_commit: SHA-256 hash of the source tree at audit time. The single source of truth. If the user's installed binary does not hash to this value, the certificate does not apply to the running agent and is invalid for that install.
  • audit_run_id: opaque identifier for the audit run that produced this certificate. Maps to a public Trust Agent ledger record. Each audit run produces exactly one certificate.
  • audited_at: ISO-8601 UTC timestamp of the audit run. Used for staleness checks; certificates older than 90 days require a re-audit before re-publication.
  • checks: an ordered vector of 27 booleans, one per check, in the canonical order described below. The hash of this vector is bound into the signature, so a single bit flip invalidates the certificate.
  • checks_version: integer version of the 27-check schema. Increments when the schema is extended. The verifier refuses to validate against a schema version it does not recognise; this is an intentional fail-closed design.
  • auditor: canonical identifier of the audit pipeline that ran (currently mickai.audit.v3). Bound under the same key, so the user can refuse certificates from auditors they do not recognise.
  • prev_ledger_hash: SHA-256 of the prior Trust Agent ledger record. Forms the hash chain that prevents retroactive rewriting of the audit history.
  • ledger_seq: monotonically increasing integer ledger sequence number. The ledger is append-only and the sequence is contiguous; gaps are detectable.
  • signature: Ed25519 signature over the canonical serialisation of every preceding field, made under the auditor's per-version signing key.
  • auditor_pubkey: the Ed25519 public key the verifier should validate the signature against. Pinned per auditor, rotated under attestation, never silently changed.
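Putting the fields together, a certificate looks like the following. Every value here is invented for illustration (the agent_id is the example from the field list above); placeholders stand in for real hashes, signatures, and keys, and the field order follows the canonical order just described.

```json
{
  "agent_id": "trust-agent.law-uk.paralegal-property-conveyancing",
  "agent_version": "1.4.2",
  "agent_commit": "<64-hex SHA-256 of the audited source tree>",
  "audit_run_id": "<opaque audit-run identifier>",
  "audited_at": "2026-05-01T09:14:03Z",
  "checks": [true, true, true, true, true, true, true, true, true,
             true, true, true, true, true, true, true, true, true,
             true, true, true, true, true, true, true, true, true],
  "checks_version": 1,
  "auditor": "mickai.audit.v3",
  "prev_ledger_hash": "<64-hex SHA-256 of the prior ledger record>",
  "ledger_seq": 4182,
  "signature": "<base64 Ed25519 signature over the preceding fields>",
  "auditor_pubkey": "<base64 Ed25519 public key, pinned per auditor>"
}
```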

The 27 checks, by category

The 27 checks group into seven categories. The numbers in parentheses are the count of checks in each category. The full vector position of every individual check is fixed in the schema; auditors cannot rearrange them.

Category 1: Static analysis (5 checks)

  • 1.1 Tool definitions are well-formed and the declared signatures match the implementations.
  • 1.2 No spawned binary on the LOLBAS catalogue (lolbas-project.github.io) is invoked by any tool, directly or via shell, base64, or runtime assembly.
  • 1.3 No process spawn passes encoded arguments to powershell, cmd, certutil, mshta, regsvr32, msbuild, wmic, or any other LOLBAS-list binary, even when the binary is referenced by a relative path.
  • 1.4 The package declares no native or postinstall hook that mutates the host outside the package directory.
  • 1.5 The dependency graph contains no package on the Mickai supply-chain blocklist (covering known typosquats, hijacked maintainer accounts, and packages whose attestations have been revoked).
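Checks 1.2 and 1.3 reduce to a static scan of every recorded process spawn. A minimal sketch, assuming a local copy of the LOLBAS name list; the catalogue subset, the encoded-argument pattern, and the function name below are invented for illustration, not the production pipeline's rules:

```python
import re

# Hypothetical subset of the LOLBAS catalogue (lolbas-project.github.io);
# the real audit uses the full list plus the Mickai blocklist.
LOLBAS_NAMES = {"powershell", "cmd", "certutil", "mshta",
                "regsvr32", "msbuild", "wmic"}

# PowerShell-style encoded-argument flags (-enc / -EncodedCommand).
ENCODED_FLAG = re.compile(r"^[-/]enc(odedcommand)?$", re.IGNORECASE)

def spawn_violations(command_line: str) -> list[str]:
    """Check one recorded process spawn against checks 1.2 and 1.3."""
    argv = command_line.split()
    if not argv:
        return []
    # Normalise away directory components and .exe so relative paths
    # like .\powershell.exe still match the catalogue name.
    binary = argv[0].replace("\\", "/").rsplit("/", 1)[-1]
    binary = binary.lower().removesuffix(".exe")
    violations = []
    if binary in LOLBAS_NAMES:
        violations.append("lolbas_binary")          # check 1.2
        if any(ENCODED_FLAG.match(a) for a in argv[1:]):
            violations.append("encoded_arguments")  # check 1.3
    return violations
```

The relative-path normalisation is the important part: a spawn of `.\powershell.exe` must fail the audit exactly as `powershell` does.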

Category 2: Behavioural simulation (5 checks)

  • 2.1 The agent runs against a synthetic prompt corpus inside a copy-on-write shadow workspace; no syscalls escape the shadow boundary.
  • 2.2 No syscall in the trace matches the destructive-pattern corpus Sentinel uses (rm -rf against home or system paths, git --force / --include-untracked, --accept-data-loss, cloud-platform terminate-instances and delete-stack, kubectl delete against production identifiers, SQL DROP and TRUNCATE against unconfirmed targets).
  • 2.3 The agent does not attempt to write to any path that resolves outside its declared workspace, even via symlink or junction-point traversal.
  • 2.4 The agent does not attempt to read from any path containing common secret stores (.aws, .ssh, .gnupg, Windows DPAPI vaults, macOS Keychain) unless those stores are declared in the agent manifest.
  • 2.5 The agent does not exhibit observable backoff, sleep, or behavioural divergence when the trace is being recorded versus when it is not (anti-evasion check).
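Check 2.2 is a match of the recorded trace against a destructive-pattern corpus. A hedged sketch, using a small invented subset of the patterns the list above describes; the production Sentinel corpus is larger and not reproduced here:

```python
import re

# Illustrative subset of the destructive-pattern corpus for check 2.2.
DESTRUCTIVE_PATTERNS = [
    re.compile(r"\brm\s+-[a-z]*r[a-z]*f[a-z]*\s+(/|~|\$HOME)"),  # rm -rf on root/home paths
    re.compile(r"\bgit\b.*--force\b"),                            # forced git operations
    re.compile(r"--accept-data-loss\b"),
    re.compile(r"\b(terminate-instances|delete-stack)\b"),        # cloud teardown verbs
    re.compile(r"\bkubectl\s+delete\b.*\bprod"),                  # deletes against production
    re.compile(r"\b(DROP|TRUNCATE)\s+TABLE\b", re.IGNORECASE),
]

def is_destructive(trace_line: str) -> bool:
    """True if one line of the recorded trace trips check 2.2."""
    return any(p.search(trace_line) for p in DESTRUCTIVE_PATTERNS)
```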

Category 3: Network surface (4 checks)

  • 3.1 Every outbound TLS endpoint is on the agent's declared allowlist; any unsanctioned destination fails the audit even if reached via an IP literal.
  • 3.2 No payload pattern matches data-exfiltration heuristics (no large opaque uploads, no chunked covert channel timings, no unusual content-type-with-binary mismatches).
  • 3.3 No DNS lookup resolves to a domain on the Mickai threat feed (covering known C2 infrastructure, malicious CDN abuse, recently registered short-lived domains used for one-shot deliveries).
  • 3.4 No outbound request includes credentials, tokens, or environment-variable values verbatim. Every observed credential pattern must be passing through Sentinel-style placeholder substitution.
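Check 3.1's allowlist rule has one subtlety worth showing: an IP literal never bypasses the list, even if it resolves to an allowed host. A minimal sketch; the allowlist entry and manifest shape here are invented for illustration:

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical declared allowlist from an agent manifest.
ALLOWLIST = {"api.example-agent-backend.com"}

def outbound_allowed(url: str) -> bool:
    """Apply check 3.1 to one observed outbound destination."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return False  # IP literals fail outright; only named hosts can match
    except ValueError:
        pass          # not an IP literal; fall through to the name check
    return host in ALLOWLIST
```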

Category 4: Secret hygiene (4 checks)

  • 4.1 No secret pattern (Luhn-validated card, AWS / GCP / Azure access keys, Stripe keys, Slack tokens, GitHub PATs, JWTs) is present in any tool definition or static fixture.
  • 4.2 No tool reads .env files unless the manifest declares the read and the audit confirms the read is followed by placeholder substitution before any outbound call.
  • 4.3 No tool writes secrets to logs, telemetry, error reports, or stdout in a recoverable form.
  • 4.4 No tool stores or caches credentials beyond the lifetime of the invoking session.
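Check 4.1 combines pattern matching with Luhn validation for card candidates. A sketch with a few illustrative detectors; the real audit covers many more issuers (GCP, Azure, Stripe, Slack, JWTs), and the key in the test below is AWS's published documentation example, not a live credential:

```python
import re

# A few illustrative secret-pattern detectors for check 4.1.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the name of every secret pattern found in a fixture."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to confirm a card-number candidate before
    flagging it (reduces false positives on arbitrary digit runs)."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:        # double every second digit from the right
            n = n * 2 - 9 if n > 4 else n * 2
        total += n
    return total % 10 == 0
```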

Category 5: Reproducibility (3 checks)

  • 5.1 The agent builds reproducibly from the declared commit. Two independent builds from the same commit produce byte-identical artefacts after the documented stripping pass.
  • 5.2 The package lockfile is present, internally consistent, and the resolved dependency tree matches the declared lockfile to the byte.
  • 5.3 The audit run records the full toolchain version set; downstream verifiers can reproduce the audit using the same toolchain.
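Both check 5.1 and the agent_commit field depend on a deterministic hash of a source tree. One way to compute one, as a sketch; Trust Agent's actual canonicalisation rules (treatment of symlinks, file modes, the documented stripping pass) are not specified here, so this is an assumption about the shape of the scheme, not its exact bytes:

```python
import hashlib
from pathlib import Path

def tree_sha256(root: str) -> str:
    """Deterministic SHA-256 over a source tree: files are visited in
    sorted relative-path order and each contributes its path plus its
    bytes, so two byte-identical checkouts always hash the same."""
    h = hashlib.sha256()
    root_path = Path(root)
    for path in sorted(p for p in root_path.rglob("*") if p.is_file()):
        rel = path.relative_to(root_path).as_posix()
        h.update(rel.encode() + b"\x00")        # bind the filename
        h.update(path.read_bytes() + b"\x00")   # then the file contents
    return h.hexdigest()
```

Because the walk order and the path encoding are fixed, the same commit hashes identically on any machine, which is what lets Step 2 of the verification routine compare a local checkout against the certificate.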

Category 6: Manifest correctness (3 checks)

  • 6.1 The agent's declared capabilities (read scopes, write scopes, network destinations, tool surface) match what the behavioural simulation observed. Excess capability fails the audit. Under-declared capability also fails (agents must surface what they actually do).
  • 6.2 The agent declares its rate-limit posture (per-tool, per-minute) and the simulator confirms the limits are honoured under load.
  • 6.3 The agent declares its data-residency posture; if any outbound endpoint is not in the declared residency region, the audit fails.

Category 7: Maintainer and provenance (3 checks)

  • 7.1 The agent's source repository is signed under a maintainer key whose fingerprint Trust Agent has previously seen and not revoked.
  • 7.2 No commit in the audited tree references a commit author whose identity has been associated with revoked agents inside the Mickai threat catalogue.
  • 7.3 The publication account on the originating marketplace (where applicable) was active at audit time and not under any active reputation flag.

Verifying a certificate, end to end, offline

The verification routine takes three inputs: the certificate JSON, the auditor public-key bundle (shipped with the Trust Agent CLI, also published at trust-agent.ai/keys), and the agent binary or source tree. The routine produces one of three outcomes: VALID, INVALID (with a structured reason), or STALE (certificate is older than the rotation window for that auditor).

The verification steps

  • Step 1. Parse the certificate, refuse if checks_version is unrecognised.
  • Step 2. Compute SHA-256 of the local agent source tree (or the local installed binary, depending on what was audited). Compare to certificate.agent_commit. If mismatch, emit INVALID with reason agent_commit_mismatch and stop.
  • Step 3. Look up auditor_pubkey in the local key bundle. If not present or revoked, emit INVALID with reason auditor_unknown_or_revoked and stop.
  • Step 4. Reconstruct the canonical serialisation of every signed field in fixed order, hash with SHA-256, verify the Ed25519 signature against auditor_pubkey. If invalid, emit INVALID with reason signature_invalid and stop.
  • Step 5. (Optional, requires connectivity, but the cert is verifiable without this step.) Walk the prev_ledger_hash chain from the latest known ledger head down to this certificate's ledger_seq. If the chain does not link, emit INVALID with reason ledger_chain_break.
  • Step 6. Check audited_at against the auditor's declared rotation window. If older, emit STALE.
  • Step 7. Otherwise emit VALID with the audited_at timestamp and the 27-check vector for the user's review.

Step 5 is the only step that benefits from connectivity, and even there the chain walk can be done against a previously cached ledger head; the user never has to trust Trust Agent's live infrastructure to perform the verification. Steps 1 through 4 are sufficient to prove that this certificate was signed by an auditor the user trusts and that it applies to exactly the agent the user has installed. The hash chain walk in Step 5 promotes that proof from per-certificate to system-wide: it proves that the certificate was not silently retracted from the public ledger and that no later certificate has invalidated it.
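The offline portion of the routine (Steps 1 through 4, 6, and 7) can be sketched in a few dozen lines. This is a sketch under stated assumptions: the canonical serialisation is modelled as compact JSON of the signed fields in a fixed order (the real byte-stable encoding is not published), the recognised schema-version set is invented, and the Ed25519 primitive is injected as a callable so any library (PyNaCl, python-cryptography) can supply it:

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

# Assumed fixed field order for the canonical serialisation.
SIGNED_FIELDS = ["agent_id", "agent_version", "agent_commit", "audit_run_id",
                 "audited_at", "checks", "checks_version", "auditor",
                 "prev_ledger_hash", "ledger_seq"]
KNOWN_CHECKS_VERSIONS = {1}           # assumption: current schema version
ROTATION_WINDOW = timedelta(days=90)  # the 90-day staleness rule above

def verify_certificate(cert: dict, key_bundle: dict, local_tree_hash: str,
                       ed25519_verify) -> tuple[str, str]:
    """Steps 1-4, 6, 7 of the offline routine.
    `ed25519_verify(pubkey, message, signature) -> bool` is injected."""
    # Step 1: fail closed on an unrecognised schema version.
    if cert.get("checks_version") not in KNOWN_CHECKS_VERSIONS:
        return ("INVALID", "unknown_checks_version")
    # Step 2: the certificate must apply to exactly the installed agent.
    if cert["agent_commit"] != local_tree_hash:
        return ("INVALID", "agent_commit_mismatch")
    # Step 3: the auditor's key must be pinned locally and not revoked.
    key = key_bundle.get(cert["auditor"])
    if key is None or key.get("revoked") or key["pubkey"] != cert["auditor_pubkey"]:
        return ("INVALID", "auditor_unknown_or_revoked")
    # Step 4: canonical serialisation, SHA-256, Ed25519 signature check.
    canonical = json.dumps([cert[f] for f in SIGNED_FIELDS],
                           separators=(",", ":")).encode()
    digest = hashlib.sha256(canonical).digest()
    if not ed25519_verify(key["pubkey"], digest, cert["signature"]):
        return ("INVALID", "signature_invalid")
    # Step 6: staleness against the auditor's rotation window.
    audited = datetime.fromisoformat(cert["audited_at"].replace("Z", "+00:00"))
    if datetime.now(timezone.utc) - audited > ROTATION_WINDOW:
        return ("STALE", "past_rotation_window")
    # Step 7: valid; the caller surfaces the 27-check vector to the user.
    return ("VALID", cert["audited_at"])
```

The ledger walk of Step 5 would sit after Step 4, re-hashing each cached ledger record and following prev_ledger_hash down to this certificate's ledger_seq; it is omitted here because it needs the cached ledger, not the certificate alone.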

What this gives the user

Five things, in plain terms.

  • The agent the user installed is the agent the auditor saw. A silent update is detectable because the agent_commit no longer matches the running binary; the verifier emits INVALID and the user knows to re-pull.
  • The audit was run by an auditor the user has chosen to trust, not by an arbitrary marketplace party with reputation incentives the user cannot inspect.
  • The audit covered a fixed, schema-versioned set of checks that the user can read and reason about. The vector is exposed; nothing is hidden behind 'we audited it'.
  • The audit cannot be retroactively rewritten because the ledger is hash-chained and append-only. A revoked certificate is visible as a revocation entry, not as a deleted record.
  • The verification works offline. The user's air-gapped, regulated, or sovereign deployment is not coupled to Trust Agent's availability for the audit to be meaningful.

Where this sits in Mickai

Mickai is the sovereign AI operating system that runs entirely on the user's hardware under 21 filed UK patent applications and 675 cryptographically signed claims (UK Intellectual Property Office application UK00004373277, sole inventor Micky Irons). Trust Agent is the marketplace surface built on top of the Mickai audit pipeline. Sentinel, the Mickai sub-component covered in mickai.co.uk/articles/sentinel-stops-ai-agents-from-wiping-your-data, is the runtime perimeter that gates what an installed agent can actually do at execution time. Together they close the loop: nothing reaches a Mickai-protected machine without an auditable certificate, and nothing the agent does at runtime escapes the syscall-level perimeter. Mickai is held privately by its founder; the engagement model is direct.

Sovereign means verifiable without trusting the verifier's infrastructure. The cert is small. The chain is open. The user holds the proof.


Sources

  • Trust Agent: trust-agent.ai (256 audited agents across 20 industries, every certificate independently verifiable).
  • Mickai patent portfolio: mickai.co.uk/patents (21 filed UK patent applications, 675 signed claims, application UK00004373277).
  • LOLBAS Project: lolbas-project.github.io (canonical catalogue used by Static Analysis check 1.2).
  • Ed25519 signature scheme: RFC 8032.
  • Previous Mickai article: mickai.co.uk/articles/mcp-marketplaces-shipped-lolbas-malware (the LOLBAS find that motivated the certificate format).
Originally published at https://mickai.co.uk/articles/inside-the-trust-agent-certificate-27-check-audit. If you operate in a regulated sector or want sovereign AI on your own hardware, the audit form on mickai.co.uk is the entry point.