The 95% gap. Eight hundred data leaders confessed their AI cannot pass an audit. Mickai™ filed the engineering four weeks earlier.
On 6 May 2026 Dataiku and The Harris Poll published the Global AI Confessions Report: Data Leaders Edition, an eight-hundred-respondent survey of senior data officers across the United States, United Kingdom, France, Germany, the United Arab Emirates, Singapore, South Korea and Japan. Ninety five per cent admit they could not fully trace their AI decisions end-to-end. Five per cent could supply that trace to a regulator one hundred per cent of the time. The confessions are the description of an absence. The substrate that fills the absence is filed at the UK Intellectual Property Office under one British inventor, Micky Irons, between 30 March and 4 May 2026.
Ninety five per cent cannot trace, five per cent can prove
On 6 May 2026 Dataiku and The Harris Poll published the Global AI Confessions Report: Data Leaders Edition. Eight hundred and twelve senior data officers responded across the United States (203), the United Kingdom (102), France (101), Germany (103), the United Arab Emirates (100), Japan (103), Singapore (50) and South Korea (50). Every respondent worked at an enterprise above one billion dollars in annual revenue or its regional equivalent. The respondent universe was not a small-business sample, not a startup sample, not an early-adopter sample. It was the people inside the largest enterprises on the planet, asked to confess what they could not yet say in public.
The headline number is the cleanest summary of where enterprise AI sits in 2026. Ninety five per cent of data leaders admit they could not fully trace AI decisions end-to-end if a regulator asked them to produce the reasoning. Five per cent are confident they could supply that trace one hundred per cent of the time. Five per cent. The five per cent figure is not a research artefact. It is the percentage of the largest enterprises in the world that have an audit substrate worth presenting to an inspector. The remaining ninety five per cent are running AI agents in production with no defensible chain of evidence behind the decisions those agents make.
The confession sits inside an even more telling pattern. Eighty six per cent of organisations now rely on AI agents in daily operations, with forty two per cent embedding agents so deeply that dozens of core processes depend on them. The autonomy has already been deployed. The substrate that would make the autonomy auditable has not. The Mickai engineering portfolio, thirty one UK patent applications filed at the IPO under one British inventor between 30 March and 4 May 2026, is the substrate that closes the gap the report enumerates. The work was filed before the confessions were collected.
What eight hundred data leaders confessed
The report is structured as a series of admissions. Each admission, read against the engineering layer, maps to a primitive that already exists in filed claim language. The mapping is not editorial. It is mechanical. The report describes a vacuum and the substrate fills it.
Eighty per cent of leaders agree that an accurate but unexplainable AI decision is more dangerous to the organisation than a wrong but traceable one. The figure is the philosophical core of the report. Eight in ten data officers, surveyed across eight jurisdictions, prefer a system that gets a result wrong but allows the failure to be reconstructed over a system that gets a result right but cannot explain how. The preference is not academic. It is the lived position of the people responsible for defending AI deployments to boards, regulators and tribunals. Tracing a wrong answer is recoverable. An unexplainable answer is not.
Seventy two per cent of leaders nevertheless allow AI agents to make critical business decisions without explanation. Eighty one per cent say they would stake their jobs on those calls. The contradiction is the report's most uncomfortable confession. The same population that recognises traceability as more important than accuracy is operating systems that supply neither. Only nineteen per cent always require AI to show its work. Only five per cent require a human in the loop. The remaining majority are running on faith and the absence of an incident that exposes the gap.
Fifty nine per cent have already faced a business issue or crisis stemming from AI hallucinations or inaccuracies in the past year. The incidents are not theoretical. They are recent, they are recurring, and they are accelerating. Fifty two per cent admit they have delayed or blocked an agent rollout because the system could not be made explainable. Forty five per cent have abandoned an off-the-shelf agent because it failed to deliver. Fifty six per cent of CIOs and chief data officers expect to be personally blamed when the next incident lands. Forty six per cent expect to receive credit when an incident does not. The asymmetry of attribution is structural. The blame falls on the data officer because the data officer is the person who signed for the deployment.
Fifty eight per cent worry that AI-generated code creates hidden security vulnerabilities. Fifty five per cent fear AI agents could expose sensitive data to unauthorised parties. Sixty three per cent of US data leaders are concerned about giving AI access to proprietary data; the figure rises to ninety per cent in the United Kingdom and eighty seven per cent in Japan. Seventy seven per cent of data leaders globally already believe a competitor has deployed a stronger AI strategy than their own organisation. The competitive pressure is the accelerant. The substrate gap is the structural failure underneath.
Why this is structural, not procedural
Procedure cannot close a ninety five per cent traceability gap. Procedure produces governance committees, control catalogues, model cards and policy documents. Each artefact is valuable. None of them survives an inspection that asks for the cryptographic chain of evidence behind a particular decision a particular agent took at a particular moment. The audit gap is not the result of insufficient committees. It is the result of insufficient engineering at the layer where the agent commits an action.
An agent that does not emit a signed audit cannot be governed. An agent governed by a brochure is not governed. The market response to the gap, as the report documents, has been ISO 42001 certifications, NIST AI RMF self-attestations, EU AI Act readiness assessments, governance dashboards, and red-team reports. None of these artefacts is cryptographically verifiable by a third party. None of them survives a vendor acquisition. None is admissible in a tribunal as evidence of what the agent actually did at the moment of decision. The procedural layer cannot solve the engineering problem because the engineering problem is at a layer the procedural layer does not reach.
The five per cent of organisations that can supply a trace are doing something architecturally different from the ninety five per cent that cannot. The architectural difference is what the Mickai filings specify. Signed at commit, append-only and hash-linked, externally verifiable, vendor-neutral schema, trust domain externalised, gated at action rather than at session. Six properties. Each property is a filed UK patent application with a complete specification on the public register at the UK IPO.
How the substrate maps to the confessions
Mickai is the engineering answer. Thirty one filed UK patent applications under one named inventor, Micky Irons (Mickarle Sean Junior Wagstaff-Irons), recorded on the UK IPO public register at numbers GB2607309.8 to GB2610422.4. Nine hundred and fourteen formal claims. Each filing is a complete specification with description, claims, abstract, prior-art search, drawings and Form-1 metadata. Each claim composition addresses a specific confession the Dataiku population reported. The mapping below is a representative subset; the full portfolio is at mickai.co.uk/patents.
The ninety five per cent traceability gap
Decision lineage with ML-DSA-65 signed causal audit ledger, application GB2608804.7, applicant reference MWI-PA-2026-016. The lineage primitive does not just record the decision. It records the causal ancestors of the decision, the retrieved documents, the prompt fragments, the tool outputs, the upstream agent messages, in a directed acyclic graph signed under a hardware-bound key at the moment each ancestor is included. A regulator inspecting an OAR-emitting deployment can walk the DAG backwards from any decision to the contaminated input or the policy violation that produced it. The ninety five per cent traceability gap is the absence of this DAG. The substrate makes the DAG the default operational by-product, not a retrofitted compliance artefact.
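The lineage idea can be sketched in a few lines. This is an illustrative model only, not the filed claim language: node and field names are invented for the example, and HMAC-SHA256 stands in for the hardware-bound ML-DSA-65 signature, which the Python standard library does not provide.

```python
# Sketch of a signed decision-lineage DAG: every decision records its causal
# ancestors, and each node is hashed and signed at the moment it is included.
# HMAC-SHA256 is a stand-in for ML-DSA-65; all names here are illustrative.
import hashlib
import hmac
import json

OPERATOR_KEY = b"stand-in for a hardware-bound private key"

def node_digest(payload: dict, parent_ids: list) -> str:
    # deterministic serialisation so the digest covers payload and ancestry
    canonical = json.dumps({"payload": payload, "parents": sorted(parent_ids)},
                           sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class LineageDAG:
    def __init__(self):
        self.nodes = {}  # node_id -> {payload, parents, sig}

    def add(self, payload: dict, parents: list = ()) -> str:
        for p in parents:
            assert p in self.nodes, "ancestor must be committed first"
        node_id = node_digest(payload, list(parents))
        sig = hmac.new(OPERATOR_KEY, node_id.encode(), hashlib.sha256).hexdigest()
        self.nodes[node_id] = {"payload": payload, "parents": list(parents), "sig": sig}
        return node_id

    def walk_back(self, node_id: str):
        # walk the DAG backwards from any decision to its inputs, depth-first
        seen, stack = set(), [node_id]
        while stack:
            nid = stack.pop()
            if nid in seen:
                continue
            seen.add(nid)
            yield nid, self.nodes[nid]["payload"]
            stack.extend(self.nodes[nid]["parents"])

dag = LineageDAG()
doc = dag.add({"kind": "retrieved_document", "uri": "s3://corpus/r1"})
prompt = dag.add({"kind": "prompt_fragment", "text": "summarise r1"})
decision = dag.add({"kind": "agent_decision", "action": "send_summary"}, [doc, prompt])
ancestry = [payload["kind"] for _, payload in dag.walk_back(decision)]
```

An inspector holding only the decision's node identifier can reconstruct the full set of inputs that produced it, which is the walk the paragraph above describes.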
The five per cent regulator-ready gap
Open Inter-Vendor Audit Record format with cross-vendor trust-bundle federation, application GB2610413.3, applicant reference MWI-PA-2026-022, twenty formal claims. OAR is a canonical, vendor-neutral schema for signed agent action records. A procurement officer evaluating two agent vendors can compare like-for-like. A regulator can validate the chain in their own browser, offline, in a tab they control, using the browser-resident verifier specified in application GB2610414.1, applicant reference MWI-PA-2026-023. Verification does not require trusting the vendor's hosted endpoint, the vendor's website, or the vendor's tooling. The five per cent of organisations that can produce a trace today are doing it inside vendor proprietary formats that survive only as long as the vendor relationship survives. OAR makes that property structural and exportable.
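The emit/verify split can be shown in miniature. The record shape below is an assumption for illustration, not the OAR schema from the filing; the point it demonstrates is that a verifier can recompute every link locally, trusting nothing the emitter supplied.

```python
# Minimal sketch of a vendor-neutral, offline-verifiable record chain.
# Field names ("prev", "action", "hash") are illustrative, not the OAR schema.
import hashlib
import json

def canonical(body: dict) -> bytes:
    # deterministic serialisation so emitter and verifier agree byte-for-byte
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def emit(prev_hash: str, action: dict) -> dict:
    # emitter side: each record carries the hash of the record before it
    body = {"prev": prev_hash, "action": action}
    return {**body, "hash": hashlib.sha256(canonical(body)).hexdigest()}

def verify_chain(records: list) -> bool:
    # verifier side: recompute every hash and every link from scratch
    prev = "genesis"
    for r in records:
        body = {"prev": r["prev"], "action": r["action"]}
        if r["prev"] != prev or r["hash"] != hashlib.sha256(canonical(body)).hexdigest():
            return False
        prev = r["hash"]
    return True

chain, head = [], "genesis"
for tool in ("fetch_statement", "issue_refund"):
    record = emit(head, {"tool": tool})
    chain.append(record)
    head = record["hash"]
```

Because `verify_chain` recomputes everything from the records alone, it could run in a browser tab the regulator controls, which is the property the browser-resident verifier filing makes structural.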
The nineteen per cent show-your-work gap
PQ-safe attestation and ML-DSA-65 signed tool-invocation ledger, application GB2608806.2, applicant reference MWI-PA-2026-008. Every action that mutates state outside the agent process is signed at the moment of commit, under a hardware-bound key whose private half lives in operator-controlled hardware. The signature is FIPS 204 ML-DSA-65, so the audit chain remains valid through the post-quantum transition that classical-signature audit chains will not survive. Showing the work stops being a configurable behaviour and starts being the default operational mode. The eighty one per cent of organisations that do not always require their agents to show their work are running stacks that cannot show the work. Mickai-core refuses to commit a high-impact action without producing the signed envelope.
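The signed-at-commit discipline reduces to a small invariant: the envelope exists before the side effect is treated as committed. A minimal sketch, again with HMAC-SHA256 standing in for ML-DSA-65 and an in-memory list standing in for the append-only ledger:

```python
# Sketch of a hash-linked, signed-at-commit tool-invocation ledger.
# HMAC-SHA256 is a stand-in for FIPS 204 ML-DSA-65; in the filed design the
# private half of the key lives in operator-controlled hardware, not a literal.
import hashlib
import hmac
import json

SIGNING_KEY = b"stand-in for the hardware-resident private half"

class ToolInvocationLedger:
    def __init__(self):
        self.entries = []        # append-only in this sketch; never rewritten
        self.head = "genesis"

    def commit(self, tool: str, args: dict) -> dict:
        body = json.dumps({"prev": self.head, "tool": tool, "args": args},
                          sort_keys=True).encode()
        entry = {
            "body": body.decode(),
            "hash": hashlib.sha256(body).hexdigest(),
            "sig": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(),
        }
        # the signed envelope is produced here, before the external side
        # effect would run; no envelope means the action does not commit
        self.entries.append(entry)
        self.head = entry["hash"]
        return entry

ledger = ToolInvocationLedger()
first = ledger.commit("send_email", {"to": "auditor@example.com"})
second = ledger.commit("update_ledger", {"amount": 120})
```

Each entry embeds the hash of its predecessor, so a retroactive edit anywhere in the chain breaks every hash after it, which is what makes the record tamper-evident rather than merely logged.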
The seventy two per cent unexplained-decisions gap
Voice-biometric-gated LLM tool invocation, application GB2608799.9, applicant reference MWI-PA-2026-013. Per-skill clearance-gated execution, application GB2608818.7, applicant reference MWI-PA-2026-021. The first primitive binds the right to invoke a high-stakes tool to a live voice-biometric attestation of the human in the loop, evaluated at the moment the tool would commit. The second treats every skill the agent can invoke as a separately gated capability with its own clearance ceiling, evaluated at invocation time against the current authority of the actor in the loop. Together they make the seventy two per cent gap an architectural impossibility, not a behavioural risk. A high-impact action without a fresh attestation does not commit. The Morse-encoded prompt-injection attack pattern that drained a 174,000 dollar autonomous trading bot in late April 2026 ends at the same gate.
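The gating logic is simple to express. The sketch below is an assumption-laden illustration: the skill names, clearance levels, and freshness window are invented, and a real deployment would check a live biometric attestation rather than a timestamp. What it shows is the shape of the rule, evaluated at invocation time, not at session start.

```python
# Illustrative per-skill clearance gate evaluated at the moment of invocation.
# Skill names, ceilings, and the TTL stand-in for a live attestation are
# assumptions for the example, not the filed claim language.
import time
from typing import Optional

SKILL_CEILINGS = {"read_report": 1, "transfer_funds": 3}  # per-skill clearance ceiling
ATTESTATION_TTL = 60  # seconds an attestation of the human in the loop stays fresh

class GateRefusal(Exception):
    pass

def invoke(skill: str, actor_clearance: int, attested_at: float,
           now: Optional[float] = None) -> str:
    now = time.time() if now is None else now
    ceiling = SKILL_CEILINGS.get(skill)
    if ceiling is None:
        raise GateRefusal(f"unregistered skill: {skill}")
    if actor_clearance < ceiling:
        raise GateRefusal(f"{skill} needs clearance {ceiling}, actor has {actor_clearance}")
    if now - attested_at > ATTESTATION_TTL:
        raise GateRefusal("attestation stale: re-attest before commit")
    return f"{skill}: committed"
```

The key design point the paragraph makes is visible in the code: the check runs per invocation against current authority, so a session that started with a valid attestation cannot coast on it indefinitely.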
The five per cent human-in-the-loop gap
Pre-commit dry-run simulation, application GB2608802.1, applicant reference MWI-PA-2026-015. First-class actions with compensating rollback, application GB2608800.5, applicant reference MWI-PA-2026-014. The dry-run primitive runs the proposed action against a copy-on-write snapshot before commit, surfaces the structured diff, and refuses to commit until the diff is approved. The compensating-rollback primitive declares, at action-definition time, the inverse for every action the agent can perform, and stores the inverse alongside the signed action record. An action that does commit is reversible by a second signed action. Human-in-the-loop becomes a property of the action surface rather than a policy that humans are expected to enforce by manually checking dashboards.
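The two primitives compose naturally, and a toy version makes the composition concrete. The action registry, diff shape, and single-field state below are invented for the example and bear no relation to the filed specifications; the sketch only shows the pattern: simulate against a snapshot, surface the diff, and store an inverse alongside every commit.

```python
# Toy sketch of dry-run-before-commit plus a declared compensating inverse.
# The action registry and state shape are illustrative assumptions.
import copy

ACTIONS = {
    "set_limit": {
        "apply":   lambda state, v: state.__setitem__("limit", v),
        "inverse": lambda state, prev: state.__setitem__("limit", prev),
    },
}

def dry_run(state: dict, action: str, value):
    # run against a snapshot; surface the structured diff, commit nothing
    snapshot = copy.deepcopy(state)
    ACTIONS[action]["apply"](snapshot, value)
    return {k: (state.get(k), snapshot[k]) for k in snapshot
            if state.get(k) != snapshot[k]}

def commit(state: dict, action: str, value):
    prev = state.get("limit")
    ACTIONS[action]["apply"](state, value)
    # the inverse is captured at commit time, so a later rollback is just
    # a second action applied to the same state
    return lambda: ACTIONS[action]["inverse"](state, prev)

state = {"limit": 100}
diff = dry_run(state, "set_limit", 250)     # diff surfaced; state untouched
rollback = commit(state, "set_limit", 250)  # state mutated, inverse retained
rollback()                                  # state restored by the inverse
```

In the filed design the approved diff and the stored inverse both travel inside the signed action record, so reversal is itself an auditable event rather than a manual repair.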
The fifty six per cent CIO-blame gap
Trust-domain externalisation architectural pattern, application GB2610415.8, applicant reference MWI-PA-2026-024. The trust-domain pattern places the signing keys, the audit ledger, and the verification surface outside the trust boundary of the agent process that produced them. Three independent trust domains. The agent process. The signing hardware. The verifier. None of them can produce the others' artefacts. When the next incident lands and the chief data officer is summoned to the audit committee, the chain is operator property, signed under a key the CIO authorised, verifiable by the auditor without trusting the vendor. The blame still falls on the CIO; the difference is that the CIO can defend it.
The regulator angle, the AI Act, and the five per cent ceiling
The European Union AI Act entered into force on 1 August 2024. The high-risk system obligations begin to apply on 2 August 2026, eleven weeks from the publication of this article. The obligations include record-keeping, traceability, human oversight, and the production of technical documentation sufficient to demonstrate compliance. The text of Article 12 of the AI Act requires high-risk AI systems to be designed and developed with capabilities enabling the automatic recording of events (logs) over the lifetime of the system, with traceability sufficient to identify situations that may result in the system presenting a risk. The wording is procedural at the policy layer; it is engineering at the substrate layer. A system that cannot produce a signed, hash-linked, externally verifiable record at the moment of each decision cannot meet Article 12 except by reconstruction, and reconstruction is what the ninety five per cent of organisations in the Dataiku survey have already admitted they cannot do.
In the United Kingdom, the AI Safety Institute (AISI) has published evaluation methodologies for frontier model behaviour. The evaluations measure properties of models in isolation. They do not measure the deployed action chain a model produces in a regulated environment, because the action chain does not exist as a verifiable artefact in the absence of a substrate. AISI has the credibility and the methodology to extend its work into substrate evaluation. The Mickai filings supply the engineering definition of what substrate evaluation tests for. The Five Eyes joint advisory of 1 May 2026, Careful Adoption of Agentic AI Services, signed by CISA, NSA, ASD ACSC, CCCS, NCSC New Zealand and NCSC United Kingdom, describes the same gap from the cyber security side. The advisory addresses operators of critical infrastructure. It tells them the autonomy has been deployed faster than the governance and the engineering layer is missing. The Mickai filings are the engineering layer.
The five per cent ceiling on regulator-ready traceability is the most actionable figure in the Dataiku report. Five per cent is the procurement signal. Below five per cent there is no defensible position. Above five per cent there is a vendor population that can credibly compete for high-risk deployments. The substrate makes the line crossable by construction rather than by exception. A vendor adopting OAR, a hardware-bound signing key, and a browser-resident verifier moves from the ninety five into the five with no change to the underlying model, the application logic, or the user experience. The substrate sits underneath. The application binds to it through a documented integration surface.
The CIO defence
Fifty six per cent of CIOs and chief data officers, surveyed across eight hundred enterprises, expect to be personally blamed when the next AI incident lands inside their organisation. Forty six per cent expect to receive credit when an incident does not. The asymmetry is not a perception problem. It is the working assumption of every C-suite the data leaders report into. The boards know who signed for the deployment. The auditors know who signed for the deployment. The regulators know who signed for the deployment. When the trace is requested and the trace cannot be produced, the named officer is the named officer.
The Mickai substrate is the audit trail that makes the named officer defensible. Signed at commit means there is a record at the moment of decision. Append-only and hash-linked means a retroactive edit is detectable. Externally verifiable means the auditor does not have to trust the vendor's tooling. Vendor-neutral schema means the chain survives a vendor exit. Trust domain externalised means no party in the loop can claim the chain was tampered with by a counterparty. Action-gated means the autonomy is bounded at the moment it could fail, not assumed safe at session start. Each property is a clause a chief data officer can demand in procurement and have an answer to in an audit. The substrate is the document that goes into the binder when the audit committee asks the named officer to produce the evidence.
The confessions are in. The engineering was filed before the confessions were collected.
Eight hundred data leaders, in eight jurisdictions, surveyed by The Harris Poll on behalf of Dataiku, admitted that the AI systems they have deployed cannot pass an audit. The admission is not a marketing line. It is the working position of the people most accountable for the consequences. The substrate that closes the gap is filed at the UK IPO under one British inventor. Thirty one applications. Nine hundred and fourteen claims. Filed between 30 March and 4 May 2026, four weeks before the report was published, all on the public register, applicant Micky Irons (Mickarle Sean Junior Wagstaff-Irons).
The conversation belongs to the chief data officers, the procurement officers, the auditors, and the regulators who now have an engineering specification to test vendors against. Procurement asks the vendor to emit OAR. The vendor either does or does not. AISI extends its evaluation methodology to score substrate availability. ARIA opens a procurement-substrate workstream. The Crown Commercial Service updates the AI procurement frameworks to require OAR or equivalent open-substrate audit records by default. The institutional architecture exists. The substrate is on the public record. The next move is the procurement decision that makes the engineering layer part of the policy, not external to it.
Micky Irons is contactable at press@mickai.co.uk. The portfolio is at mickai.co.uk/patents. The substrate is on the UK IPO public register. The conversation is open.
“Ninety five per cent of eight hundred data leaders confessed they could not trace their AI decisions end-to-end. Five per cent could prove the trace to a regulator. The substrate that moves the line is filed at the UK IPO under one British inventor. The confessions are in. The engineering was filed before the confessions were collected.”
Sources and references
- Dataiku and The Harris Poll, Global AI Confessions Report: Data Leaders Edition, published 6 May 2026, eight hundred and twelve respondents across the United States, United Kingdom, France, Germany, the United Arab Emirates, Singapore, South Korea and Japan. Survey conducted online between 20 and 29 August 2025.
- European Commission, Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), in force 1 August 2024, high-risk obligations applying from 2 August 2026, Article 12 (record-keeping and logs).
- UK AI Safety Institute (AISI), evaluation methodology and senior advisor materials, 2024 to 2026.
- Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), Australian Signals Directorate ASD ACSC, Canadian Centre for Cyber Security (CCCS), NCSC New Zealand, NCSC United Kingdom, Careful Adoption of Agentic AI Services, joint guidance, 1 May 2026.
- Mickai patent portfolio, mickai.co.uk/patents, thirty one filed UK patent applications, nine hundred and fourteen formal claims, named inventor Micky Irons (Mickarle Sean Junior Wagstaff-Irons), filed between 30 March 2026 and 4 May 2026.
- GB2610413.3, MWI-PA-2026-022, Open Inter-Vendor Audit Record (OAR) Format with Cross-Vendor Trust-Bundle Federation, twenty claims.
- GB2610414.1, MWI-PA-2026-023, Browser-Resident Offline Post-Quantum Verifier.
- GB2610415.8, MWI-PA-2026-024, Trust-Domain Externalisation Architectural Pattern.
- GB2608806.2, MWI-PA-2026-008, PQ-Safe Attestation and ML-DSA-65 Signed Tool-Invocation Ledger.
- GB2608804.7, MWI-PA-2026-016, Decision Lineage with ML-DSA-Signed Causal Audit Ledger.
- GB2608799.9, MWI-PA-2026-013, Voice-Biometric-Gated LLM Tool Invocation.
- GB2608818.7, MWI-PA-2026-021, Per-Skill Clearance-Gated Execution.
- GB2608800.5, MWI-PA-2026-014, First-Class Actions with Compensating Rollback.
- GB2608802.1, MWI-PA-2026-015, Pre-Commit Dry-Run Simulation.
- FIPS 204, Module-Lattice-Based Digital Signature Standard (ML-DSA), NIST, finalised August 2024.
- Mickai trade mark UK00004373277 (separate registration, not a patent reference).