The Guardian named opaque AI surveillance as the real workplace threat. The substrate makes it verifiable for every party at once.
Professor Nazrul Islam's Guardian piece of 11 May 2026 named the divide that the AI-at-work conversation keeps avoiding: workers in lower-autonomy roles whose working lives are increasingly shaped by opaque, AI-powered systems of surveillance and control. The opacity is structural, and the fix is structural. Mickai™ is engineered as a cryptographic primitive that the worker, the union, the employer, and the regulator can all replay deterministically. The underlying pattern, trust-domain externalisation, is filed at the UK IPO.
What Nazrul Islam actually argued
Professor Nazrul Islam, writing in The Guardian on 11 May 2026, put a name to a divide that the AI-at-work conversation has been avoiding for two years. The divide is not between people who will keep their jobs and people who will lose them. The divide is between workers in higher-autonomy roles (analysts, consultants, lawyers, academics, managers) who use AI to extend their skills, and workers in lower-autonomy roles whose working lives are increasingly shaped by opaque, AI-powered systems of surveillance and control. For the first group, AI is a copilot. For the second group, in his words, AI is not an assistant; it is a boss. The Guardian framing is editorial. The engineering framing underneath it is that the opacity is structural, the audit gap is structural, and the fix is structural.
The piece is consistent with a wider 2026 line of reporting: a CNBC analysis on 5 May reported that almost every Fortune 500 firm is now tracking AI usage at scale; The Register on 22 April covered the Meta employee surveillance row; The Week ran a piece on lawmakers and unions pushing back against AI surveillance at work. None of those pieces resolves the engineering problem they describe. The engineering problem is that the action chain of the surveilling AI is held under the vendor's key, in the vendor's format, on the vendor's endpoint. The worker does not have a copy. The union does not have a copy. The employer has only the vendor's surface. The regulator has nothing at the primitive layer. Trust in the system therefore rests on the vendor's say-so, which is exactly the position the Guardian piece is naming.
Why the opacity is a substrate problem, not a policy problem
There are good policy responses to opaque AI surveillance: UK GDPR Article 22 (rights related to automated individual decision-making), the ICO's workplace monitoring guidance, the AI Cyber Security Code of Practice from DSIT and NCSC, the EU AI Act provisions on high-risk workplace systems. Each gives a worker a legal handle on the AI that surveilled them. None of them gives the worker a cryptographic handle. A legal handle requires the worker to dispute the AI's output, to subject-access-request the audit trail, to read whichever format the vendor produces, and to take the vendor's representation of that audit trail on trust. The mismatch in cryptographic position is the structural reason policy alone does not close the trust gap. The substrate question is upstream of the policy question.
The Mickai SIOS audit ledger is the substrate-layer answer. Every committed action across a deployed AI workload is serialised in CBOR, hashed under SHA-3-512, signed under the operator's TPM-bound FIPS 204 ML-DSA-65 key, and appended to a hash-linked chain. The chain is the same chain for every party: the operator (employer), the regulator (ICO, Information Commissioner's Office), the union (TUC-affiliated or otherwise), and the worker. The verifier runs in any browser, with no network call. Each party can replay the chain and emit one of four deterministic verdicts per record: VERIFIED, INVALID, STALE, REVOKED. There is no fifth verdict. There is no probabilistic answer. The chain either holds or it does not.
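The append path described above can be sketched in a few lines of Python. This is an illustration, not the Mickai implementation: canonical CBOR serialisation and the TPM-bound ML-DSA-65 signature are stood in for by deterministic JSON and a keyed-hash placeholder (neither CBOR nor ML-DSA ships in the Python standard library); the SHA-3-512 hash-linking is the part carried over faithfully, and all field names are hypothetical.

```python
import hashlib
import json

def canonical_bytes(record: dict) -> bytes:
    # Stand-in for canonical CBOR serialisation (the substrate uses CBOR).
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def sign(digest: bytes, key: bytes) -> bytes:
    # Placeholder for the TPM-bound FIPS 204 ML-DSA-65 signature.
    return hashlib.sha3_512(key + digest).digest()

def append_record(chain: list, action: dict, key: bytes) -> dict:
    # Each record commits to the previous record's hash, forming the chain.
    prev_hash = chain[-1]["hash"] if chain else "00" * 64
    record = {"action": action, "prev": prev_hash}
    digest = hashlib.sha3_512(canonical_bytes(record)).digest()
    record["hash"] = digest.hex()
    record["sig"] = sign(digest, key).hex()
    chain.append(record)
    return record

chain: list = []
key = b"operator-signing-key"  # hypothetical key material for the sketch
append_record(chain, {"actor": "rostering-ai", "decision": "shift-reassigned"}, key)
append_record(chain, {"actor": "rostering-ai", "decision": "leave-denied"}, key)
```

The point of the shape, not the placeholder crypto, is that each record commits to its predecessor's hash before being signed, so any later alteration breaks every subsequent link.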
Trust-domain externalisation, in plain terms
The architectural pattern that makes the same chain replayable by four parties at once is trust-domain externalisation. The operator holds the signing key. The schema is open. The verifier is open. The conformance vectors are open. The vendor of the underlying AI is not the trust root; the vendor is a participant. When the vendor changes, when the vendor is acquired, when the vendor fails, the chain continues to verify under the operator's key. The worker who was surveilled at Time T can, at Time T plus eighteen months, walk the chain on a phone in a browser tab and produce a deterministic verdict per action that affected them. That is the structural property that policy alone does not produce. It is the property the substrate produces by construction.
Applied to the workplace, trust-domain externalisation has four immediate consequences. First, the worker has a cryptographic position equivalent to the employer's. Second, the union representing the worker has the same position. Third, the ICO, or any regulator, can inspect the chain without depending on the vendor's tooling or on the cooperation of either the vendor or the employer. Fourth, the chain is durable: an AI vendor swap, an employer-to-employer transfer of a contract, a regulatory inspection at year five, all leave the chain readable and verifiable. None of those properties is achievable on top of vendor-format audit logs.
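A replay that emits exactly four verdicts can be sketched as follows. The verdict triggers here are assumptions for illustration, since the article names the four verdicts but not their precise semantics: INVALID when the hash link or signature check fails, REVOKED when the signing key sits on a revocation list, STALE when the record's timestamp falls before a declared freshness horizon, VERIFIED otherwise. The signature is a keyed-hash placeholder for ML-DSA-65 and all field names are hypothetical.

```python
import hashlib
import json
from enum import Enum

class Verdict(Enum):
    VERIFIED = "VERIFIED"
    INVALID = "INVALID"
    STALE = "STALE"
    REVOKED = "REVOKED"

def digest_of(record: dict) -> bytes:
    # Recompute the commitment over the signed fields only.
    body = {k: record[k] for k in ("action", "prev", "ts", "key_id")}
    return hashlib.sha3_512(
        json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    ).digest()

def sig_of(digest: bytes, key: bytes) -> str:
    # Placeholder for the ML-DSA-65 signature.
    return hashlib.sha3_512(key + digest).hexdigest()

def make_record(action, prev, ts, key_id, key):
    rec = {"action": action, "prev": prev, "ts": ts, "key_id": key_id}
    d = digest_of(rec)
    rec["hash"] = d.hex()
    rec["sig"] = sig_of(d, key)
    return rec

def verdict(record, prev_hash, keys, revoked, horizon) -> Verdict:
    d = digest_of(record)
    if record["prev"] != prev_hash or record["hash"] != d.hex():
        return Verdict.INVALID
    if record["sig"] != sig_of(d, keys[record["key_id"]]):
        return Verdict.INVALID
    if record["key_id"] in revoked:
        return Verdict.REVOKED
    if record["ts"] < horizon:
        return Verdict.STALE
    return Verdict.VERIFIED

def replay(chain, keys, revoked, horizon):
    # Deterministic walk: one verdict per record, no fifth outcome.
    prev, out = "00" * 64, []
    for rec in chain:
        out.append(verdict(rec, prev, keys, revoked, horizon))
        prev = rec["hash"]
    return out

keys = {"op-1": b"operator-key"}  # hypothetical operator key registry
r1 = make_record({"decision": "shift-reassigned"}, "00" * 64, 100, "op-1", keys["op-1"])
r2 = make_record({"decision": "leave-denied"}, r1["hash"], 200, "op-1", keys["op-1"])
verdicts = replay([r1, r2], keys, revoked=set(), horizon=150)
```

Because the verdict function is a pure function of the record, the prior hash, the key set, the revocation list, and the horizon, any party running the same walk over the same chain gets the same verdicts, which is the deterministic-replay property the text claims.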
Where this maps to UK published guidance
The Information Commissioner's Office has published workplace monitoring guidance that names transparency, lawful basis, and worker access to records as structural requirements. The substrate satisfies the worker-access requirement at the cryptographic primitive layer (the chain is exportable on request and verifiable without dependency on employer or vendor tooling). The substrate satisfies the transparency requirement by construction (the schema is open, the conformance vectors are open, the verifier is open). The lawful basis requirement is a policy question for the employer, not an engineering one; the substrate is neutral to it.
UK GDPR Article 22 protects the data subject against being subject to a decision based solely on automated processing that produces legal effects or similarly significant effects. The substrate supports Article 22 challenge by giving the data subject (the worker) a deterministic record of the actions the AI took, the inputs the AI used, the time at which the decision was committed, and the key under which it was signed. The substrate does not predetermine the outcome of an Article 22 challenge; it makes the challenge tractable on cryptographic evidence rather than on procedural representation.
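To make the Article 22 point concrete: under a hypothetical record layout in which each committed action carries an identifier for the data subject it affects, the worker's evidence extract is a plain filter over the chain, with chain order preserved so the hash links inside the extract remain checkable against the full chain. The field names are illustrative, not the substrate's schema.

```python
def extract_for_subject(chain: list, subject_id: str) -> list:
    # Preserve chain order so hash links within the extract can still be
    # checked against the full chain during a dispute.
    return [rec for rec in chain if rec["action"].get("subject") == subject_id]

# Minimal illustrative chain (hashes abbreviated, structure hypothetical).
chain = [
    {"action": {"subject": "w-17", "decision": "shift-reassigned"}, "hash": "a1"},
    {"action": {"subject": "w-09", "decision": "leave-approved"}, "hash": "b2"},
    {"action": {"subject": "w-17", "decision": "score-updated"}, "hash": "c3"},
]
extract = extract_for_subject(chain, "w-17")
```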
The DSIT and NCSC AI Cyber Security Code of Practice, in its 2024 to 2025 consultation iteration, names operator-side accountability for AI deployments. The substrate is engineered against that requirement explicitly. The trust-domain externalisation pattern is filed at the UK Intellectual Property Office under the GB2607309.8 to GB2610422.4 family. The Mickai trade mark is registered at UK00004373277. The schema, the conformance vectors, and the reference verifier are scheduled for joint open-source release upon UK IPO acknowledgement of the OAR family.
What an employer, a union, and a regulator each do this week
Three parallel actions that fit inside existing engagement models.
- Employer: inventory the AI-touched decisions that affect workers in lower-autonomy roles (rostering, performance scoring, leave decisioning, customer-call handling, gig-economy task assignment, productivity monitoring). Identify the ones where an Article 22 challenge is foreseeable. Pilot the audit substrate against one of those workloads.
- Union: request, from each affected employer, the format of the audit trail of every AI-touched decision affecting members. If the format is vendor-only, treat that as the employer locking the audit to the vendor. The substrate gives the union the same chain the employer holds, on the union's own infrastructure, replayable independently.
- Regulator: an open invitation stands for the ICO to take up the engineering conversation that sits underneath its published workplace monitoring guidance. The substrate is the engineering counterpart to that guidance. The conformance vectors and the reference verifier are available for inspection on request via press@mickai.co.uk.
An invitation to the academy
Professor Nazrul Islam's research on worker and workplace AI coexistence, cited in the 2024 Economic Report of the President, is the policy and labour-economics layer above the engineering question. The engineering question (the substrate that makes opaque AI verifiable to every party) sits at the layer underneath. The two layers are complementary, not competing. UK academics working on workforce AI policy, labour-rights legal scholars, and trade-union-affiliated research bodies are invited to a fifteen-minute briefing at any time: press@mickai.co.uk.
Sources and references
- Nazrul Islam, Forget the AI job apocalypse. AI's real threat is worker control and surveillance, The Guardian, 11 May 2026.
- CNBC, Almost every Fortune 500 tracking AI usage: what it means for workers, 5 May 2026.
- The Register, Meta employee surveillance software row, 22 April 2026.
- The Week, Lawmakers and unions push back against AI surveillance at work, 2026.
- UK GDPR, Article 22 (rights related to automated individual decision-making).
- Information Commissioner's Office, workplace monitoring guidance.
- DSIT and NCSC, AI Cyber Security Code of Practice (2024 to 2025 consultation).
- FIPS 204 (ML-DSA), NIST post-quantum digital signature standard.
- Mickai trade mark UK00004373277, classes 9 and 42, filed 15 April 2026.
- Mickai trust-domain externalisation architectural pattern, filed at the UK IPO.