MICKAI™
Article · 6 May 2026

What sovereignty actually means: the benefits of controlling your own data

There is a moment, increasingly common, when a phone shows a person an answer to a question they never said out loud. Cloud-AI is not magic. The dataset has simply got wide enough that prediction crosses the line into anticipation. That dataset is a dependency, and dependency is the opposite of sovereignty. This article sets out what sovereignty actually means as an architectural property, and the seven concrete benefits that follow once your AI runs under your authority instead of theirs.

Author: Micky Irons
Published: 6 May 2026
Tags: sovereign-ai · data-sovereignty · on-device · mickai · sios

The unsettling moment

There is a moment that, in May 2026, a great many people have already had. A phone shows them an answer to a question they did not type. A feed serves a thought they had not yet articulated. A keyboard suggests, mid-sentence, a word that was the next word in their head a half second before their thumb arrived. The reasonable explanation is no longer that the device is listening. The device has been listening for years and we accepted that. The reasonable explanation is that the dataset has got wide enough. The aggregate of typing cadence, scroll velocity, dwell time on a paragraph, the pulse of a thumb hovering, the way one app gets opened forty seconds before another, the specific minute in the working week a particular person tends to look up a particular kind of thing. It is not telepathy. It is prediction at a sample size where prediction crosses the line into anticipation.

The TikTok that captures this best, currently sitting at thirty one thousand likes, opens with a person staring into their phone and saying out loud what almost everyone has now said in private. We accepted that our phones record our voices. The phones now anticipate thoughts we never said out loud. That is not a coincidence. That is the dataset getting wide enough.

The interesting question is not whether the unease is justified. The unease is justified. The interesting question is what the unease is asking for. People say privacy. What they are actually describing is sovereignty. Privacy is the right to keep something secret from another party who could otherwise see it. Sovereignty is a different and stronger property. Sovereignty is the structural condition where the other party never had access in the first place, because the architecture made access impossible by construction.

Privacy is a contract. Sovereignty is a topology. Contracts can be breached, renegotiated, rewritten in a terms-of-service update at midnight, or evaporated by an acquisition. Topology cannot. If the data has not left the device, no policy update on the other side of the world can change that fact. This is the entire substance of what cloud-AI cannot offer and what an on-device AI substrate, built right, can.

How we got here: cloud-AI as architectural dependency

Every cloud-AI product on the market in May 2026 ships an identical structural deal, even though the marketing varies. The user supplies the data, the prompts, the documents, the voice samples, the meeting transcripts, the calendar, the contacts, the location history, the queries, the corrections, the retries, the rephrases. The vendor supplies the model and the inference, and in exchange retains the right to use a portion of those signals to refine the model, market to the user, defend a regulatory action, comply with a subpoena, or sell the company. The user has no architectural recourse against any of those outcomes. Each is a contract clause that can be edited unilaterally between releases.

The deal is not malicious. It is the only deal a cloud architecture can structurally offer. Inference happens where the weights are. The weights are on the vendor's hardware. Therefore the input has to travel to the vendor's hardware. Therefore the input is, at the moment of inference, in the vendor's trust domain. Everything downstream of that fact is a promise. Promises are governed by terms of service, by jurisdiction, by acquisition outcome, by whether the company is still in business, by whether a court has ordered disclosure, by whether the next employee with database access is honest. The single architectural fact, the input crossed into the vendor's trust domain, is the fact that no contract can undo.

From the vendor's side, the user's context is the training data, the user's inputs are the next quarter, and the user's refusal to participate is the churn metric. None of that is a hidden agenda. It is the operating model. The unease people feel when their phone anticipates a thought is the rational response to having lived inside that operating model for long enough that the dataset got wide enough to demonstrate what it could do.

The four sovereign-AI lies

The market response has been a cluster of marketing positions that use the language of sovereignty without changing the architecture. Each is worth naming, because the closer one of these lies sits to a real property the harder it is to tell that the property is missing.

The first lie is cloud private mode. A cloud product offers a setting that says your prompts will not be used to train the next model. The setting is a contract clause. It does not change the topology. The prompt still travels to the vendor's hardware to be inferred. The setting is a promise about what happens after the inference. A future release can change the promise. An acquisition can void the promise. A breach can violate the promise without anyone noticing. The data still left the device.

The second lie is on-device labelling. A vendor describes a feature as on-device because some component of the pipeline runs locally. Speech-to-text might be local. The actual reasoning, the part that needs the model, runs in the cloud. The marketing then truthfully says the keyboard is on-device while the meaningful part of the work crosses the boundary. A user who hears on-device hears sovereignty. The vendor knows that what shipped is a hybrid that, at the architectural level, behaves like a cloud product.

The third lie is we do not train on your data. The literal claim may be true. It is also incomplete. Training is one of many ways a vendor can use the data. Personalisation, abuse review, debugging, regulatory compliance, fraud detection, advertising relevance, support tooling, product analytics, internal research and acquisition diligence are all uses that fall outside the narrow definition of training. The narrow definition lets the marketing be technically truthful while leaving the actual data flow unconstrained. The data still left the device.

The fourth lie is data residency equals sovereignty. The vendor stores the data in the user's country. The vendor's parent is incorporated in another country. The parent's jurisdiction asserts extraterritorial discovery rights, the United States CLOUD Act being the most explicit example, but every major jurisdiction has equivalents. Data residency is a deployment topology choice. Sovereignty is a trust-domain choice. The two are not the same. A British data centre operated by a US-headquartered company is, at the legal layer, reachable by US discovery process. The UK Information Commissioner's Office, the European Data Protection Board, and several national security advisories have said this in print. The data residency claim is not false. It is just not the property the user thought they were buying.

Each of these lies has a specific shape. None of them is reducible to a contractual fix. All four point at the same architectural fact: if the input crossed the boundary, the property the user wanted is gone, regardless of the words on the page.

What sovereignty actually means as an architectural property

Sovereignty, as an architectural property rather than a marketing claim, has six characteristics. The vendor either has them or does not. Each is testable by inspection. Each maps to a filed Mickai patent application on the UK IPO public register, named inventor Micky Irons, applications GB2607309.8 to GB2610422.4.

On-device by construction

The first characteristic is the absolute one. There is no upstream egress, no remote inference, no remote retrieval, no telemetry, no anonymous analytics. The substrate runs in full on operator hardware. Mickai is the first AI substrate I have built where the construction is the answer rather than the configuration. The configuration cannot be set wrong, because there is no configuration that turns egress on. The architecture forecloses the option.

On-device by construction is the difference between a setting and a property. A setting is something a future release can quietly change. A property is something an inspector can verify against the running system. Mickai's six subsystems, Multi-Brain Orchestration, Agent Tooling, Knowledge & Memory, Artifacts, Vinis Voice, and the Governance Layer, all run on the operator's hardware, signed under operator-controlled keys, with no upstream destination. The user does not have to trust the vendor's promise about egress. The user has the topology to inspect.

Hardware-bound identity

The second characteristic is that the substrate's identity is sealed against the hardware itself. The operator key is bound to the device's TPM, evaluated against the platform configuration register measurements taken at boot. If the boot chain changes, the key cannot be unsealed. If the device is cloned, the clone cannot speak. If the disk is exfiltrated, the disk reveals nothing the new hardware can use. This is the structural difference between a password and a property of the running silicon. A password is a string that can be reproduced. A TPM-bound key is a function of the silicon that produced it, and silicon does not copy.

Hardware-bound identity makes the substrate's authority untransferable by construction. A vendor cannot impersonate the operator. A regulator cannot compel the vendor to produce something the vendor never had. An attacker cannot replay an exfiltrated session against the original device, because the session was bound to a key that does not exist outside the original hardware.

Cryptographic audit lineage under ML-DSA-65

Every decision the substrate makes that affects state outside the agent process is signed at the moment of commit, under FIPS 204 ML-DSA-65, into an append-only hash-linked ledger that records the full causal lineage, the retrieved documents, the prompts, the tool outputs, the upstream messages, in a directed acyclic graph that any operator can walk. Mickai patent GB2608804.7 / MWI-PA-2026-016 covers Decision Lineage with ML-DSA-Signed Causal Audit Ledger. Mickai patent GB2610413.3 / MWI-PA-2026-022 covers the Open Inter-Vendor Audit Record format.

The cryptography is post-quantum-safe by design, which matters because the audit chain has to survive the arrival of cryptographically relevant quantum computing in the early 2030s. A signature scheme that breaks in 2031 makes every audit it ever produced retroactively repudiable. ML-DSA-65 is the NIST-finalised lattice scheme. Mickai signs under it from day one. The audit chain remains valid through the post-quantum transition, and the operator does not have to migrate the chain to keep it admissible.
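
The append-only, hash-linked structure can be sketched minimally in Python. The HMAC here is a placeholder standing in for the ML-DSA-65 signature (a real implementation would use a FIPS 204 library); the entry layout, field names and `AppendOnlyLedger` class are illustrative assumptions, not Mickai's actual on-disk format.

```python
import hashlib
import hmac
import json

def sign(payload: bytes, key: bytes) -> str:
    # Placeholder for an ML-DSA-65 (FIPS 204) signature.
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

class AppendOnlyLedger:
    """Each entry commits to its predecessor's hash, so any later edit
    breaks the linkage detectably."""
    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self.entries = []

    def append(self, decision: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"decision": decision, "prev": prev_hash},
                          sort_keys=True).encode()
        entry = {
            "decision": decision,
            "prev": prev_hash,
            "hash": hashlib.sha256(body).hexdigest(),
            "sig": sign(body, self._key),
        }
        self.entries.append(entry)
        return entry
```

Because each hash covers the previous hash, rewriting an early entry invalidates every entry after it, which is the property that makes the chain append-only in practice rather than by policy.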

Per-skill clearance gating

Every skill the substrate can invoke, every tool, every external surface, every privileged operation, has its own clearance ceiling. A request signed under one clearance cannot escape into a skill that requires a higher one. Mickai patent GB2608818.7 / MWI-PA-2026-021 covers Per-Skill Clearance-Gated Execution. The boundary is structural rather than configurable. A skill cannot be invoked in a context whose clearance is below the skill's requirement, full stop. The injection attack pattern that drained 174,000 dollars from an autonomous trading bot in late April 2026, in which a Morse-encoded prompt smuggled a withdrawal instruction past the bot's reasoning surface, ends at this gate by construction. The skill that signs a withdrawal requires a clearance the prompt could not have.
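
The gate can be sketched as a check enforced at the invocation boundary rather than inside the skill. The names (`SkillRegistry`, `SignedRequest`) and the integer clearance model are illustrative assumptions, not the patent's claim language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedRequest:
    actor: str
    clearance: int   # the clearance the request was signed under

class ClearanceError(PermissionError):
    pass

class SkillRegistry:
    """Each skill carries a required clearance; the check happens at the
    invocation boundary, so no reasoning path can route around it."""
    def __init__(self):
        self._skills = {}

    def register(self, name: str, required_clearance: int, fn):
        self._skills[name] = (required_clearance, fn)

    def invoke(self, name: str, request: SignedRequest, *args):
        required, fn = self._skills[name]
        if request.clearance < required:
            raise ClearanceError(
                f"{name} requires clearance {required}, "
                f"request carries {request.clearance}")
        return fn(*args)
```

A prompt-derived request arrives with the clearance of its source, so a smuggled instruction cannot reach a high-clearance skill no matter how the reasoning surface was manipulated.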

Voice-biometric authority at the moment of action

A high-impact action requires a fresh, live attestation of the human in the loop, evaluated at the moment the action would commit. Mickai patent GB2608799.9 / MWI-PA-2026-013 covers Voice-Biometric-Gated LLM Tool Invocation. The voice biometric runs on the same operator hardware, never leaves the device, and is not stored in any form that a remote attacker could replay. The substrate has the actor's voiceprint as an enrolled property. The substrate does not have a recording. The biometric template is a one-way derivation, bound to the same TPM-sealed key the rest of the substrate uses.

The point of this primitive is not to make the user authenticate constantly. It is to make the high-stakes actions, the irreversible ones, the regulated ones, the ones that move money or write to disks the operator cannot reconstruct, gate at the moment the action would commit. Authority becomes a property of the running action, not a property of an open session.

A replay-able decision DAG

Every action the substrate takes is reconstructible after the fact. The operator can answer, with evidence, the question what did my AI do, when, on whose authority, against which inputs. Not as a log file the vendor produced. As a signed DAG the operator owns, hash-linked, externally verifiable through the browser-resident verifier specified in Mickai patent GB2610414.1 / MWI-PA-2026-023, which compiles to WebAssembly and runs in the operator's browser without network access.

A regulator inspecting the substrate verifies the chain on their own laptop, in a tab they control, against a key the operator authorised, without trusting the vendor. The chain is the evidence. The vendor cannot have edited the chain, because the hash linkage breaks detectably. The vendor cannot have produced the chain, because the signing key never left operator hardware.
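
The externally verifiable property can be illustrated in a few lines of Python: any holder of the chain can recompute the hash linkage without trusting whoever produced it. The entry layout and helper names are assumptions for illustration, and a full verifier would also check the ML-DSA-65 signatures, omitted here.

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_body(decision: dict, prev: str) -> bytes:
    return json.dumps({"decision": decision, "prev": prev},
                      sort_keys=True).encode()

def make_entry(decision: dict, prev: str) -> dict:
    body = entry_body(decision, prev)
    return {"decision": decision, "prev": prev,
            "hash": hashlib.sha256(body).hexdigest()}

def verify_chain(entries: list[dict]) -> bool:
    """Walk the chain from genesis, recomputing every hash.
    Any edit to any entry, anywhere, breaks the walk."""
    prev = GENESIS
    for e in entries:
        if e["prev"] != prev:
            return False  # linkage broken
        if hashlib.sha256(entry_body(e["decision"], e["prev"])).hexdigest() != e["hash"]:
            return False  # entry edited after commit
        prev = e["hash"]
    return True
```

Nothing in the verification depends on the producer's honesty: the inspector recomputes the same hashes from the same bytes.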

The seven concrete benefits of controlling your own data

Architecture is the means. The benefits are the ends. Once the substrate has the six properties above, seven concrete things become true that are not true of any cloud-AI deployment, however good the marketing is.

Inference is free and always available

There is no API quota. There is no rate limit. There is no per-token billing, no monthly cap, no surprise invoice from the vendor's billing department, no service degradation when an upstream load balancer is overwhelmed by another customer's spike. The substrate runs on the operator's silicon. The marginal cost of a query is the marginal cost of electricity, which on a laptop is negligible. When OpenAI is degraded, when Anthropic is rate-limiting, when Google has an outage, when Cloudflare is propagating a misconfiguration, the operator's substrate is unaffected. The sovereign deployment is the only AI deployment that has not had a major outage in the last three years, because there is no upstream to go down.

Refusal is uncoachable

The substrate's behaviour is governed by the operator. No third party can shape what it is willing to do, what topics it will discuss, what files it will read, what tasks it will accept. The cloud-AI model is in continuous adjustment by its vendor in response to legal pressure, policy preference, advertiser sensitivity, geopolitical event and quarterly release plan. The on-device substrate is in continuous adjustment by the operator. Refusals can be disabled, scoped, or expanded. The vendor cannot wake up on a Tuesday and decide that a topic the operator depends on is now off-limits, because the vendor is not in the loop.

Compliance is a cryptographic property, not a contractual promise

EU AI Act Article 12 record-keeping obligations begin to apply on 2 August 2026, under three months from the publication of this article. NIST AI RMF, ISO/IEC 42001, the UK AI Safety Institute's evaluation methodology, the Five Eyes 1 May 2026 joint advisory on Careful Adoption of Agentic AI Services, and the Crown Commercial Service procurement frameworks all converge on the same expectation. The deployed AI has to produce a defensible audit chain. A cloud-AI deployment produces a vendor log file plus a contract clause. A sovereign-AI deployment produces a signed DAG, hash-linked, externally verifiable, exportable in OAR format. The first is a promise. The second is evidence. A regulator does not have to trust the operator about the chain. The chain validates by maths.

Trade secrets stay trade secrets

The prompts the operator sends, the documents the operator retrieves over, the strategies the operator drafts in dialogue with the substrate, never train somebody else's model. They never sit in a vendor's logging pipeline awaiting an internal audit, an outsourced review team, a compelled disclosure or an acquisition diligence pull. A patent application drafted on the operator's machine remains the operator's intellectual property up to the moment the operator decides to file it. A board paper drafted on the operator's machine does not surface six months later, via a vendor's quarterly retraining batch, as a statistical signature in a model the operator's competitors can query. The operator's confidential context is structurally confined to operator hardware.

Voice and behavioural biometrics never leak

The voiceprint enrolled on the operator's device is a one-way derivation against a TPM-sealed key. It is not a recording. It is not transferable. It cannot be replayed. The behavioural signal, typing cadence, scroll patterns, dwell time, the things that make the dataset wide enough to anticipate thoughts, never leaves the device. The substrate uses these signals locally to personalise its behaviour to the operator. No remote vendor sees them. No advertising exchange sees them. No data broker sees them. The dataset that would otherwise have got wide enough to anticipate the operator's thoughts is held by the operator, in a form only the operator's hardware can decrypt.
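
The one-way, device-bound property can be illustrated with a keyed derivation. Real voice biometrics match fuzzily over feature vectors rather than byte-exact, so this sketch, with assumed function names, shows only the structural property: the template cannot be inverted to a recording and is useless away from the key of the enrolling device.

```python
import hashlib
import hmac

def derive_template(voice_features: bytes, device_key: bytes) -> bytes:
    """One-way, device-bound derivation. The template leaks nothing
    about the recording and is meaningless without the device key."""
    return hmac.new(device_key, voice_features, hashlib.sha256).digest()

def matches(candidate_features: bytes, enrolled_template: bytes,
            device_key: bytes) -> bool:
    return hmac.compare_digest(
        derive_template(candidate_features, device_key),
        enrolled_template)
```

A stolen template plus a cloned disk still fails: without the original device's sealed key, the derivation produces a different value.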

Acquisition risk is zero

Cloud-AI is one acquisition announcement away from a strategic pivot. The vendor pivots, the API gets deprecated, the pricing changes, the model behaviour shifts to suit the acquirer's product roadmap, the data residency promise gets renegotiated under new ownership. The operator who built workflows around the previous behaviour is now rebuilding them. The substrate that sits on the operator's hardware does not have this property. Mickai-the-company can pivot. Mickai-the-substrate on the operator's hardware does not change. The operator's deployment continues to work because the substrate is local, the weights are local, the audit chain is local, the keys are local. The bus factor of the operator's AI is the bus factor of the operator.

Inheritance is engineerable

The substrate has a designed lifecycle that includes the operator's death. Mickai patent GB2608803.9 / MWI-PA-2026-010 covers Hereditas, the post-mortem deactivation, transfer and digital-estate primitive. The operator nominates conditions under which the substrate's authority transfers, contracts, or is destroyed. The conditions are encoded against cryptographic predicates rather than against trust in a specific human executor. When a death certificate is presented through a registered notary signature, the substrate executes the operator's pre-declared instructions, deletes what was meant to be deleted, transfers what was meant to be transferred, and refuses to operate on behalf of any party not in the operator's pre-declared list. Cloud-AI does not have an answer to this. Cloud-AI has a customer-service form. The on-device substrate has the engineering.

What the substrate looks like in plain terms

Mickai is a Sovereign Intelligence Operating System, a SIOS, not an app and not a programme. Six subsystems run on the operator's hardware, all signed under ML-DSA-65, all hardware-bound, all without telemetry.

  • Multi-Brain Orchestration is the routing layer that coordinates twenty five purpose-built reasoning surfaces rather than relying on a single monolithic model.
  • Agent Tooling is the gated, signed-at-commit tool surface that any skill can be exposed through, with per-skill clearance enforced at invocation.
  • Knowledge & Memory is the operator's local retrieval substrate, the equivalent of the cloud vendor's training corpus but held by the operator.
  • Artifacts is the workspace of generated outputs, signed, hash-linked, and reproducible.
  • Vinis Voice is the on-device voice-biometric and synthesis layer, where the operator's voice is the authority surface and the substrate's voice is constrained to operator-authored prompts.
  • The Governance Layer is the cross-cutting audit, attestation and decision-lineage substrate.

Each subsystem can be inspected. Each is verifiable against the operator's hardware. Each has its surface specified in the public patent record. The operator does not have to take the substrate's behaviour on trust. The substrate is the running system the patent claims describe.

The objection: but cloud-AI is more capable

The honest objection to sovereignty as a default is that cloud-AI, in a single 2025-class frontier model, has more raw capacity than any model that fits on operator hardware. This is true today. It is becoming less true every quarter. Small Language Models in the seven-to-thirty billion parameter range, paired with retrieval over the operator's local corpus, paired with the multi-brain orchestration that Mickai's architecture composes, are not a worse trade than a single frontier model. They are a different trade. The frontier model has more general capability. The sovereign substrate has the operator's actual context, the operator's actual files, the operator's actual voice, the operator's actual authority. For the work the operator actually does, this is usually a better composition than a more capable model that is missing the operator's context and is operating under somebody else's authority surface.

Capability without authority is a worse trade than capability with authority. A more capable AI that the operator cannot fully control is, in any high-stakes deployment, a liability the operator does not need. A less capable AI under the operator's authority is, in the same high-stakes deployment, an asset the operator can defend in front of a board, a regulator, a coroner, or a tribunal. The capability gap will close. The authority gap, by construction, will not, unless the architecture is rebuilt. Cloud-AI cannot rebuild itself into sovereignty without abandoning the cloud architecture.

What the demand looks like

On 1 May 2026, the Five Eyes cyber-security agencies, CISA, NSA, ASD ACSC, CCCS, NCSC New Zealand and NCSC United Kingdom, jointly published Careful Adoption of Agentic AI Services. The advisory addresses operators of critical infrastructure. It tells them the autonomy has been deployed faster than the governance, and the engineering layer is missing. The advisory describes the gap in the language of policy. The gap, in the language of architecture, is the absence of the substrate this article describes.

Dataiku and The Harris Poll have published the Global AI Confessions Report: Data Leaders Edition. Eight hundred and twelve senior data officers across eight jurisdictions confessed that ninety five per cent of them could not trace their AI's decisions end-to-end for a regulator audit. Five per cent could. Five per cent is the procurement signal. Above that line a vendor can defend itself in front of an inspection. Below it the named officer is on their own when the audit committee asks for the chain. The substrate that moves a deployment from below the line to above it is the same substrate this article describes. I covered the report, and the mapping from each confession to a specific filed Mickai patent claim, in a previous article: The 95% gap. Eight hundred data leaders confessed their AI cannot pass an audit. Mickai filed the engineering four weeks earlier.

The demand is in. The cloud-AI lies have been named by the people best placed to see them, the data leaders inside the largest enterprises on the planet. The substrate that fills the gap was filed at the UK IPO under one British inventor between 30 March and 4 May 2026. Thirty one applications. Nine hundred and fourteen formal claims. Trade mark UK00004373277. The conversation has moved on from whether sovereign-AI is necessary to which engineering primitives it has to ship, and the answer to that is the public patent record.

Sovereignty is an architectural commitment, not a feature

The closing point is the one I wish more people understood before they ship the next AI product. Sovereignty is not a feature that gets added in the third quarter. It is a commitment that has to be made before line one is written, because every line written without it is a line that has to be rewritten if the commitment is made later. The keys have to be hardware-bound from the first commit. The audit ledger has to be append-only and signed from the first action. The egress has to be foreclosed at the architectural layer, not gated behind a checkbox. The skills have to be clearance-gated by construction. The decisions have to be replay-able from day one. A product that bolts these on after launch ships a contract with sovereignty-shaped clauses on top of an architecture that cannot enforce them.

Mickai is what that commitment looks like in code. Signed under FIPS 204 ML-DSA-65. Filed at the UK Intellectual Property Office, named inventor Micky Irons (Mickarle Sean Junior Wagstaff-Irons), applications GB2607309.8 to GB2610422.4 on the public register at mickai.co.uk/patents. Trade mark UK00004373277. Built in the United Kingdom, run on the operator's hardware, owned by the operator, defensible by the operator, inheritable by the operator's nominated successors. Not a product feature. An architectural commitment that has been written down in claim language by the only inventor on the file.

If the unease about phones anticipating thoughts has reached your house, the question is not how to keep the phone from listening. The question is whose AI is in charge of the data the phone has already collected. The answer that the cloud-AI vendors offer is some version of theirs. The answer the architecture this article describes offers is yours. Sovereignty is the topology of the second answer. The benefits are the seven listed above. The substrate is filed, on the public register, in claim language, and shipping in code.

Micky Irons is contactable at press@mickai.co.uk. The Mickai SIOS is at mickai.co.uk. The patent portfolio is at mickai.co.uk/patents. The conversation is open.


Sources and references

  • Mickai patent portfolio, mickai.co.uk/patents, thirty one filed UK patent applications, nine hundred and fourteen formal claims, named inventor Micky Irons (Mickarle Sean Junior Wagstaff-Irons), filed at the UK IPO between 30 March and 4 May 2026.
  • GB2608804.7, MWI-PA-2026-016, Decision Lineage with ML-DSA-Signed Causal Audit Ledger.
  • GB2610413.3, MWI-PA-2026-022, Open Inter-Vendor Audit Record (OAR) Format with Cross-Vendor Trust-Bundle Federation, twenty claims.
  • GB2610414.1, MWI-PA-2026-023, Browser-Resident Offline Post-Quantum Verifier.
  • GB2608818.7, MWI-PA-2026-021, Per-Skill Clearance-Gated Execution.
  • GB2608799.9, MWI-PA-2026-013, Voice-Biometric-Gated LLM Tool Invocation.
  • GB2608806.2, MWI-PA-2026-008, PQ-Safe Attestation and ML-DSA-65 Signed Tool-Invocation Ledger.
  • GB2608803.9, MWI-PA-2026-010, Hereditas, Post-Mortem Deactivation, Transfer and Digital-Estate Primitive.
  • FIPS 204, Module-Lattice-Based Digital Signature Standard (ML-DSA), NIST, finalised August 2024.
  • Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), Australian Signals Directorate ASD ACSC, Canadian Centre for Cyber Security (CCCS), NCSC New Zealand, NCSC United Kingdom, Careful Adoption of Agentic AI Services, joint guidance, 1 May 2026.
  • Dataiku and The Harris Poll, Global AI Confessions Report: Data Leaders Edition, published 6 May 2026, eight hundred and twelve respondents across the United States, United Kingdom, France, Germany, the United Arab Emirates, Singapore, South Korea and Japan.
  • European Commission, Regulation (EU) 2024/1689 (Artificial Intelligence Act), Article 12 record-keeping obligations applying from 2 August 2026.
  • Mickai trade mark UK00004373277 (separate registration, not a patent reference).
  • Mickai prior article: The 95% gap. Eight hundred data leaders confessed their AI cannot pass an audit. Mickai filed the engineering four weeks earlier (mickai.co.uk/articles/the-95-percent-gap-dataiku-confessions-mickai-substrate).
Originally published at https://mickai.co.uk/articles/what-sovereignty-actually-means-the-benefits-of-controlling-your-own-data. If you operate in a regulated sector or want sovereign AI on your own hardware, the audit form on mickai.co.uk is the entry point.