Sovereignty Is Not an Afterthought
By Bassam AlKharashi · Founder and CEO, Nuhaa AI · 10 min read
There is a pattern I have watched repeat itself across at least a dozen large AI programmes in the Kingdom over the past three years. It runs as follows.
A bank, a ministry, or a national champion launches an ambitious AI initiative. The initial conversations are about possibility — what new products, what efficiency gains, what citizen experience improvements. Architecture diagrams are drawn. Vendors are engaged. Pilots are scoped. Six to nine months in, someone from the second line of defence — a compliance officer, a data protection lead, a CISO — asks a question that should have been the first question. Where is this data going? On whose hardware will it be processed? Under whose jurisdiction does it sit when the model is being trained?
What follows is rarely a clean answer. More often it is a re-architecture exercise that adds months to the timeline, materially reshapes the cost structure, and in several cases I have personally observed, kills the programme entirely.
This is not a failure of regulation. PDPL, the SAMA Cyber Framework, the NCA's Essential Cybersecurity Controls, and the various sector-specific instruments are clear, well-drafted, and broadly aligned with international good practice. This is a failure of sequencing. Sovereignty was treated as a compliance overlay to be applied at the end. It needed to be a design constraint applied at the beginning.
The single most expensive moment in an AI programme is the moment a sovereignty question is asked for the first time after the architecture has been chosen.
What sovereignty actually means in practice
The word "sovereignty" has been so heavily used in regional AI discourse that it has begun to lose meaning. Vendors brand offerings as sovereign. Cloud providers announce sovereign regions. Conference panels debate sovereign AI without defining the term. For an institution actually building AI, the question is concrete and decomposable.
Sovereignty in an AI context has at least four dimensions, and any honest design must answer each separately.
Data residency. Where is the data physically stored, both at rest and in transit, during training, fine-tuning, inference, and logging? "In-Kingdom" is not a sufficient answer; the specific data centre, the specific region within a hyperscaler, and the specific replication topology all matter to the regulator, and they should matter to you.
Operational control. Who can access the underlying systems? A model hosted in-Kingdom but operated by a foreign engineering team with administrative credentials is, from a sovereignty perspective, only partially sovereign. The control plane matters as much as the data plane.
Legal jurisdiction. Whose courts hear a dispute? Which country's law governs the contract? Under which discovery regime can the data be compelled? These are questions a CIO cannot answer alone; they require general counsel at the table from the first architecture meeting.
Strategic optionality. If the relationship with the underlying provider becomes untenable — for commercial, geopolitical, or technical reasons — what does the institution actually own, and what is portable to an alternative? Sovereignty without portability is a deferred dependency.
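The four dimensions above decompose into questions a design review can check off one by one. The sketch below is illustrative only: the dimension names, the phrasing of the questions, and the review helper are assumptions of mine, not a regulatory instrument or a standard.

```python
# Illustrative only: the four sovereignty dimensions as a design-review
# checklist. Names and question wording are the author's prose restated;
# the checklist structure itself is a hypothetical convenience.

DIMENSIONS = {
    "data_residency": "Where is data stored, at rest and in transit, during "
                      "training, fine-tuning, inference, and logging?",
    "operational_control": "Who holds administrative credentials over the "
                           "control plane, and under whose oversight?",
    "legal_jurisdiction": "Which law governs the contract, and under which "
                          "discovery regime can the data be compelled?",
    "strategic_optionality": "What does the institution own outright, and "
                             "what is portable to an alternative provider?",
}

def unanswered(answers: dict) -> list:
    """Return the dimensions a proposed design has not yet answered."""
    return [d for d in DIMENSIONS if not answers.get(d, "").strip()]

# A design that answers residency alone still carries three open questions.
gaps = unanswered({"data_residency": "In-Kingdom, single region, no "
                                     "cross-border replication."})
print(gaps)
```

The point of structuring it this way is the one the text makes: each dimension must be answered separately, and an empty answer to any one of them is a gap, not a detail to be resolved later.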
Why retrofitting sovereignty rarely works
Once an AI system has been built against a particular set of services, retrofitting sovereignty is not a configuration change. It is an architecture change. Models trained against one provider's infrastructure cannot, in most cases, be lifted unmodified onto another. Vector stores, fine-tuning pipelines, evaluation harnesses, observability — every layer of the stack tends to embed assumptions about the underlying environment.
This is why the institutions that are succeeding with AI in the Kingdom share a common pattern: they made sovereignty decisions before they made model decisions. They selected the regulatory perimeter first, designed an architecture that could operate entirely within that perimeter, and only then chose the models, frameworks, and tools that could live inside it. The order matters more than the choices.
The institutions struggling with AI in the Kingdom share the opposite pattern. They selected models and tools because those were the most exciting choices, then attempted to wrap a regulatory perimeter around the architecture they had inadvertently built. The result is brittle, expensive, and, in several cases, ultimately unshippable.
Sovereignty is not anti-innovation
The argument I most often encounter against this sequencing is that it slows innovation. The frontier moves quickly, the reasoning goes, and an institution that waits for sovereign-compatible versions of every capability will fall years behind.
This argument confuses two different things. It confuses experimentation with deployment. An institution can — and should — experiment freely with frontier capabilities in carefully constructed sandboxes that hold no production data and serve no production decisions. That is a different activity from operating an AI system that touches customers, citizens, or regulated decisions. The first does not require sovereignty. The second does, absolutely.
The institutions that move fastest in regulated environments are not those that ignore sovereignty constraints. They are those that have invested in two parallel tracks: a fast experimentation track without sovereignty constraints and without production exposure, and a deliberate deployment track that operates entirely within a sovereign perimeter. Each track has its own velocity and its own purpose. Confusing them is what produces the failure pattern I described at the beginning.
A note on cost
Sovereign architectures cost more on a per-unit basis than borderless ones. This is true and worth acknowledging. It is also a narrower truth than the headline suggests.
The total cost of a sovereign architecture is the per-unit cost plus the cost of every retrofit, every regulatory finding, every customer trust event, every loss of strategic optionality. When this fuller calculation is performed honestly, sovereign architectures often come out ahead — sometimes dramatically — for institutions in regulated sectors. The institutions that report the highest sovereignty costs are usually those that are pricing only the per-unit number and ignoring everything else.
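The fuller calculation can be sketched in a few lines. Every figure below is hypothetical, chosen only to show how a lower per-unit price can invert once one-off costs, retrofits, regulatory findings, trust events, lost optionality, are priced in.

```python
# Hypothetical figures for illustration only; nothing here is sourced data.

def total_cost(per_thousand: int, thousands: int, one_off: int) -> int:
    """Per-unit cost times volume, plus one-off costs such as retrofits,
    regulatory findings, customer trust events, and lost optionality."""
    return per_thousand * thousands + one_off

thousands = 10_000  # ten million requests over the planning horizon

# Borderless: cheaper per unit, but carrying a later re-architecture bill.
borderless = total_cost(per_thousand=10, thousands=thousands, one_off=200_000)

# Sovereign: 50% more per unit, sovereignty designed in, no retrofit.
sovereign = total_cost(per_thousand=15, thousands=thousands, one_off=0)

print(borderless)  # 300000
print(sovereign)   # 150000
```

With these assumed numbers the "expensive" sovereign architecture costs half as much in total, which is precisely the inversion the honest calculation tends to produce for regulated institutions.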
Sovereignty is not a tax on AI. It is the price of building AI that is allowed to keep operating.
What this means for boards
The practical implication for boards is simple. When the executive presents an AI strategy, the first question should not be "what models will we use?" It should be "what is the sovereignty perimeter inside which this strategy will operate, and is that perimeter signed off by the second line of defence and general counsel?"
If that question cannot be answered cleanly in the first ten minutes of the conversation, the strategy is not yet ready for the board. It is a procurement plan in search of a sovereignty story. The Kingdom has had enough of those.
Bassam AlKharashi is the founder of Nuhaa AI. He has spent twenty years building and advising AI programmes inside Saudi Arabia's most regulated organisations — from sovereign banks to ministries.