Nuhaa AI Perspective · No. 01 · April 2026

The Five Strategic Questions Every GCC Board Should Ask About AI

By Bassam AlKharashi · Founder and CEO, Nuhaa AI · 8 min read

Most board conversations about artificial intelligence in the Gulf today revolve around a small set of familiar prompts. What are our competitors doing? Which large language model should we adopt? How much should we be spending? When will we see returns? These are reasonable questions. They are also the wrong ones.

The questions that reveal an organisation's true readiness for AI are not technical. They are strategic. And in my experience advising banks, ministries, and family conglomerates across the Kingdom, they are almost never asked at the board table — not because directors lack the appetite, but because the prevailing narrative around AI has been framed by vendors, consultants, and the technology press. That narrative privileges tools over judgement, models over institutions, and announcements over outcomes.

What follows are the five questions every board in the GCC should be asking. None of them require a data scientist to answer. All of them require honesty.

1. What decisions do we want AI to make, and which ones will we never delegate?

The single most clarifying exercise a board can undertake is to draw a line down the middle of a page. On the left: the decisions the institution is willing to let a machine make autonomously. On the right: the decisions that will always remain with a human, no matter how capable the model becomes.

In a sovereign bank, the right column is long. Credit decisions above a threshold. Suspicious activity reviews. Customer hardship assessments. In a ministry, it is longer still. Eligibility determinations for citizen services. Procurement approvals. Anything touching national security.

The point of the exercise is not to constrain ambition. It is to surface a question most leadership teams have never explicitly answered: what is the institution for? AI does not change that answer. It only forces you to state it.

An AI strategy that does not begin with a clear inventory of non-delegable decisions is not a strategy. It is a procurement plan.

2. Where does our data actually live, and who is allowed to see it?

This sounds like an IT question. It is not. It is a governance question dressed in technical clothing.

Most large organisations in the Kingdom do not have a confident answer to either half of it. Data lives across legacy core systems, departmental SharePoints, regulator submissions, vendor environments, and — increasingly — in the inboxes and laptops of senior staff who have learned to extract what they need from official systems and work with it locally. The "single source of truth" is a fiction maintained for audit purposes.

Before an institution can responsibly deploy AI at any scale, the board needs an honest map. Not a target architecture diagram. A map of where data actually sits today, who has access to it, where it crosses borders, and which regulatory regime governs each category. The Personal Data Protection Law (PDPL), SAMA regulations, NCA controls, and sector-specific frameworks each impose different obligations. AI systems trained or operated on ungoverned data inherit the weaknesses of the underlying estate — and amplify them.

3. What is our risk appetite for being wrong in public?

Every AI system will produce wrong answers. The interesting question is not whether they will — it is what happens when one of those wrong answers reaches a customer, a citizen, a journalist, or a regulator.

Boards in the Kingdom tend to underestimate this risk by analogising AI errors to existing operational errors: a misrouted payment, a dropped call, a delayed approval. The analogy fails. AI errors are different in three ways. They are produced at scale. They carry the implicit endorsement of the institution that deployed the model. And they are extraordinarily difficult to explain after the fact, because the institution itself often cannot reconstruct why the system produced the answer it did.

The board's job is to set, in advance and in writing, the appetite for this category of failure. Not zero — zero is not a coherent appetite. But a clear articulation of where the institution will tolerate a small amount of AI error in exchange for material productivity gains, and where it will not. Customer-facing chatbots in retail banking? Probably yes, with guardrails. Automated credit adverse action notices? Probably no. The middle ground is where the difficult judgements live, and the board owns them.

4. Do we have the talent to govern this — not just to build it?

The talent conversation around AI in the GCC has been dominated by a single question: can we hire enough engineers? It is the wrong question.

The shortage that will constrain AI adoption in regulated GCC institutions over the next five years is not a shortage of engineers. It is a shortage of people who can sit across the table from engineers and ask informed, sceptical questions on behalf of the business — risk officers who understand model behaviour, compliance leads who can read a model card, internal auditors who can challenge a validation report, business heads who can articulate what "good" looks like for an AI system in their domain.

These people are scarcer than engineers. They are also more expensive to develop, because the discipline does not yet have a recognised certification path in the region. Boards that take this seriously are investing now in two-year programmes to build governance fluency among existing senior staff. Boards that do not will discover, eventually, that they have built capability they cannot supervise.

5. What will we measure, and what will we stop measuring?

Most AI programmes in the region report on activity metrics. Number of pilots launched. Number of use cases identified. Number of staff trained. None of these tell the board whether the programme is working.

The metrics that matter are harder to collect and slower to move. Cycle time on the workflows that AI was supposed to compress. Error rates on the decisions AI was supposed to improve. Cost per transaction in the operations AI was supposed to make leaner. Staff retention in the roles AI was supposed to make more interesting rather than redundant.

Equally important is what the board chooses to stop measuring. Pilot counts encourage pilot proliferation, which is the single largest source of waste in enterprise AI programmes today. Boards that retire vanity metrics and replace them with operating metrics signal, more clearly than any strategy document, what they actually expect the programme to deliver.

The measure of an AI programme is not how many models it has shipped. It is how many decisions it has changed.

A closing note

None of these questions require a board to become technical. They require a board to become deliberate. The institutions in the Kingdom that will compound an advantage from AI over the next decade are not those with the largest model deployments or the flashiest announcements. They are those whose boards have answered these five questions in writing, revisited the answers annually, and held the executive team accountable to them.

The cost of not asking them is not a failed pilot. It is an institution that has spent five years and a great deal of money becoming busier without becoming better.

Bassam AlKharashi is the founder of Nuhaa AI. He has spent twenty years building and advising AI programmes inside Saudi Arabia's most regulated organisations — from sovereign banks to ministries.