From AI Awareness to AI Fluency
By Bassam AlKharashi · Founder and CEO, Nuhaa AI
There is a particular kind of executive conversation about artificial intelligence that has become depressingly common in the Kingdom over the past three years. It begins with the executive expressing strong support for AI as a strategic priority. It continues with broad statements about transformation, productivity, and the future of the institution. It ends, almost without exception, when a specific question is asked. Which decisions in your business should be assisted by AI in the next eighteen months? What would good look like? How would you know it was working? At that point, the conversation either pivots to the consultants in the room or politely concludes.
This is not a story about ignorance. The executives in question are, by any reasonable measure, well-informed. They read the same publications, attend the same conferences, and consume the same vendor briefings as their international peers. They have a confident grasp of what AI is. What they lack is something different and more demanding: the ability to make confident strategic decisions about AI in the context of their own institution.
The distance between these two states — between AI awareness and AI fluency — is the single most consequential leadership gap in the region today. It is also the most under-invested in. Boards spend on training programmes that produce more awareness. They spend on consulting engagements that produce more analysis. They rarely spend on the structured, deliberate practice that produces fluency.
Awareness is what you can recognise. Fluency is what you can decide. The institutions that will compound an advantage from AI are those whose senior leaders can decide.
What fluency actually looks like
It is worth being concrete about what an AI-fluent executive can do that an AI-aware executive cannot.
An AI-fluent executive can read a model evaluation report and form an independent view on whether the model is fit for the proposed deployment, rather than deferring entirely to the technical team that produced the report.
An AI-fluent executive can sit across the table from a vendor pitching a "transformative" capability and ask the three or four questions that reveal whether the capability is genuinely transformative for this institution, in this regulatory environment, against this existing process — or whether it is a generic capability being marketed without that context.
An AI-fluent executive can articulate, without referring to a slide, the specific decisions in their own business that are candidates for AI assistance, the specific decisions that are not, and the reasoning behind both lists.
An AI-fluent executive can challenge a proposed AI investment on the grounds that the bottleneck in the relevant business process is not the decision the AI is intended to assist, but a different decision earlier or later in the chain — and can produce evidence for that challenge.
An AI-fluent executive can recognise the difference between a successful pilot and a candidate for production deployment, and can distinguish between an AI failure that is operationally embarrassing and one that is institutionally dangerous.
None of these capabilities require coding. None require advanced mathematics. All of them require structured exposure to enough real AI deployments — the successful ones and, more importantly, the failed ones — to develop the kind of pattern recognition that distinguishes fluency from awareness.
Why most leadership-development programmes do not produce fluency
The standard leadership-development response to "we need to be more capable on AI" is to commission an executive education programme. These programmes, even the best of them, produce awareness, not fluency. They are well suited to teaching what AI is, how it works at a conceptual level, what the major capabilities are, and what the risks look like. They are poorly suited to teaching how to make difficult AI decisions in the specific context of a specific institution under specific regulatory constraints.
Fluency is not a knowledge problem. It is a judgement problem. And judgement is developed in the way it has always been developed: by repeatedly making decisions in real conditions, observing the consequences, and adjusting. There is no syllabus that substitutes for this. There is no certification that signals it. There is only the accumulated weight of decisions made and observed over time.
This presents an awkward problem for institutions in the Kingdom. The most senior leaders are the people who most need fluency, and they are also the people with the least available time and the highest cost of being publicly wrong while developing it.
A practical path
The institutions I have seen succeed at developing genuine AI fluency in their senior teams have done four things in combination.
They have created a small, recurring forum for AI decisions. Monthly or fortnightly, a defined group of senior leaders meets to make actual decisions on actual proposals — go or no-go on a pilot, scope reduction or scope expansion, escalation or de-escalation of risk. The forum is not advisory. The decisions are real. The consequences are tracked. Over twelve to eighteen months, the participants develop genuine judgement, because they have made real decisions and seen what happened.
They have invested in second-line fluency before first-line fluency. Risk officers, compliance leads, internal auditors, and general counsel have been given priority access to the kind of structured exposure that produces fluency, on the theory that fluent challenge from the second line is the most reliable way to develop fluent decisions in the first line. This is contrarian. Most institutions invest first-line first. The second-line-first institutions consistently produce better outcomes.
They have made deliberate use of post-mortems. Every AI deployment that is shut down, scaled back, or significantly re-architected is the subject of a structured post-mortem attended by the senior team. The point is not to assign blame. The point is to convert the institution's own experience into the raw material from which fluency is built. Institutions that do not perform structured post-mortems are denying themselves the most valuable training data they have.
They have resisted the urge to outsource fluency to consultants. External advisors can accelerate fluency development. They cannot substitute for it. Institutions whose senior leaders rely on consultants to make AI decisions on their behalf produce two outcomes: they make worse decisions in the short term, because the consultants have less context than the leadership team they are advising; and they make worse decisions in the long term, because the leadership team never develops the fluency it needs. The consultants are not the problem. The dependency is.
The cost of staying merely aware
Institutions in the Kingdom that remain at the awareness stage for the next three to five years will not necessarily fail visibly. They will commission programmes, launch pilots, and produce announcements at the same rate as their fluent peers. From the outside, the difference may not be obvious for some time.
The difference will show up in the proportion of those programmes that produce institutional change rather than institutional activity. The aware institutions will produce activity. The fluent institutions will produce change. Over a five-year horizon, the gap between activity and change compounds into a gap that is no longer recoverable by adding more activity.
Awareness was the right investment for the last decade. Fluency is the only investment worth making for the next one.
A note for boards
The board's role in closing this gap is specific and not delegable. The board must demand that the executive team produce evidence of fluency — not awareness — at every AI programme review. The relevant evidence is not the number of training hours completed, the number of pilots launched, or the number of certifications obtained. The relevant evidence is the quality of the questions the executive team asks about its own AI programme, the speed and confidence with which it makes hard decisions, and the willingness with which it shuts down work that is not paying off.
If the board sees that quality, that speed, and that willingness in its executive team's AI conversations, the institution is becoming fluent. If it does not, the institution is still merely aware. The two states are not adjacent. And the second is no longer enough.
Bassam AlKharashi is the founder of Nuhaa AI. He has spent twenty years building and advising AI programmes inside Saudi Arabia's most regulated organisations — from sovereign banks to ministries.
