
Presight Director: AI is Becoming Core Financial Infrastructure

AI is no longer an experimental capability at the margins of the organisation, but a regulated component of core financial infrastructure, writes Andrew Reakes, Director of Financial Services, Presight AI.

AI Meets Finance: A Presight Director on the Defining Shift

The financial services industry has always been one of the most risk-sensitive sectors in the economy, and necessarily so given that it manages savings, allocates capital and underpins economic stability. Prudence is embedded in its operating model. At the same time, artificial intelligence is accelerating change across the sector, offering efficiency gains, sharper risk insights and new pathways to growth that institutions cannot ignore.

Industry research indicates that more than 80% of financial institutions now use AI across risk, compliance or operational functions, with many systems embedded in live production environments and influencing customer outcomes daily.

I see this moment as a classic “irresistible force paradox,” where an unstoppable force meets an immovable object. A highly regulated, risk-conscious industry built on caution and control is confronting a rapidly advancing technology that rewards speed, scale and experimentation. The tension between these forces has defined much of the recent AI journey in financial services.

The Central Bank of the UAE’s recently announced Guidance Note on the ‘Consumer Protection and Responsible Adoption and Use of Artificial Intelligence and Machine Learning by Licensed Financial Institutions’ represents a decisive response to that tension, making clear that AI must be treated as regulated financial infrastructure and fully integrated into enterprise risk management.

For financial institutions operating in the UAE, this is not a marginal policy refinement but a structural governance shift that reframes how innovation and prudence must coexist.

AI Governance Plays Catch-Up

For several years, AI deployment has often evolved faster than governance architecture.

Models were developed within business units to solve discrete problems, validated for technical performance and then scaled into production environments. Oversight frameworks were introduced in parallel and sometimes after deployment decisions had already been made. While this approach accelerated innovation, it also created fragmentation in model inventories, inconsistent documentation standards and uneven explainability mechanisms. The regulatory framework now makes clear that such fragmentation is incompatible with the expectations of a mature and closely supervised financial system.

Boards and senior management are explicitly accountable for AI systems and their outcomes. Institutions must maintain comprehensive model inventories, classify systems according to risk exposure, conduct bias testing and ensure transparency, particularly where decisions materially affect customers.

Customers must have access to meaningful information and the ability to challenge or seek clarification on automated outcomes. Institutions must retain the capacity to suspend or recalibrate systems if risks emerge. In a regulated financial environment, accuracy alone is no longer sufficient; AI must also be demonstrably accountable.

Shift in Thinking on AI

In our work with financial institutions across the UAE, executive discussions are already shifting in tone and depth. There is a growing focus on whether AI is genuinely improving performance, and whether governance architecture is robust enough to withstand regulatory scrutiny.

Institutions are examining where models are deployed, who owns them, how they are monitored and whether decisions can be clearly explained under challenge. This evolution reflects a growing recognition that AI is no longer an experimental capability at the margins of the organisation, but a regulated component of core financial infrastructure.

The implications are significant: AI risk cannot remain confined to innovation or technology teams. It must be embedded within conduct, credit, operational and cybersecurity risk frameworks, supported by reporting structures that enable meaningful oversight rather than passive visibility.

Human oversight must correspond to consumer impact and operate within live decision environments, with clearly defined authority to intervene where required. Third-party AI providers do not dilute institutional accountability, and vendor governance frameworks must reflect the same discipline applied internally.

Some may interpret this governance reset as a brake on innovation, but that would be a misreading. Clear regulatory expectations reduce ambiguity and give institutions the confidence to scale AI within defined guardrails. When governance is engineered into systems from inception, institutions gain resilience and strategic flexibility. Models that are explainable and well documented are easier to defend before regulators, easier to challenge internally and easier to extend across new products and markets. Growth built on AI that cannot be explained or governed will not withstand sustained scrutiny.

Responding effectively requires deliberate action across executive leadership, risk and technology functions. Institutions should conduct comprehensive assessments of all AI systems deployed across the organisation, whether developed internally or sourced externally, and risk-rate them according to customer impact and regulatory exposure.
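To make the idea of risk-rating an AI inventory concrete, here is a minimal illustrative sketch in Python. The field names, tier labels and scoring rule are my own assumptions for illustration, not anything prescribed by the Central Bank's Guidance Note:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIModelRecord:
    """One entry in an enterprise AI model inventory (illustrative schema)."""
    name: str
    owner: str                  # accountable business owner
    customer_impact: int        # 1 (none) .. 5 (material, e.g. credit decisions)
    regulatory_exposure: int    # 1 (internal only) .. 5 (directly supervised)
    third_party: bool           # sourced from an external vendor

def risk_tier(m: AIModelRecord) -> RiskTier:
    """Illustrative tiering: the higher of the two scores drives the rating,
    and externally sourced models are never rated below MEDIUM."""
    score = max(m.customer_impact, m.regulatory_exposure)
    if score >= 4:
        return RiskTier.HIGH
    if score >= 2 or m.third_party:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Example: a vendor-supplied credit scoring model rates HIGH
model = AIModelRecord("credit_score_v2", "Retail Credit", 5, 5, True)
print(risk_tier(model).value)  # prints "high"
```

The point of such a schema is less the scoring formula than the discipline it enforces: every system has a named owner, an impact rating and a vendor flag before it reaches production.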

Governance responsibilities must be clearly defined at executive level and embedded within enterprise risk management frameworks. Fairness testing, bias mitigation and explainability standards should be integrated directly into development pipelines so that models are defensible by design rather than retrospectively justified.
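As a sketch of what "integrated into development pipelines" can mean in practice, a fairness check can run as a hard gate before deployment. The metric here (demographic parity gap) and the 0.1 threshold are illustrative assumptions; institutions would choose metrics and limits appropriate to the product and its supervisory context:

```python
def demographic_parity_gap(approved, group):
    """Absolute difference in approval rates between two groups.
    `approved` is a list of 0/1 outcomes; `group` labels each case 'A' or 'B'."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [a for a, gr in zip(approved, group) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

def pipeline_gate(approved, group, threshold=0.1):
    """Illustrative CI gate: block deployment if the fairness gap
    on a validation set exceeds the threshold."""
    return demographic_parity_gap(approved, group) <= threshold
```

Wiring a check like this into the build pipeline is what makes a model "defensible by design": the evidence of testing exists before deployment, not as a retrospective justification.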

Human-in-the-loop mechanisms should operate within high-impact workflows with clearly articulated authority to override or recalibrate outputs where necessary.
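One common pattern for such a mechanism, sketched here purely as an illustration (the score bands and field names are assumptions, not a prescribed design), is to route only confident model outputs automatically and hold the ambiguous middle band for a reviewer who has explicit authority to override:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str     # "approve" | "decline" | "pending_review"
    decided_by: str  # "model" or a reviewer identifier

def route(score: float, low: float = 0.4, high: float = 0.8) -> Decision:
    """High-confidence scores are decided automatically; the middle
    band is held for human review. Band edges are illustrative."""
    if score >= high:
        return Decision("approve", "model")
    if score < low:
        return Decision("decline", "model")
    return Decision("pending_review", "model")

def human_override(d: Decision, reviewer: str, outcome: str) -> Decision:
    """A reviewer with defined authority replaces the model outcome,
    and the record shows who made the final decision."""
    return Decision(outcome, reviewer)
```

Recording `decided_by` on every outcome is what turns oversight from passive visibility into an auditable trail of who exercised authority, and when.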

Unified AI Operating Models

Across the UAE and wider GCC, leading institutions are already moving from isolated AI initiatives toward unified operating models that combine sovereign data infrastructure, structured model governance and embedded oversight.

At Presight, our focus has been on enabling this transition in practice, working alongside banks and regulators to industrialise AI responsibly so that scale is matched by structure. The objective is not to slow innovation, but to ensure that growth is auditable, transparent and resilient under scrutiny.

The UAE has positioned itself as both a financial and technology leader. The governance shift now underway ensures that ambition is supported by accountability. Institutions that redesign their AI integration strategies around transparency, risk discipline and human oversight will not only meet regulatory expectations but will strengthen trust with customers, regulators and investors alike. In doing so, they will convert AI from a perceived black box into a disciplined engine of accountable growth for regional finance.

