As artificial intelligence evolves from a buzzword to a boardroom priority, the financial industry finds itself grappling with a deeper question: how can powerful new technologies be effectively integrated without undermining trust? For banks, insurers and regulators, the challenge is less about AI’s capabilities and more about the quality of judgment that surrounds its use.
Today, models can analyse millions of data points and deliver decisions in milliseconds. But for a sector built on fiduciary duty and regulatory scrutiny, faster decisions come with heightened expectations. At the Finance Middle East AI Future Forum in Dubai, this tension played out across a high-level panel featuring senior executives from institutions such as Presight, IFZA, Policybazaar.ae, KPMG, Heriot-Watt University and Alpheya.
What emerged was not a clash of opinions but a sobering consensus: while AI offers real gains, especially in fraud detection, credit underwriting and operational productivity, the path to scaled adoption will be shaped by data integrity, explainability and regulatory harmony.
Fixing the data before the model
In practice, most firms are already encountering the limits of their own infrastructure. Despite AI’s theoretical potential, many institutions struggle with legacy systems, siloed data and unclear governance structures.
Before building models, financial organisations must ensure their data pipelines are trustworthy, auditable and well-integrated. According to industry surveys, over 60% of failed AI initiatives can be traced back to data quality or data architecture issues. Without harmonised and explainable inputs, risk models can produce spurious correlations and unfair decisions.
This is particularly problematic in high-stakes domains like lending and insurance, where AI is increasingly used to personalise risk. Unlike traditional systems, machine learning models adapt over time, which means that oversight, version control and explainability must be embedded at every level.
Navigating a patchwork of global regulation
While AI adoption accelerates, the regulatory frameworks surrounding it are still coalescing. The EU’s AI Act introduces a tiered approach to governance, categorising systems by risk and imposing disclosure requirements. Meanwhile, the US continues to regulate AI through existing sector-specific frameworks. In the Gulf, regulators are experimenting with sandbox environments and innovation accelerators, inviting innovation while preserving oversight.
Matin Jouzdani, Partner at KPMG, warned that this regulatory fragmentation adds complexity. “Even taking that approach, another regulator will pop up and publish something, and then you’re back to the races again,” he said. As a result, many multinational financial firms choose to adhere to the strictest common denominator, usually the EU standard, even in less-regulated markets.

This prudent approach ensures compliance but can also increase costs and slow innovation. It also raises the question: who ultimately sets the benchmark for responsible AI? Institutions themselves must now take the lead in setting internal standards.
Explainability
One of the clearest themes from the panel was the centrality of explainability. In finance, where trust is currency, models must not only be accurate; they must be defensible.

“The client wants to understand how and why a decision aligns with their goals,” said Alexis Calla, Chief AI Officer at Alpheya. “We spend a lot of time creating those narrative layers just to make sure we are capable of not just explaining what the model has been doing, but transforming that into something investors will understand.”
These narrative layers act as interpreters between technical outputs and human reasoning. Whether it’s a risk assessment or a portfolio optimisation, the explanation must resonate with both compliance officers and clients.
For Jonathan Doolan, Head of Finance at IFZA, explainability begins at deployment. “Going through door number one, making sure everything is correct, then moving through door number two, and you keep expanding. That way, it is controlled, understandable for your organisation and for your team,” he said.

This kind of gated deployment helps ensure model stability and trust. However, it also slows down the rollout, especially when staff are still learning how to interpret AI-generated outcomes.
AI in practice
Despite its complexity, AI is already delivering measurable returns. In fraud prevention and AML, machine learning models are helping firms detect anomalies that would be missed by traditional rules-based systems. Neeraj Gupta, CEO of Policybazaar.ae, described how voice analytics are helping his team identify inconsistencies between initial and follow-up calls.

“There is 1% fraud which can really wipe out 80% of the book, because you’re probably paying $1 for $100,000 coverage; that’s where the gaps are. So, what we are trying to do with AI is evaluate whether the information the consumer is sharing is validated against his documents, with a live feed,” said Gupta.
In credit modelling and underwriting, generative AI is enabling firms to simulate thousands of scenarios and run stress tests in a fraction of the time it used to take.
These efficiencies, however, also require new types of oversight. Financial institutions must now account for the risks of “model convergence”, where widespread use of similar tools leads to uniform market behaviour. “How do you make sure that not everybody ends up using the same machine, the same tools, so you end up with every machine doing the same stuff?” warned Calla. “And the more you rely on them, the higher the systemic risk.”

Human oversight and organisational culture
Technology aside, most panellists agreed that AI’s most significant constraint is not technical but cultural. Building effective AI systems requires collaboration across multiple functions: data science, legal, compliance, HR and front-line business units.
Doolan noted that financial organisations can only scale AI when staff are empowered and educated. That includes not only risk teams but also senior leadership. Governance frameworks must evolve from static checklists to continuous, collaborative reviews.
Meanwhile, regulators are beginning to mandate human oversight for high-risk models. In the EU, the AI Act requires documented assessments for high-impact use cases, including the right to contest algorithmic decisions.
“AI can support decisions,” said Dr Jelena Janjusevic, Deputy Global Head of Accountancy, Economics and Finance, Edinburgh Business School, Heriot-Watt University Dubai. “But it cannot replace reasoning. It cannot replace ethics.”

Andrew Reakes, Director of Financial Services at Presight, echoed the sentiment. He said: “The organisations that we’re working with are tied back to the regulation. Particularly in this region, there’s a big focus on upskilling, having the right competencies within the organisation, which allow for the acceleration. However, transparency and control are fundamental to allowing organisations to expand and accelerate moving forward.”

“Another aspect that we’re really seeing with the organisations is the importance of transparency. Understanding where the results came from, what they were derived from, and having that connection between the result and, ultimately, the data, etc., is what drives the results,” he added.
“Additionally, some people have the view that AI is there to replace people. Fundamentally, that’s completely the opposite of what we’re seeing. We’re seeing it in the organisations that are really adopting it at scale, with transparency and control. It’s about how AI supports and augments the workforce to actually get to a result that’s driving return on investment.”
The final takeaway from the session was one of cautious optimism. AI is no longer a novelty in financial services. It is a capability that, when utilised effectively, enhances efficiency, improves decision quality and reduces costs.
But its adoption must be thoughtful, collaborative and transparent.
Whether it’s credit scoring, fraud detection, client onboarding or market forecasting, AI should augment human decision-making, not replace it.
The road ahead will require not only new tools but new mental models. Firms will need to rethink how they measure success, train their employees, engage with regulators and serve clients. And that, the panel made clear, is not a technical challenge but a leadership one.
