
Can AI win the battle against financial crime? 

The fight against financial crime demands continuous innovation, collaboration and a commitment to responsibly leveraging cutting-edge technologies.


Financial crime is a cat-and-mouse game where criminals will always try new ways to bypass the system. Therefore, continuous innovation and R&D by FIs and tech providers are crucial to prevent criminals from winning every time.

A 2023 Kroll survey of 400 senior leaders and risk professionals across four continents found that to counter a potential uptick in financial crime risks, 67% of respondents globally are planning to invest more in technology. Nearly half of the respondents (49%) cited data integrity as the biggest challenge when implementing new technologies.

Of course, technology itself evolves, opening new avenues to crime. For example, criminals may start using generative artificial intelligence (AI) to falsify documents and KYC information, and banks therefore need to stay on top of such innovations.

“AI’s real power is its capacity to analyse vast amounts of data and identify its unique patterns,” noted Daoud Abdel Hadi, Lead Data Scientist, PDM, Eastnets. “With this ability, financial institutions have recognised AI can play a part in their transaction monitoring system to scrutinise historical transactions and activities to identify any signs of suspicious behaviour.” Indeed, in a recent survey by EY, 99% of the financial services leaders surveyed reported that their organisations were deploying AI in some manner.

The two most common AI techniques for fraud detection are machine learning, which learns from past events to identify fraud in new transactions, and anomaly detection, which spots unusual behaviour patterns and deviations. Combining the two allows FIs to detect a wider range of suspicious patterns, both previously seen and entirely new.
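The combination described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not a production system: the "supervised" side is a toy threshold learned from labelled history, the anomaly side is a z-score against the entity's own behaviour, and all figures are hypothetical.

```python
# Minimal sketch of combining the two techniques (illustrative, stdlib only).
from statistics import mean, stdev

# 1) "Supervised" side: a toy stand-in for a trained classifier. Amounts at
#    or above the average of historically fraudulent amounts are fraud-like.
past_fraud_amounts = [9500, 12000, 15000]          # hypothetical labelled data
fraud_threshold = mean(past_fraud_amounts)          # "learned" from history

# 2) Anomaly side: flag deviations from the entity's own behaviour.
def is_anomalous(amount, history, z_cutoff=3.0):
    """Flag amounts more than z_cutoff standard deviations from the norm."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(amount - mu) / sigma > z_cutoff

def flag_transaction(amount, entity_history):
    """Alert if either technique fires: known fraud pattern OR unusual behaviour."""
    return amount >= fraud_threshold or is_anomalous(amount, entity_history)

entity_history = [100, 120, 90, 110, 105]
print(flag_transaction(95, entity_history))     # routine amount -> False
print(flag_transaction(14000, entity_history))  # fraud-like and anomalous -> True
```

The OR-combination is what widens coverage: the first check catches patterns resembling known fraud, while the second catches behaviour no rule or label has anticipated.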

The Central Bank of the UAE’s regulations emphasise having strict controls and processes to ensure fraudulent activities are detected and investigated thoroughly. AI allows FIs to monitor and detect such events with much higher precision and accuracy. Therefore, less time needs to be wasted on false alerts, giving investigators more time to analyse complex cases comprehensively.

Traditional methods of preventing fraud have predominantly relied on pre-defining static rules to monitor activities that follow a particular pattern. A simple example of this is:

If the transaction amount sent to a high-risk country exceeds 10,000 dollars, then raise an alert.
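Expressed as code, such a static rule might look like the following. This is a hypothetical sketch: the country codes and the $10,000 threshold are illustrative placeholders, not drawn from any regulator's actual list.

```python
# A static, pre-defined rule of the kind described above (hypothetical sketch).
HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder country codes, not a real list
THRESHOLD_USD = 10_000

def rule_alert(amount_usd: float, destination_country: str) -> bool:
    """Raise an alert if the amount sent to a high-risk country exceeds $10,000."""
    return destination_country in HIGH_RISK_COUNTRIES and amount_usd > THRESHOLD_USD

print(rule_alert(15_000, "XX"))  # True: over threshold, high-risk destination
print(rule_alert(15_000, "DE"))  # False: destination not on the list
```

Note how rigid this is: the rule knows nothing about the sender's history, so a customer who routinely sends such amounts triggers the same alert as one who never has.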

“Unfortunately, rule-based methods operate in a one-size-fits-all approach, which doesn’t reflect the true complexity of behaviour and crime,” said Hadi in an exclusive interview with Finance Middle East. “As a consequence, we’ve seen many of these systems suffer from large volumes of false alerts that need to be manually investigated. This can be extremely time-consuming and costly.”

Hadi explained that behaviour will vary from entity to entity and shift from time to time, so the real advantage of AI is its adaptability and ability to use past events as context when making a decision. “If an entity makes a $10,000 transaction, but it has regularly made similar transactions in the past, then perhaps this shouldn’t be flagged as an alert,” he stated. “Leveraging historical data makes AI a lot more precise than traditional methods. Not only that, but rules are ‘pre-defined’, meaning only the scenarios that are defined will be monitored.”

And he is right! With anomaly detection techniques, transaction monitoring is not confined to pre-defined scenarios. Instead, it can identify when unusual behavioural changes have occurred, making it more effective at detecting unanticipated crimes early.
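Hadi's $10,000 example can be made concrete with a per-entity check: the same transfer is anomalous for one customer but routine for another. The histories and z-score cut-off below are hypothetical, chosen purely for illustration.

```python
# Hedged sketch of per-entity context: the entity's own history decides
# whether a $10,000 transaction is unusual (illustrative, stdlib only).
from statistics import mean, stdev

def is_unusual(amount, history, z_cutoff=3.0):
    """Compare a transaction to the entity's own historical behaviour."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

regular_big_spender = [9800, 10100, 9900, 10300, 10000]  # routinely ~$10k
small_retail_account = [40, 55, 30, 60, 45]              # routinely ~$50

print(is_unusual(10_000, regular_big_spender))   # False: consistent with history
print(is_unusual(10_000, small_retail_account))  # True: extreme deviation
```

A static rule would alert on both customers identically; the contextual check suppresses the false alert for the first while still catching the second.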

How can AI and machine learning contribute?

Fraud is complex and involves numerous factors. Detecting such complexity using traditional methods can be imprecise and requires regular tuning to stay up to date. This is where AI has the upper hand. Trained on thousands of examples, a model becomes capable of instantly detecting patterns that resemble fraud, even when the case is not identical.

“Machine learning works within the framework of a feedback loop,” stated Hadi. “First, it is trained on historical events labelled as fraudulent and not fraudulent. Then, it is used as part of a transaction monitoring system that analyses any incoming or outgoing transactions and predicts whether they are suspicious or not.”

If suspicious, a human investigates and classifies it as fraudulent or not. This information is then fed back into the machine learning algorithm, which is scheduled to train itself on new, up-to-date data. After training, the AI model will automatically update what it considers to be suspicious based on human feedback.
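The feedback loop Hadi describes can be sketched as follows. This is a toy stand-in, not a real transaction monitoring system: the "model" is a one-number threshold learner, and the amounts and verdicts are invented for illustration.

```python
# Hedged sketch of the feedback loop: train on labelled history, score
# incoming transactions, fold investigator verdicts back into retraining.

def train(labelled_history):
    """Toy learner: a cut-off halfway between the largest amount labelled
    legitimate and the smallest amount labelled fraudulent."""
    fraud = [amt for amt, is_fraud in labelled_history if is_fraud]
    legit = [amt for amt, is_fraud in labelled_history if not is_fraud]
    return (max(legit) + min(fraud)) / 2

def predict(amount, threshold):
    """True means 'suspicious: route to a human investigator'."""
    return amount > threshold

history = [(100, False), (150, False), (9000, True), (11000, True)]
threshold = train(history)

if predict(8000, threshold):                 # flagged as suspicious
    verdict = False                          # investigator rules it legitimate
    history.append((8000, verdict))          # feed the verdict back
    threshold = train(history)               # scheduled retraining

print(predict(8000, threshold))  # False: after feedback, no longer alerts
```

The key property is the last line: once the human verdict is folded into retraining, the model stops raising the same false alert, which is exactly the self-updating behaviour the paragraph above describes.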

Hadi reckons there is no easy, fully automated way to stay ahead of emerging threats. “AI can be used to detect abnormalities in behaviour, or detect signs of fraudulent activity based on the historical examples it has seen before,” he stated. “Human intervention is needed to provide feedback to AI models. In the case AI wrongly detects fraudulent activity, this allows the error to be corrected and the model adapted.”

In the rare case that a new fraud pattern emerges without registering as abnormal (something fraudsters are always trying to achieve), there is a risk such behaviour goes undetected. Hence, continuous research and collaborative effort are needed to avoid such cases.

Call for collaboration

There are several ways financial institutions, regulators and tech providers can collaborate to tackle financial crime:

  1. Sharing information on suspicious entities that have had fraudulent activities in the past allows financial institutions to keep a closer eye on those trying to hide their activities by spreading their funds across different institutions.
  2. Comparing tactics and typologies with regulators to scrutinise crime patterns helps expose emerging methods used by fraudsters and money launderers, allowing FIs to enhance their strategies for mitigating such activities.  
  3. A Regulatory Sandbox, which acts as a centralised environment with real (masked) data that providers can experiment with to ensure their AI models are effective, would allow providers to overcome the challenge of needing real-world data to test and validate their monitoring systems. 

Trends to watch 

The push for AI in financial crime prevention brings concerns about its opaque nature, necessitating transparent and explainable policies for trust and compliance. “Explainability is now a key AI trend, ensuring models justify their predictions,” noted Hadi. “Generative AI, especially large language models (LLMs), is set to revolutionise transaction monitoring and investigations by efficiently processing previously underutilised unstructured data and summarising information to aid investigators.” Graph neural networks (GNNs) are also emerging, blending network visualisations with neural networks to uncover fraud patterns in transaction flows across networks.

The fight against financial crime demands continuous innovation, collaboration and a commitment to responsibly leveraging cutting-edge technologies. By embracing these principles, the financial industry can better protect itself and its stakeholders from the ever-present threat of illicit activities, ultimately safeguarding the integrity of the global financial system.
