Deepfake technology is increasingly being exploited to defraud financial institutions, at an average cost of $603,000 per incident.
In one notable case, scammers used AI-generated voice cloning to impersonate a company director, convincing a branch manager to transfer $35 million.
The Financial Industry Regulatory Authority (FINRA) has observed a rise in fraud involving synthetic identification documents and deepfake technology, used both to open new brokerage accounts and to take over existing ones.
In response, financial institutions are adopting advanced detection technologies, including voice biometrics and real-time fraud detection systems powered by machine learning algorithms.
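To make the idea of real-time fraud detection concrete, here is a minimal, purely illustrative sketch: a monitor that flags transfers whose amounts deviate sharply from a rolling baseline. This toy z-score check is a hypothetical stand-in for the far more sophisticated machine-learning systems banks actually deploy; the class name, window size, and threshold are all assumptions for illustration.

```python
from collections import deque
from statistics import mean, stdev


class TransactionMonitor:
    """Toy real-time anomaly check (illustrative only).

    Flags a transfer when its amount deviates from a rolling baseline
    by more than `z_threshold` standard deviations. Real systems use
    many more features (device, geolocation, voiceprint, behavior).
    """

    def __init__(self, window=50, z_threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of past amounts
        self.z_threshold = z_threshold

    def score(self, amount):
        # Require a minimal baseline before scoring; until then, just learn.
        if len(self.history) < 10:
            self.history.append(amount)
            return 0.0
        mu, sigma = mean(self.history), stdev(self.history)
        z = 0.0 if sigma == 0 else (amount - mu) / sigma
        self.history.append(amount)
        return z

    def is_suspicious(self, amount):
        return abs(self.score(amount)) > self.z_threshold


# Build a baseline of routine transfers, then test an outsized one.
monitor = TransactionMonitor()
for amt in [1200, 980, 1500, 1100, 1300, 900, 1250, 1400, 1050, 1150]:
    monitor.is_suspicious(amt)  # baseline-building; nothing flagged yet
print(monitor.is_suspicious(35_000_000))  # a $35M transfer trips the check
```

In practice a flag like this would trigger a step-up check, such as voice-biometric verification or a manual callback, rather than an automatic block.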
Regulatory bodies are also issuing guidance to mitigate AI-related cybersecurity risks. For instance, the New York State Department of Financial Services recommends that financial institutions update risk assessments annually to address AI threats like deepfakes and maintain robust response plans.
As deepfake technology becomes more accessible, the financial sector must remain vigilant, investing in advanced detection tools and fostering a culture of awareness to protect against these sophisticated threats.
