29 January 2026: FATF releases AI and Deepfake Horizon Scan


The Financial Action Task Force (FATF) has recently released a horizon scan document regarding the impact of Artificial Intelligence (AI) and Deepfakes on Money Laundering, Terrorist Financing and Proliferation Financing (ML/TF/PF). This report highlights current and emerging AI risks and vulnerabilities through an AML/CFT/CPF lens.

 

The document aims to raise awareness among key stakeholders, namely national authorities, Financial Institutions (FIs), Virtual Asset Service Providers (VASPs) and Designated Non-financial Businesses and Professions (DNFBPs), to support stronger regulatory and operational responses to the associated ML/TF/PF risks.

 

The full document contains typologies and flags relevant to the sector, which are not included in this summary. As such, Operators are urged to review the document and consider all relevant typologies and emerging risks as part of their ongoing risk management framework. This summary sets out some key takeaways from the document. The GSC will continue to share relevant documents such as this with stakeholders in line with its primary objectives.


A Rapidly Evolving Landscape

AI and deepfake technology can be used to impersonate individuals, spread misinformation, and facilitate fraud and other illicit activities. Deepfakes have become increasingly prevalent in recent years and can be used to bypass traditional AML/CFT/CPF controls and manipulate systems with alarming ease to commit ML/TF/PF and predicate offences such as fraud.

These technologies are increasingly used in complex cyber-enabled fraud schemes, phishing attacks, financial exploitation of vulnerable groups (such as the elderly), online romance scams, and online child sexual exploitation. By exploiting the ability to disguise, manipulate, and anonymise identities, criminals are using deepfakes to expand the complexity, scale, and reach of their operations. The document highlights that risks from deepfakes are amplified in three key areas:

 

Growing reliance on biometric verification

Widespread adoption of facial recognition and video-based KYC creates opportunities for deepfake manipulation.

Persistent lag in technology adoption

Many AML systems remain ill-equipped to detect synthetic content, and compliance frameworks have yet to address these vulnerabilities.

Challenge of cross-border complexity

The interconnection of global financial systems complicates digital identification and the acceptance of remote identity verification, allowing criminals to exploit weaknesses in anti-money laundering regimes.

 

Detection and good practices

The rapid evolution of AI deepfakes has created a technological “arms race”. To effectively counter the growing threat of AI-enabled deepfakes, financial institutions and law enforcement agencies must not only strengthen detection capabilities but also increase their understanding of the technology and embrace AI as a proactive tool. The document suggests the following good practices for the public and private sectors:

 

Training

  • Distinguishing between authentic and falsified content now requires advanced technical expertise;
  • Educating compliance teams, working closely with technological service providers;
  • Training for the public and private sectors to understand the broader context and risks;
  • The threat is constantly evolving, so training should be kept up to date.

Cooperation

  • Participants in the June 2025 FATF roundtable emphasised the importance of partnerships between public authorities, the private sector, and operational agencies;
  • The creation of robust public–private partnerships, combined with collaboration with think tanks and local networks, is indispensable for sharing knowledge, exchanging good practices, and improving detection capabilities.

Adoption

  • Detection can also be enhanced by adopting technological support tools capable of identifying inconsistencies in video and audio content, enhancing multi-layered verification and more;
  • AI may also be used by reporting entities to improve the effectiveness of CDD measures;
  • Prosecutors must also receive specialised training, not only in the use of AI but also in detecting and mitigating AI-generated forgeries during fraud proceedings.


Conclusion/Overview

The document underscores the need for enhanced vigilance and continuous innovation. To stay ahead of evolving threats, stakeholders must not only strengthen safeguards but also harness emerging technologies responsibly to reinforce the integrity of the global financial system.

The sector and its threat landscape are constantly evolving, so our measures require ongoing review and enhancement in response to new and emerging typologies. The GSC is committed to being agile and responsive in our approach to understanding risk, and to sharing information with the private sector to ensure a joined-up approach to AML/CFT/CPF.