The Financial Action Task Force (FATF) has recently released a horizon scan document regarding the impact of Artificial Intelligence (AI) and Deepfakes on Money Laundering, Terrorist Financing and Proliferation Financing (ML/TF/PF). This report highlights current and emerging AI risks and vulnerabilities through an AML/CFT/CPF lens.
The document aims to raise awareness among key stakeholders, namely national authorities, Financial Institutions (FIs), Virtual Asset Service Providers (VASPs), and Designated Non-Financial Businesses and Professions (DNFBPs), to support stronger regulatory and operational responses to the associated ML/TF/PF risks.
The full document contains typologies and red flags relevant to the sector which are not included in this summary. Operators are therefore urged to review the document and consider all relevant typologies and emerging risks as part of their ongoing risk management framework. This summary, however, provides some key takeaways. The GSC will continue to share relevant documents such as this with stakeholders in line with its primary objectives.

AI and deepfake technology can be used to impersonate individuals, spread misinformation, and facilitate fraud and other illicit activities. Deepfakes have become increasingly prevalent in recent years and can be used to bypass traditional AML/CFT/CPF controls and manipulate systems with alarming ease to commit ML/TF/PF and predicate offences such as fraud.
These technologies are increasingly used in complex cyber-enabled fraud schemes, phishing attacks, financial exploitation of vulnerable groups (such as the elderly), online romance scams, and online child sexual exploitation. By exploiting the ability to disguise, manipulate, and anonymise identities, criminals are using deepfakes to expand the complexity, scale, and reach of their operations. The document highlights that risks from deepfakes are amplified in three key areas:
Widespread adoption of facial recognition and video-based KYC creates opportunities for deepfake manipulation.
Many AML systems remain ill-equipped to detect synthetic content, and compliance frameworks have yet to address these vulnerabilities.
The interconnection of global financial systems complicates digital identification and the acceptance of remote identity verification, allowing criminals to exploit weaknesses in anti-money laundering regimes.
The rapid evolution of AI deepfakes has created a technological “arms race”. To counter this growing threat effectively, financial institutions and law enforcement agencies must not only strengthen detection capabilities but also deepen their understanding of the technology and embrace AI as a proactive tool. The document sets out suggested good practices for the public and private sectors, which Operators should review in the full report.

The document underscores the need for enhanced vigilance and continuous innovation. To stay ahead of evolving threats, stakeholders must not only strengthen safeguards but also harness emerging technologies responsibly to reinforce the integrity of the global financial system.
The sector and its threat landscape are constantly evolving, and our measures therefore require ongoing review and enhancement in light of new and emerging typologies. The GSC is committed to being agile and responsive in our approach to understanding risk, and to sharing information with the private sector to ensure a joined-up approach to AML/CFT/CPF.