Regulating AI in the financial sector: recent developments and main challenges


Financial institutions have been using artificial intelligence (AI) for many years. Three AI use cases are worth highlighting: customer support chatbots; fraud detection, including for purposes of anti-money laundering and combating the financing of terrorism (AML/CFT); and credit and insurance underwriting. The use of AI for chatbots and fraud detection is not new, but the technology has improved significantly in recent years. In credit and insurance underwriting, financial institutions are increasingly using AI for, among other things, credit scoring, valuation of collateral and the assessment of unstructured information from multiple sources to predict insurance risks more accurately and set premiums.

To address AI-related risks, international and national authorities have introduced (cross-)sectoral AI-specific guidance. This guidance outlines policy expectations around common themes, including reliability/soundness, accountability, transparency, fairness and ethics. More recent guidance has placed greater emphasis on data privacy/protection, safety and security. With the increasing attention on generative AI (gen AI), sustainability and intellectual property are also being covered in the latest AI guidance. These themes are interconnected, and there may be trade-offs between them when developing or upgrading AI guidance. Regardless, the guidance generally allows for a proportionate or risk-based approach to the application of the policy expectations.

This paper has been produced by the Bank for International Settlements (BIS).