Financial Services, Artificial Intelligence and the future of regulation

By Aleksi Helakari, Head of Technical Office, EMEA - Spirent.


The Financial Services sector seems like it would be one of the first to seize on Artificial Intelligence and Automation. These technologies have great potential to help financial services firms in a variety of areas, but most importantly, regulation. For this sector, regulation represents multiple huge and complex obligations, often backed by punishing fines and penalties for non-compliance. Yet when it comes to innovating in this area, the sector is coming up against some serious obstacles.

The sector is one of the most heavily regulated in the world. Firms have to deal with a whole range of legal obligations: data protection, Know Your Customer (KYC), consumer protection, anti-money laundering (AML), fair competition, open banking, risk management and many more. These may clash with one another or require their own boutique forms of collection and documentation. Similarly, there is a galaxy of different bodies and standards to deal with, from industry frameworks like PCI DSS to regulators like the UK's Financial Conduct Authority. Then there are national regulations, imposed and enforced by individual sovereign governments; financial services organisations may be subject to many of these if they operate in different countries. Finally, there are supranational regulations like the EU's General Data Protection Regulation, with which organisations must comply whether or not they're based, or even primarily operate, in the zone of compliance.

Due to the multinational nature of many financial services firms, they often find themselves facing compliance conflicts in multiple jurisdictions. In 2021, Credit Suisse was fined £350 million by both the Swiss regulator and the UK's Financial Conduct Authority for failing to properly document loan approvals connected to a Mozambique corruption scandal. In 2020, Goldman Sachs was issued a $2.9 billion fine by a collection of global regulators, again for failing to document and flag suspicious transactions, in that case related to Malaysian state funds. That same year, Swedish and Estonian regulators fined a Swedish bank nearly €400 million for deficiencies in its AML procedures.

It’s a lot, to say the least. Given that, it’s quite understandable why the sector wants to use AI and automation to help comply with this metric tonne of rules, laws and regulations. 

The potential for AI and automation in RegTech

The promise of AI and automation for the financial services sector is bold and may solve some dogged problems that the sector has struggled with for years. 

They'll be able, for example, to use AI models to generate gap analyses showing where their compliance is falling short, or to employ automation to bolster their risk scoring and analyses. They'll be able to introduce continuous, automated monitoring for regulatory reporting requirements, documenting and flagging transactions as the need arises and providing authoritative logs of activity for later reporting to regulators. The same goes for auditing, which AI and automation will let them do automatically, collecting relevant data from documents and generating reports as and when required.
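To make that concrete, here is a deliberately minimal sketch of continuous monitoring feeding an authoritative log. It is a toy, not any vendor's product: the rule names, thresholds, country codes and transaction fields are all illustrative assumptions, and the hash-chaining simply makes the log tamper-evident so it can plausibly serve as an audit trail.

```python
import hashlib
import json

# Assumed reporting rule: flag large transactions and high-risk jurisdictions.
REPORT_THRESHOLD = 10_000          # illustrative large-transaction threshold
HIGH_RISK = {"XX", "YY"}           # placeholder high-risk country codes

def flag(tx):
    """Return the list of rule names this transaction trips."""
    hits = []
    if tx["amount"] >= REPORT_THRESHOLD:
        hits.append("large_transaction")
    if tx.get("country") in HIGH_RISK:
        hits.append("high_risk_jurisdiction")
    return hits

def append_audit(log, tx, hits):
    """Append a tamper-evident entry: each record hashes its predecessor."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"tx_id": tx["id"], "flags": hits, "ts": tx["ts"], "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

audit_log = []
for tx in [
    {"id": "t1", "amount": 250,    "country": "GB", "ts": 1},
    {"id": "t2", "amount": 12_500, "country": "GB", "ts": 2},
    {"id": "t3", "amount": 900,    "country": "XX", "ts": 3},
]:
    hits = flag(tx)
    if hits:
        append_audit(audit_log, tx, hits)

print([e["tx_id"] for e in audit_log])  # ['t2', 't3']
```

The point of the chained hashes is that a regulator (or an auditor) can verify after the fact that no flagged entry was quietly removed or rewritten, which is exactly the "documenting that it happened" burden the article describes.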

Similarly, a lot of the complex technical investigation that compliance may require can be simplified with the basic use of natural language processing. In the same way that ChatGPT can be used to find and explain potentially complex answers, natural language processing can help firms deal with rapid and multifaceted changes in regulation. It can, for example, keep abreast of regulatory changes with automated alerts, suggest relevant corporate policy changes and analyse dense legal texts with comparative ease.
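A real regulatory-change watcher would sit a language model behind it, but the basic shape of "automated alerts on changed regulation" can be shown with nothing more than text diffing and keyword matching. The clause text and watch terms below are invented for illustration.

```python
import difflib

# Illustrative compliance-relevant terms to watch for in changed clauses.
WATCH_TERMS = {"retained", "reported", "within", "consent"}

old_text = [
    "Records must be retained for five years.",
    "Suspicious activity must be reported promptly.",
]
new_text = [
    "Records must be retained for seven years.",
    "Suspicious activity must be reported within 24 hours.",
]

def changed_lines(old, new):
    """Return only the added/updated lines from a unified diff."""
    diff = difflib.unified_diff(old, new, lineterm="")
    return [l[1:] for l in diff if l.startswith("+") and not l.startswith("+++")]

alerts = [
    line for line in changed_lines(old_text, new_text)
    if WATCH_TERMS & {w.strip(".,").lower() for w in line.split()}
]
for a in alerts:
    print("ALERT:", a)
```

Both amended clauses trip an alert here; in practice the keyword set would be replaced by a model that can also summarise the change and suggest which internal policy it touches.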

It will likely be a real coup for anti-money laundering and fraud detection efforts too. Automation will streamline the data collection required for processes like Know Your Customer (KYC) checks through automated identity verification and background screening. Pattern recognition will be able to learn and spot the patterns of fraud and money laundering while cutting down on the false positives that manual, threshold-driven investigations produce.
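The false-positive point can be illustrated with a deliberately simple sketch: score each transaction against the customer's own history rather than a single blunt threshold. The customers, amounts and cut-off below are made up, and a production system would use a trained model rather than a z-score.

```python
import statistics

# Made-up per-customer transaction histories.
history = {
    "alice": [100, 120, 90, 110, 105, 95],
    "bob":   [5_000, 4_800, 5_200, 5_100, 4_900, 5_050],
}

def anomaly_score(customer, amount):
    """How many standard deviations this amount sits from the customer's norm."""
    past = history[customer]
    mu = statistics.mean(past)
    sigma = statistics.stdev(past) or 1.0  # guard against zero variance
    return abs(amount - mu) / sigma

# A blunt "flag everything over 1,000" rule would flag every one of Bob's
# routine payments; per-customer scoring flags only departures from pattern.
print(round(anomaly_score("bob", 5_000), 2))    # routine for Bob: low score
print(round(anomaly_score("alice", 5_000), 2))  # wildly unusual for Alice
```

The design choice being illustrated is the one the article gestures at: learned, per-entity baselines replace global thresholds, which is where the reduction in false positives comes from.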

AIn’t cheap

Yet there's a sobering point to be made here: AI is not something you can just bolt on top of existing infrastructure, as nice as that would be. Many organisations, far beyond the financial services sector, believe it is. What they don't recognise is that it will take a serious restructuring of the way they handle data at the most basic levels.

Much regulation in financial services isn't just about driving a particular outcome or behaviour; it's about documenting that the outcome has been pursued. As such, regulatory departments need to find, store and integrate tonnes of proof points around transactions, trades and personal information.

Any RegTech AI will have to reach into every part of a financial services organisation in order to collect that necessary data. That is going to require uniformity in basic data collection practices, something few financial services organisations currently maintain.

This will likely become more complex the larger and better established a given company is. It may have expanded into different countries, opened new departments, grown its workforce and so on. In doing so, it will have developed divergent practices and accumulated an array of legacy technologies, all of which collect and handle data in different ways. These processes and technologies often don't integrate with one another, creating silos of data without any centralised control. There's also a question of politics. Teams and department heads become accustomed and attached to their tools, devices and processes, and often resist wholesale changes to their well-worn practices.

Regulating the RegTech

There's a whole other layer of complexity to add on top of this: AI will soon be subject to its own regulations. The oncoming EU AI Act will compel organisations that use AI to transparently explain and document how a given system works and comes to its decisions. The most tightly regulated category of AI systems is known as "high risk", and it will likely cover many of the sector's potential applications. Uses such as KYC or AML checks will likely sit firmly within that high-risk category, be subject to thorough checks and, in some cases, require a "fundamental rights impact assessment". Failure to comply could result in fines of up to 6% of worldwide annual turnover.

The benefits of AI and automation are well worth pursuing in the service of compliance for financial services. However, those that do want to take advantage of them have to think of this as a deeper process than buying a new piece of software. In order to build something new, organisations have to make room for it. Any real attempt at that will involve tearing out old legacy technology and re-architecting and automating basic processes to clear a foundation on which an AI can be built. Attempting to avoid or circumvent that basic fact will result in failure or, at best, AI deployments that fall far short of the objectives they were intended to meet.
