
How should financial institutions approach the challenge of ensuring LLM outputs are accurate, explainable and auditable for regulatory purposes?

Richard Doherty, Wealth & Asset Management Leader at Publicis Sapient:
Accuracy, explainability and auditability are non-negotiable for regulatory acceptance. Institutions should be building evaluation pipelines alongside model development, where outputs are continuously benchmarked against business logic, edge cases and regulatory requirements.
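To make that concrete, a minimal sketch of such an evaluation pipeline might look like the following. The test-case structure, scoring rule and pass threshold are illustrative assumptions, not any institution's actual tooling.

```python
# Minimal sketch of an evaluation pipeline that benchmarks LLM outputs
# against curated business-logic and regulatory test cases.
# `call_model` is a placeholder for whatever LLM client the institution uses.

from dataclasses import dataclass

@dataclass
class TestCase:
    prompt: str            # the input scenario, e.g. a client query
    required_terms: list   # terms a compliant answer must mention
    forbidden_terms: list  # terms that would breach policy if present

def score(output: str, case: TestCase) -> float:
    """Score 1.0 only if every required term appears and no forbidden term does."""
    text = output.lower()
    ok_required = all(t.lower() in text for t in case.required_terms)
    ok_forbidden = not any(t.lower() in text for t in case.forbidden_terms)
    return 1.0 if (ok_required and ok_forbidden) else 0.0

def run_suite(call_model, cases: list, pass_threshold: float = 0.95) -> bool:
    """Run every test case; fail the release if the pass rate drops below threshold."""
    results = [score(call_model(c.prompt), c) for c in cases]
    pass_rate = sum(results) / len(results)
    print(f"pass rate: {pass_rate:.2%}")
    return pass_rate >= pass_threshold
```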
Retrieval-augmented generation (RAG), fine-tuning on institution-specific data and post-hoc explainability tooling are part of a growing toolkit to ensure LLMs operate within transparent and reviewable boundaries.
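A rough sketch of the retrieval step in a RAG setup is below; the keyword-overlap retriever and the `generate` placeholder are simplifications for illustration, not a production design.

```python
# Minimal sketch of RAG retrieval: the model answers only from documents the
# institution has approved, which keeps the evidence behind each answer reviewable.

def retrieve(query: str, documents: dict, k: int = 3) -> list:
    """Rank approved documents by simple token overlap with the query."""
    q_tokens = set(query.lower().split())
    ranked = sorted(
        documents.items(),
        key=lambda item: len(q_tokens & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_with_sources(query: str, documents: dict, generate) -> dict:
    """Build a grounded prompt and return both the answer and its sources."""
    sources = retrieve(query, documents)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    prompt = f"Answer using only the sources below.\n{context}\n\nQuestion: {query}"
    return {"answer": generate(prompt), "sources": [doc_id for doc_id, _ in sources]}
```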
The key is implementing continuous monitoring systems that track model performance in production environments, flagging deviations from expected behaviour before they impact business operations or compliance.
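One simple form that monitoring can take is sketched below; the baseline figure, tolerance and rolling window are assumptions, and the scorer and alerting hook stand in for institution-specific components.

```python
# Minimal sketch of production monitoring: keep a rolling window of quality
# scores for live LLM responses and raise a flag when they drift away from
# the baseline established during offline evaluation.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 200):
        self.baseline = baseline          # expected average score from evaluation
        self.tolerance = tolerance        # how far the rolling average may fall
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> bool:
        """Record one scored response; return True if behaviour has deviated."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False                  # not enough data yet
        rolling = sum(self.scores) / len(self.scores)
        return rolling < self.baseline - self.tolerance

# usage: if monitor.record(score_response(output)) is True, alert the model-risk team
```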

“Accuracy, explainability, and auditability are non-negotiable”

RICHARD DOHERTY, WEALTH & ASSET MANAGEMENT LEADER, PUBLICIS SAPIENT
Most importantly, you need comprehensive audit trails that regulators can follow and understand. This means building transparency into your AI systems from the ground up, not trying to retrofit it later.
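As an illustration of what building that trail in from the start might look like, the sketch below writes a structured record for every model call; the field names are illustrative, not a regulatory standard.

```python
# Minimal sketch of an audit record written for every model call, so a reviewer
# can later reconstruct what was asked, what evidence was used and which model
# version answered.

import json, hashlib, datetime

def write_audit_record(log_file, prompt, retrieved_ids, output, model_version, user_id):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,        # pin the exact model behind the answer
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "retrieved_documents": retrieved_ids,  # evidence the answer was grounded in
        "output": output,
    }
    log_file.write(json.dumps(record) + "\n")  # append-only JSON lines, one per call
    return record
```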
Simon Thompson, Head of AI, ML and Data Science, GFT:
LLMs produce long, complex outputs, in sharp contrast to traditional ML approaches that produce binary decisions or regression values. Because of this, they can be very challenging to evaluate. Another issue is that LLMs are a moving target: API providers are under pressure to constantly innovate, producing new models seemingly every week.
The output of the new model will probably be better than the output of the old model, but it will definitely be different because that difference is the point of the innovation.
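One way teams handle that moving target is to replay a fixed prompt suite against the current and candidate models before switching, and route the largest differences to a reviewer. The sketch below uses a crude textual similarity as a proxy; it is an illustration, not GFT's method.

```python
# Minimal sketch of a version-upgrade check: run the same prompts through the
# old and new models and surface the cases where the answers diverge most.

from difflib import SequenceMatcher

def compare_versions(prompts, old_model, new_model, top_n: int = 10):
    """Return the prompts whose outputs changed the most between model versions."""
    diffs = []
    for prompt in prompts:
        old_out, new_out = old_model(prompt), new_model(prompt)
        similarity = SequenceMatcher(None, old_out, new_out).ratio()
        diffs.append((similarity, prompt, old_out, new_out))
    diffs.sort()                      # least similar (biggest change) first
    return diffs[:top_n]
```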
One approach is to use LLM output as a booster for human output rather than as the finished product. The human is still the accountable entity, but, in GFT's experience, human productivity can be massively boosted when people are supported by LLM-enabled tools.
This approach also allows other AI methods to be used in concert with LLMs to validate output and provide extra information to the human.
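A simple sketch of that pairing is shown below: independent checks run over the LLM draft and their findings travel with it to the accountable reviewer. The specific check is an illustrative stand-in for whatever rule-based or statistical validators an institution runs.

```python
# Minimal sketch of attaching independent validation to an LLM draft before it
# reaches the human reviewer, who remains accountable for the final output.

import re

def check_no_unsupported_figures(draft: str, source_text: str) -> list:
    """Flag any number in the draft that does not appear in the source material."""
    draft_numbers = set(re.findall(r"\d+(?:\.\d+)?", draft))
    source_numbers = set(re.findall(r"\d+(?:\.\d+)?", source_text))
    return sorted(draft_numbers - source_numbers)

def prepare_for_review(draft: str, source_text: str) -> dict:
    """Bundle the draft with validation findings; the human makes the final call."""
    return {
        "draft": draft,
        "unsupported_figures": check_no_unsupported_figures(draft, source_text),
        "requires_human_signoff": True,   # the person remains the accountable entity
    }
```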