
Why Explainable AI (XAI) will have a major role in financial services

By Alexei Markovits, AI Team Manager, Element AI

Ground-breaking advances in artificial intelligence (AI) are changing our world all the time. AI systems are used to trade millions of financial instruments, assess insurance claims, assign credit scores and optimise investment portfolios.

Yet while we may benefit from these advances, we also need a framework that helps us understand how AI arrives at its findings and suggestions. It is essential so we can establish trust and deploy those outputs to their full potential.

The processes behind AI are not always obvious. Many of the advanced machine learning algorithms that power today's AI systems are inspired by processes in the human brain, yet unlike humans they cannot explain their own actions or reasoning.

For this reason, an entire research field is now working towards describing the rationale behind AI decision-making. This is known as Explainable AI (XAI). While modern AI systems demonstrate performance and capabilities far beyond previous technologies, practicality and legal compliance can inhibit successful implementation.

For organisations looking to utilise AI effectively, XAI will be a key deciding factor due to its ability to help foster innovation, enable compliance with regulations, optimise model performance, and enhance competitive advantage.

Explainable AI and its value in financial services

In financial services, explainability techniques are becoming especially valuable. Financial data typically has a low signal-to-noise ratio, and many service providers and consultants already know this well; it demands a strong feedback loop between user and machine.

AI solutions designed without human feedback capabilities risk never being adopted, because traditional approaches built on years of domain expertise and experience will persist. AI-powered products that are not auditable will struggle to enter the market at all, as they will face regulatory hurdles.

Market forecasting and investment management

Time series forecasting methods have grown significantly across financial services. They are useful for predicting asset returns, econometric data, market volatility and bid-ask spreads, but they are limited by their dependence on historical values. Because they can miss disparate, meaningful information about current conditions, using time series alone to predict the most likely value of a stock or of market volatility is very challenging.

By complementing such models with explainability methods, users can understand the key signals the model uses in its prediction, and interpret the output based on their own complementary view of the market. This then enables a real synergy between finance specialists’ domain expertise and the big data-crunching abilities of modern AI.
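One common way to surface "the key signals the model uses" is permutation importance: shuffle one input feature at a time and measure how much the prediction error grows. The sketch below applies this to a toy linear return-forecasting model; all feature names and data are synthetic assumptions for illustration, not part of any real system the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs to a toy return-forecasting model (illustrative only).
feature_names = ["lag1_return", "lag5_return", "volatility", "volume_change"]
n = 500
X = rng.normal(size=(n, 4))
# The synthetic target depends mostly on lag1_return and volatility.
y = 0.8 * X[:, 0] + 0.4 * X[:, 2] + 0.1 * rng.normal(size=n)

# Fit ordinary least squares as a stand-in for the forecasting model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: shuffle one feature at a time and record
# how much the prediction error increases relative to the baseline.
importance = {}
for j, name in enumerate(feature_names):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[name] = mse(Xp, y, w) - baseline

for name, imp in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {imp:.4f}")
```

A finance specialist can then compare the ranked features against their own view of the market: if the model leans on a signal they know to be stale, that is grounds to override or retrain it.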

Explainability techniques also enable human-in-the-loop AI solutions for portfolio selection. An investor might find that they choose not to pick the suggested portfolio with the highest reward if the level of risk appears too great. On the other hand, a system that provides a detailed explanation of the risks, such as how they could be uncorrelated with the market, is a powerful addition to investment planning tools.
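A "detailed explanation of the risks" in a portfolio tool often takes the form of a risk decomposition: showing what fraction of total portfolio variance each holding contributes. The sketch below computes standard marginal risk contributions for a hypothetical three-asset portfolio; the weights and covariance matrix are invented assumptions, not real market figures.

```python
import numpy as np

# Hypothetical portfolio (illustrative weights and covariances).
assets = ["equities", "bonds", "commodities"]
w = np.array([0.5, 0.3, 0.2])           # assumed portfolio weights
cov = np.array([[0.040, 0.002, 0.010],  # assumed annualised covariance matrix
                [0.002, 0.010, 0.001],
                [0.010, 0.001, 0.030]])

port_var = w @ cov @ w
# Risk contribution of asset i: w_i * (cov @ w)_i / total variance.
# Contributions sum to 1, so each is a share of overall portfolio risk.
contrib = w * (cov @ w) / port_var

for name, c in zip(assets, contrib):
    print(f"{name:12s} {c:.1%} of portfolio variance")
```

Presented alongside expected reward, a breakdown like this lets an investor see *why* a suggested portfolio is risky, and decide whether that risk is correlated with exposures they already hold.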

Credit-scoring

Assigning or denying credit to an applicant is a consequential decision that is highly regulated to ensure fairness. The success of AI applications in this field depends on the ability to provide a detailed explanation of final recommendations.

Beyond compliance, the value of XAI is seen for the client and financial institution in different ways. Clients can receive explanations that give them the information they need to improve their credit profile, while service providers can better understand predicted client churn and adapt their services.
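For a linear credit model, the explanation a client receives is often a list of "reason codes": the features that pushed their score furthest towards denial, ranked by contribution. The minimal sketch below assumes a toy linear model in which each contribution is the weight times the applicant's deviation from the population mean; every weight, feature name and value here is an illustrative assumption, not a real scoring model.

```python
# Toy linear credit model (all weights and values are illustrative assumptions).
# Positive contribution = pushes the applicant towards higher risk.
weights = {"payment_history": -2.0, "utilisation": 1.5, "account_age_years": -0.5}
population_mean = {"payment_history": 0.95, "utilisation": 0.30, "account_age_years": 7.0}

def reason_codes(applicant, top_k=2):
    """Rank features by how much they push this applicant's score towards denial."""
    contrib = {
        f: weights[f] * (applicant[f] - population_mean[f])
        for f in weights
    }
    return sorted(contrib, key=contrib.get, reverse=True)[:top_k]

applicant = {"payment_history": 0.80, "utilisation": 0.85, "account_age_years": 2.0}
print(reason_codes(applicant))  # e.g. short account history and high utilisation
```

The same contributions serve both audiences the article mentions: the client learns which behaviours to change to improve their profile, while the institution can aggregate reason codes across its book to spot patterns in risk or churn.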

Through the use of XAI, credit-scoring models can also help reduce risk. For example, an XAI model might explain why a particular pool of assets has the best distribution to minimise the risk of a covered bond.

Explainability by design

As AI solutions evolve beyond proof-of-concept to deployment at scale, it has become essential to prioritise explainability, both to power human-AI collaboration and to satisfy audit, regulatory and adoption requirements. A user-centric approach and the imperative for transparency across AI systems together reinforce the need for explainability at every stage of the cycle, from the initial design of a solution through to system integration and everyday use.
