Why Explainable AI (XAI) will have a major role in financial services
By Alexei Markovits, AI Team Manager, Element AI
Ground-breaking advances in artificial intelligence (AI) are changing our world all the time. AI systems are used to trade millions of financial instruments, assess insurance claims, assign credit scores and optimise investment portfolios.
Yet while we benefit from these advances, we also need a framework that helps us understand how AI arrives at its findings and suggestions. Such a framework is essential for establishing trust and for deploying those outputs to their full potential.
The processes behind AI are not always obvious. Many of the advanced machine learning algorithms that power today's AI systems are loosely inspired by the workings of the human brain, yet, unlike humans, they cannot explain their own actions or reasoning.
For this reason, an entire research field, known as Explainable AI (XAI), is now working to describe the rationale behind AI decision-making. While modern AI systems demonstrate performance and capabilities far beyond previous technologies, practical and legal-compliance concerns can inhibit their successful implementation when their outputs cannot be explained.
For organisations looking to utilise AI effectively, XAI will be a key deciding factor due to its ability to help foster innovation, enable compliance with regulations, optimise model performance, and enhance competitive advantage.
Explainable AI and its value in financial services
In financial services, explainability techniques are becoming especially valuable. As many service providers and consultants are already aware, financial data typically has a low signal-to-noise ratio, which in turn demands a strong feedback loop between user and machine.
AI solutions designed without human feedback capabilities run the risk of never being adopted, as practitioners fall back on traditional approaches built on domain expertise and years of experience. AI-powered products that are not auditable will simply struggle to enter the market, as they will face regulatory hurdles.
Market forecasting and investment management
Time series forecasting methods have seen significant uptake across financial services. They are useful for predicting asset returns, econometric data, market volatility and bid-ask spreads, but they are limited by their dependence on historical values. Because they can miss disparate, meaningful information about current conditions, using time series alone to predict the most likely value of a stock or of market volatility is very challenging.
By complementing such models with explainability methods, users can understand the key signals the model uses in its prediction, and interpret the output based on their own complementary view of the market. This then enables a real synergy between finance specialists’ domain expertise and the big data-crunching abilities of modern AI.
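As a rough illustration, the sketch below fits a simple volatility forecaster on synthetic returns and uses scikit-learn's permutation importance to surface which lagged signals the model relies on. The data, features and model here are hypothetical stand-ins, not a production pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 0.01, 1000))  # synthetic daily returns

# Candidate signals: lagged absolute returns and trailing volatility.
X = pd.DataFrame({
    "abs_ret_lag1": returns.abs().shift(1),
    "abs_ret_lag5": returns.abs().shift(5),
    "vol_20d": returns.rolling(20).std().shift(1),
})
y = returns.abs()  # next-day absolute return as a crude volatility proxy
mask = X.notna().all(axis=1)
X, y = X[mask], y[mask]

model = GradientBoostingRegressor().fit(X, y)

# Permutation importance: how much does shuffling each signal hurt the fit?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: {score:.4f}")
```

A specialist can then sanity-check whether the signals the model weights most heavily match their own reading of the market.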
Explainability techniques also enable human-in-the-loop AI solutions for portfolio selection. An investor might choose not to pick the suggested portfolio with the highest expected reward if the level of risk appears too great. A system that provides a detailed explanation of those risks, such as how far they are uncorrelated with the market, is therefore a powerful addition to investment planning tools.
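A minimal sketch of what such an explanation might contain, assuming illustrative weights and an invented covariance matrix: decompose the suggested portfolio's volatility into per-asset risk contributions and report its correlation with a market proxy.

```python
import numpy as np

# Assumed inputs: weights of a suggested three-asset portfolio
# and an illustrative covariance matrix of asset returns.
weights = np.array([0.40, 0.35, 0.25])
cov = np.array([
    [0.040, 0.006, 0.002],
    [0.006, 0.090, 0.010],
    [0.002, 0.010, 0.025],
])

port_vol = np.sqrt(weights @ cov @ weights)

# Per-asset risk contributions; these sum to the total portfolio volatility.
contributions = weights * (cov @ weights) / port_vol
for i, c in enumerate(contributions):
    print(f"asset {i}: {c / port_vol:.1%} of portfolio risk")

# Correlation with a market proxy (asset 0 here, purely as an assumption).
port_market_cov = weights @ cov[:, 0]
print(f"correlation with market proxy: "
      f"{port_market_cov / (port_vol * np.sqrt(cov[0, 0])):.2f}")
```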
Credit-scoring
Assigning or denying credit to an applicant is a consequential decision that is highly regulated to ensure fairness. The success of AI applications in this field depends on the ability to provide a detailed explanation of final recommendations.
Beyond compliance, XAI delivers value to clients and financial institutions in different ways. Clients can receive explanations that give them the information they need to improve their credit profile, while service providers can better understand predicted client churn and adapt their services accordingly.
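One simple way to generate such explanations, sketched below with synthetic data and invented feature names: in a logistic credit model, each coefficient multiplied by the applicant's standardised feature value is that feature's additive contribution to the log-odds of default, which can be read out as plain-language reason codes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["utilisation", "missed_payments", "account_age_years"]
X = rng.normal(size=(500, 3))
# Synthetic labels: high utilisation and missed payments raise default risk.
logits = 1.2 * X[:, 0] + 0.9 * X[:, 1] - 0.7 * X[:, 2]
y = (logits + rng.normal(size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain one (hypothetical) applicant's decision term by term.
applicant = scaler.transform(rng.normal(size=(1, 3)))[0]
contribs = model.coef_[0] * applicant
for name, c in sorted(zip(features, contribs), key=lambda t: -abs(t[1])):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name}: {direction} default log-odds by {abs(c):.2f}")
```

More complex, non-linear scorers would need model-agnostic attribution methods instead, but the principle of per-feature reason codes stays the same.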
XAI can also help reduce risk in credit and structured finance. For example, an explainable model might show why a given pool of assets has the best distribution to minimise the risk of a covered bond.
Explainability by design
Now that AI solutions are evolving beyond proof-of-concept to deployment at scale, it is essential to prioritise explainability, both to power human-AI collaboration and to satisfy audit, regulatory and adoption requirements. A user-centric approach and the imperative for transparency across AI systems together demand that explainability be part of the whole cycle, from the initial design of a solution through to system integration and everyday use.