

Alexey Utkin, Principal Solution Consultant, Finance Practice at global tech consultancy DataArt

If you are exposed to even a tiny bit of media, you are likely to have heard stories about how AI – artificial intelligence – is here and set to transform industries across the board. And it is true that we are certainly seeing the dawn of a new AI summer. With an ever-growing number of practical applications for deep learning, many enterprises now realise they need to consider AI and ML seriously. The technologies supporting AI/ML have advanced rapidly over the past few years, and with that, the amount of data to be analysed has grown tremendously and will only continue to do so.

One of the fundamental problems of modern AI/ML models is that it’s hard, often impossible, to explain why a model is doing what it’s doing. In this way, the models are similar to a human brain. The MIT Technology Review posits in its article “The Dark Secret at the Heart of AI” that no one truly knows how the most advanced algorithms work, which is a looming issue for the widespread adoption of AI technologies. (https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/).

So, what are the underlying explainability challenges, and what is the role of domain knowledge in applying AI/ML (machine learning) in the enterprise, particularly in a regulated industry such as finance? This is a vital consideration for anyone looking to use these powerful new technologies to drive their company's products, services and processes.

Explainable AI (XAI) is supposedly a transparent artificial intelligence (AI) whose actions can be relied upon and comprehended by humans. It is held up as different to “black-box” technology – the type of AI which does its own thing, and no human quite understands how it got there.

Explainability is, however, often illusory in practice. Explanations, like the models themselves, are simplifications of reality and hence are never perfect. The financial sector holds huge datasets, and the complex mathematical models used take a lot of time, effort, domain knowledge, skill and brain capacity for a human to adequately understand. I remember a case in my practice where the FCA was auditing an investment bank's credit risk model, produced by one of the bank's top quants. The model was documented with full scientific diligence, but the auditor struggled to fully comprehend it due to the levels of embedded stochastic maths and ended up performing black-box scenario tests.

Financial markets are full of noise and complexities that are rarely comprehensible by a human. Michael Lewis showed in his book The Big Short how it took a rare human, Michael Burry, to connect the dots in the financial markets and understand what was really going on with subprime credit derivatives. In Flash Boys, he showed how people got lost in the fine-grained, microsecond-level picture of the market mechanics.

Why is Explainable AI needed in enterprise at all?

Because organisations seek increased trust and transparency, and effective operational control. And, along with a raft of other benefits, there is the inescapable fact that neural networks are fallible.

Increased trust and transparency

Explainability of AI is fundamental for people to trust, manage and use AI models. It is important because, at least at its current stage, AI is a great tool to be leveraged by humans, rather than a replacement for a human being.

In regulated industries like finance, an explanation is often simply a demand from regulators in the best interest of customers and investors. Regulators are still scratching their heads over how to adapt to the new AI reality. One could argue, on the one hand, that regulators should be exploring black-box simulation, statistical testing and reinforcement learning techniques to validate that what models and machines are doing is in line with customers' and investors' interests and not dangerous for the markets. But on the other hand, regulators are right to challenge the industry to take a more responsible approach when using AI in its products and services, and to be mindful of AI ethics and current limitations. For example, among new regulations in this space, the European General Data Protection Regulation (GDPR), which took effect in May 2018, contains what has been labelled a "right to an explanation", stating that important decisions significantly affecting people cannot be based solely on a machine decision.

The Financial Stability Board (FSB), in its thorough review of AI/ML in financial services (http://www.fsb.org/wp-content/uploads/P011117.pdf), also cited the lack of AI interpretability as a challenge for supervision and for understanding the financial system's susceptibility to systemic shocks and changes in market conditions.

Deeper control and understanding

When running an organisation, it is important to have a strong understanding and control of how the business operates, how it is run and how customers are served. If an organisation relies on AI for any part of that, it is imperative to understand how that part works and why the technology makes the decisions, suggestions and predictions it does. This plays a big role in integrating AI into the enterprise, achieving effective collaboration between humans and machines, and providing a better customer experience. It is also linked to a growing discussion of AI ethics: while building AI/ML models, organisations need to make sure the models are not biased, do not create self-reinforcing loops, and do not discriminate against or treat unfairly any group of people.

An example of this could be a credit scoring model trained on a dataset that includes people's postcodes but does not explicitly include any information about race. Yet, with a postcode being a potential predictor of a subject's race, there is a possibility that the model could be susceptible to unintentional racial discrimination.
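One way to make this concrete: before deploying such a model, a team can check whether a candidate proxy feature carries information about the protected attribute, and whether outcomes differ across groups. The sketch below is a minimal illustration of that idea using Python and scikit-learn; the dataset and column names ("credit_applications.csv", "postcode", "race", "approved") are hypothetical and not taken from the article.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("credit_applications.csv")  # hypothetical dataset

# 1. Can the protected attribute be predicted from the candidate proxy alone?
#    Accuracy well above the majority-class baseline suggests a proxy effect.
X_proxy = pd.get_dummies(df[["postcode"]], columns=["postcode"])
proxy_score = cross_val_score(
    DecisionTreeClassifier(max_depth=5), X_proxy, df["race"], cv=5
).mean()
baseline = df["race"].value_counts(normalize=True).max()
print(f"proxy accuracy {proxy_score:.2f} vs baseline {baseline:.2f}")

# 2. Do model outcomes differ across groups? A large gap in approval rates is
#    a signal to investigate further, not proof of discrimination by itself.
print(df.groupby("race")["approved"].mean())
```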

Drive for further improvement

Explainability or interpretability of a model is likely to open up actionable ways to further improve it. The machine learning engineering team can better understand model biases and work to eliminate them in the product. It is important to understand that machines learn from vast datasets, and these datasets may not reflect the real world and may be biased in unexpected ways: for example, if the data already reflects certain organisational processes and decisions, or simply misses an important part of the picture.

The level of explainability depends on which particular model you choose to use. Certain machine learning models can be explained, whereas others, such as deep neural nets, are unexplainable at this stage. The general rule seems to be that there is an inverse correlation between explainability and accuracy: the more explainable, the less accurate. Added to this, the more explainable the model, the more effort is required from a machine learning engineer to design the model, choose features and transform data in order to achieve good accuracy. Indeed, to some extent, the more opaque and accurate the model, the more data it requires. So deep neural nets, sitting at the unexplainable end of the spectrum, require the largest amount of data and computing power to train, yet may offer higher accuracy and can often give a good result without much feature-engineering effort.
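To make the trade-off tangible, the sketch below (not from the article; synthetic data, scikit-learn) trains a shallow decision tree, whose rules can be printed and read, alongside a small neural net on the same data, so the accuracy gap and the difference in inspectability can be compared directly.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable end of the spectrum: a shallow tree whose rules can be read.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(export_text(tree))                      # human-readable decision rules
print("tree accuracy:", tree.score(X_test, y_test))

# Opaque end of the spectrum: a neural net that is often more accurate but
# offers no comparable explanation of its individual predictions.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print("neural net accuracy:", mlp.score(X_test, y_test))
```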

Are domain experts necessary for the process?

There is no doubt that over the past few years there has been a breakthrough in applying deep learning to various challenging problems, particularly in computer vision, speech and text analytics fields. Recent advances in deep learning were largely enabled by the increase of computing power (cloud/GPU) and data available for model training. Another exciting aspect of recent achievements of the deep learning models is that many of the breakthroughs required very little domain knowledge.

Take DeepMind's AlphaZero, an engine which, at the end of 2017, defeated world-champion programmes in Go, chess and shogi. Unlike historical approaches to chess computers, AlphaZero learned by self-play training, not using any input about human games – i.e. without human domain knowledge.

Numerai, a new type of hedge fund, runs a data science competition on stock market predictions using completely anonymised stock market datasets. Partly this is due to Numerai's confidentiality constraints, but it is also an effort to remove any bias in human perception. Data scientists build models to find patterns in the data without really knowing what the data is, and consequently without any possibility of using domain knowledge. Successful models are then tested on the real market and become part of the Numerai investment strategy.

Deep learning certainly holds exciting promise, but it is possibly too good to be true.

Indeed, there are many cases where deep learning initially provided good accuracy without much input from a domain expert or ML engineer. But most of the time, the initial model was significantly improved, in terms of both performance and accuracy, by the efforts of ML designers, ML engineers and citizen data scientists with domain expertise. Feature engineering and model architecture are where domain expertise is invaluable: for instance, decomposing the overall model into a few sub-models aimed at specific subtasks, choosing the optimal number and size of neural net layers for each sub-model, and selecting learning approaches.
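As a minimal illustration of the kind of feature engineering a domain expert might contribute, the sketch below derives credit-risk features that are obvious to an analyst but would otherwise have to be rediscovered from far more data. The raw columns ("income", "debt", "balance", "credit_limit", "payments_missed_12m", "defaulted") and the file name are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

raw = pd.read_csv("loan_book.csv")  # hypothetical dataset

# Domain-driven features: ratios and flags a credit analyst would recognise.
features = pd.DataFrame({
    "debt_to_income": raw["debt"] / raw["income"].clip(lower=1),
    "missed_any_payment": (raw["payments_missed_12m"] > 0).astype(int),
    "high_utilisation": (raw["balance"] / raw["credit_limit"].clip(lower=1)
                         > 0.8).astype(int),
})

model = GradientBoostingClassifier().fit(features, raw["defaulted"])
```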

For practical implementation of ML-based systems in the enterprise, it helps to have an idea of the boundaries of the current abilities of various ML models across different tasks.

In a recent episode of the O’Reilly Data Show (http://radar.oreilly.com/tag/oreilly-data-show-podcast) on enterprise AI, Kristian Hammond and Ben Lorica proposed that there are three general stages, or areas, of AI processing: sensing (gathering data), perception (recognition, classification) and reasoning (making a decision or suggestion). In the enterprise setting, sensing is often out of scope, as the data already exists. Perception is where deep learning is particularly powerful. Reasoning is an area where deep learning struggles, or is simply impractical to implement, requiring too much data and time to learn. Reasoning is also often closely related to the goals of the business process, and such goals are far better defined explicitly by a domain expert than inferred from data.

To summarise, a practical hybrid approach is usually optimal in an enterprise setting. In the hybrid approach, deep learning is used for low-level processing – recognition and classification – and then provides input to a higher-level reasoning model, such as a decision tree with business rules. In this way, one achieves good explainability of the overall system.

In practice, this may unfold in the following way: you start with a deep neural net that works for your needs, and then evolve a more interpretable ML model, such as a decision tree or RuleFit, as the backbone of your overall model, using the deep neural net's outputs as input for the interpretable model and aiming not to significantly decrease overall accuracy from one step to the next. Domain experts and data scientists work together to evolve the decision trees, or other explainable models, with clearly stated business rules and goals, as well as the overall ML model architecture, to gradually improve accuracy.
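The following is a minimal sketch of that workflow on synthetic data with scikit-learn (nothing here is the author's actual implementation): a neural net handles the low-level processing, and a shallow decision tree trained on its outputs acts as the readable, higher-level reasoning layer.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Low-level "perception": the neural net compresses raw inputs into scores.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
net.fit(X_train, y_train)
scores_train = net.predict_proba(X_train)   # the net's outputs become features
scores_test = net.predict_proba(X_test)

# Higher-level "reasoning": a shallow tree over the net's outputs (plus any
# business features) that a domain expert can read and challenge.
reasoner = DecisionTreeClassifier(max_depth=3).fit(scores_train, y_train)
print(export_text(reasoner, feature_names=["p_class_0", "p_class_1"]))
print("end-to-end accuracy:", reasoner.score(scores_test, y_test))
```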
