By Mark Somers, Technical Director at 4most Europe Ltd (www.4-most.co.uk)
Recent economic troubles in most major economies were made worse by the fact that major banks had not fully anticipated the scale of losses that were possible; in many cases they were therefore under-capitalised and under-prepared. This failure to understand the risks has many dimensions, including inappropriate market incentives for banks, over-accommodating regulators, politicians with electoral incentives and geo-political transitions changing the system in unexpected ways. Whatever the underlying reasons, the key question for banks and regulators is how to design a stress-testing regime that informs all stakeholders more effectively next time. Large financial institutions typically have no systematic approach to estimating what would happen in possible but extreme scenarios (e.g. a UK exit from the EU, a virus pandemic, all-out war in Ukraine) or to understanding the “boundary surface” of extreme scenarios beyond which the bank could fail.
Historically the main emphasis of regulators, and hence many banks, has been on the “forward” stress-testing problem – given a severe situation, typically a recession, what would be the impact on the bank? This can provide useful insights but is fundamentally flawed as an approach to comprehensive risk management:
- Any single scenario has a vanishingly small probability of actually happening and thus provides limited help in mitigating the multitude of other potential problem scenarios that haven’t been considered
- Surprising events and market dislocations often happen together – for example, the falling oil price in the second half of 2014 may, in some complex way, be related to the Swiss decision in January 2015 to de-peg their currency from the Euro. Was this inevitable or a predictable scenario?
- The scenarios that put institutions under pressure are typically driven by latent weaknesses in the operations of a business that are then triggered by outside events. Starting with the outside events makes it less likely that analysts will stumble onto the outcome with the most damaging effect.
A helpful mental exercise for really understanding the stress-testing problem is for bank decision makers to consider a deceptively simple hypothetical question – what made my bank go bust today? During most periods the institution may be in fine health and making profits – but it is only by probing how, underneath those profits, the business could actually be building a stockpile of unseen liabilities that decision makers can take effective mitigating action in time to avoid the losses crystallising. This viewpoint is typically called “reverse” stress testing and is increasingly a focus for regulators and the more sophisticated institutions.
As this is a relatively young field, no single approach has received general acceptance. The focus is often on qualitative discussion with limited quantitative rigour and no pre-defined process, and hence low confidence in the outcome. Despite this, some principles can assist in identifying the important factors that need to be modelled:
- Map concentrations of current profitability – apparently highly profitable parts of a business will be encouraged to grow rapidly. These could point to a stock of latent future liabilities if the valuation model is wrong.
- Identify major trends and how the business has changed to meet them – the trends could reverse or accelerate; are asset valuations based on an implicit assumption that a trend will continue?
- Assess the likelihood of combinations of contingent triggers – across many disciplines dealing with complex systems (engineering, finance, air accident investigation) it is known that it is typically the knock-on effects of a series of related events, compounding together, that cause really serious problems.
Beyond this guidance there is a need to turn qualitative arguments into a formal mathematical model. Evaluations of a scenario typically start with a narrative; narratives are not strictly quantitative statements, but they encompass the key causal mechanisms at play. These narratives can then be turned into what mathematicians call a “Probabilistic Graphical Model” or PGM – a set of nodes and arrows [see box]. PGMs allow the decomposition of a complex knowledge problem into smaller pieces which are made consistent through probability theory. The technique has already found many applications in medicine, engineering and computer science – domains notable for the complexity of their problems. Only recently has it found its way into risk management, and it stands in stark contrast to the extant modelling techniques whose drawbacks became evident during the great financial crisis.
Probabilistic Graphical Models have a natural application in risk management as they allow the design of forward-looking scenarios based on both prior information and historical data. Crucially, they are directly interpretable by senior executives without specialist knowledge, ensuring that those in command have a clear view of the battlefield. PGMs require the analyst to lay out the assumptions about how one event is linked to its subsequent dependencies in a way that draws out inconsistencies and challenges each logical step. To make the model a predictive tool it is then necessary to populate each node with conditional probabilities which, conditioned on its parents, define the likelihood of each potential outcome. Populating these tables can draw on a number of sources of information: (expert) opinion, historical data or market-implied relationships. Once they have been populated, it is a relatively simple mathematical operation to interrogate the PGM to find out which scenarios are most likely to drive an extreme loss event, and to produce a map of severity versus likelihood that characterises the risk properties of the network.
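To make this concrete, the sketch below interrogates a hypothetical three-node network – a recession raising the chances of a funding squeeze and a default surge, which together drive an extreme loss – by brute-force enumeration. The structure and every probability are illustrative assumptions, not a calibrated model:

```python
from itertools import product

# Hypothetical network: recession R -> funding squeeze F, default surge D;
# F and D together drive extreme loss L. All numbers are illustrative only.
p_R = {1: 0.10, 0: 0.90}                                  # prior P(R)
p_F = {1: {1: 0.60, 0: 0.40}, 0: {1: 0.05, 0: 0.95}}      # P(F | R)
p_D = {1: {1: 0.50, 0: 0.50}, 0: {1: 0.02, 0: 0.98}}      # P(D | R)
p_L = {(1, 1): 0.80, (1, 0): 0.20, (0, 1): 0.25, (0, 0): 0.01}  # P(L=1 | F, D)

def joint(r, f, d, l):
    """Joint probability of one full configuration of the network."""
    pl = p_L[(f, d)] if l == 1 else 1.0 - p_L[(f, d)]
    return p_R[r] * p_F[r][f] * p_D[r][d] * pl

# Condition on the extreme loss having happened (L=1) and ask which
# configuration of drivers most likely produced it -- the "reverse" question.
post = {(r, f, d): joint(r, f, d, 1) for r, f, d in product((0, 1), repeat=3)}
z = sum(post.values())
post = {cfg: p / z for cfg, p in post.items()}

worst = max(post, key=post.get)
p_recession = sum(p for (r, f, d), p in post.items() if r == 1)
print("most likely driver configuration (R, F, D):", worst)
print("P(recession | extreme loss) = %.3f" % p_recession)
```

Enumeration is only feasible for toy networks; real implementations use the standard PGM inference algorithms, but the conditioning logic is the same.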
In practice it is often extremely challenging to know the correct structure to use or the best settings for the conditional probability tables. This is often cited as a weakness of quantitative approaches (and it is not a weakness of PGMs alone) – in reality, however, it is simply an expression of how little we know. A better approach is to understand the sensitivity of the outcome to different structural and parameter assumptions, which can be done formally via Monte-Carlo simulation.
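As a sketch of what such a sensitivity analysis might look like, the hypothetical example below jitters each entry of an illustrative conditional probability table by up to ±25% and records the resulting spread of the implied loss probability. All probabilities and the jitter range are assumptions chosen for illustration:

```python
import random
import statistics

random.seed(7)

# Baseline P(L=1 | F, D) for two triggers F and D -- illustrative numbers.
base_cpt = {(1, 1): 0.80, (1, 0): 0.20, (0, 1): 0.25, (0, 0): 0.01}
p_F, p_D = 0.10, 0.06   # assumed marginal probabilities of the two triggers

def loss_prob(cpt):
    """Marginal P(L=1), treating the triggers F and D as independent."""
    total = 0.0
    for f in (0, 1):
        for d in (0, 1):
            w = (p_F if f else 1 - p_F) * (p_D if d else 1 - p_D)
            total += w * cpt[(f, d)]
    return total

# Monte-Carlo sweep: perturb each table entry by +/-25% (clipped to [0, 1])
# and record the distribution of the implied loss probability.
draws = []
for _ in range(10_000):
    jittered = {k: min(1.0, max(0.0, v * random.uniform(0.75, 1.25)))
                for k, v in base_cpt.items()}
    draws.append(loss_prob(jittered))

cuts = statistics.quantiles(draws, n=20)   # 5% steps
print("baseline P(L=1)     = %.4f" % loss_prob(base_cpt))
print("5th-95th percentile = %.4f - %.4f" % (cuts[0], cuts[-1]))
```

The same loop can be extended to perturb the network structure itself (adding or dropping arrows) rather than just the table entries.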
In summary, stress testing is developing in new directions –
- from focusing on external shocks to also understanding the latent internal risks that those shocks may crystallise
- from a few obvious or historical adverse scenarios to a much larger number of candidate scenarios
- from using asset correlations to trying to understand and model how causal effects will propagate in times of stress.
Box: Probabilistic Graphical Models
Each node in a PGM represents a measurable quantity and each arrow represents the direction of cause and effect. These diagrams are easily interpreted, but of more mathematical interest are their properties for transmitting probabilistic information. Specifically, if the state of a node is known then the probabilities of any nodes connected to it will be updated (in either direction), and this updated information can then be propagated across the rest of the network. A system under sudden change is likely to see only a handful of key causal relationships break down; the dependencies of all the other causal relationships are likely to stay the same. To work out the consequences of an unlikely event, we enter that event into the network and disconnect the links to its parents (assuming the driver is external to the observed factors). This means that all our empirical experience of how the world works is not discarded – only the parts directly connected to the postulated shock need to be re-evaluated.
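The difference between merely observing an event and imposing one from outside by cutting its parent links can be shown on a hypothetical two-node fragment (recession R causing a funding squeeze F; all probabilities illustrative):

```python
# Two-node fragment: recession R -> funding squeeze F, illustrative numbers.
p_R = 0.10                          # prior P(R=1)
p_F_given_R = {1: 0.60, 0: 0.05}    # P(F=1 | R)

# Observing F=1 propagates information backwards via Bayes' rule,
# raising our belief that a recession is under way:
p_F = p_R * p_F_given_R[1] + (1 - p_R) * p_F_given_R[0]
p_R_obs = p_R * p_F_given_R[1] / p_F
print("P(R=1 | observe F=1) = %.3f" % p_R_obs)

# Forcing F=1 by an external shock and disconnecting its parent link
# leaves the belief about R at its prior -- the rest of the network
# is then re-evaluated forwards from the shocked node:
p_R_do = p_R
print("P(R=1 | impose F=1)  = %.3f" % p_R_do)
```

The first calculation answers a diagnostic question; the second answers the stress-testing question, where the shock is postulated rather than observed.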
Acknowledgement: I wish to express my thanks to Les Cantlay for providing useful challenge on the subject area.
About 4most Europe (www.4-most.co.uk)
4most Europe Ltd is a specialist credit risk analytics consultancy with offices in London and Edinburgh. The company provides a range of products and services across credit risk, fraud and pricing, working with blue chip clients predominantly in the retail banking and mobile sectors. The company offers a flexible, competitive model, either working with clients to manage regulatory change or delivering and implementing business critical solutions.
About the author
Mark Somers is Technical Director at specialist analytics consultancy 4most Europe, based in London. The company provides a range of products and services across credit risk, fraud and marketing, working with blue chip clients predominantly in the banking, retail and mobile sectors.