Losing the risk management war
It’s no secret that past risk management practices and regulatory frameworks failed with respect to the global financial crisis. There were a number of reasons behind this, ranging from an overreliance on quantitative analysis to poor risk governance and frameworks, not to mention a lack of understanding around concentrated risk build-up in areas such as leverage, convexity effects and capital. Additionally, we are still seeing the outcomes of particular mis-selling events in terms of conduct risk, and the lack of focus on interdependencies was also a crucial factor. The list of reasons is seemingly endless, writes Richard Bennett.
The reaction to this series of events was the ensuing regulatory tsunami. Across Europe, for example, we have witnessed the introduction of the Capital Requirements Directive IV (CRD IV), while new regulators, including the European Banking Authority (EBA), the European Securities and Markets Authority and the European Insurance and Occupational Pensions Authority, were established. This brought new regulation, new models within the regulatory framework and a re-examination of existing internal models and how they are perceived.
Take, for example, the exercise in which the Basel Committee on Banking Supervision looked at reducing excessive variability in banks’ capital ratios. Part of this process delved into a set of normalised Value at Risk (VaR) results run by a number of larger banks. These banks were given the same portfolios, market data and time period and ran them through their internal models; a scatter plot of the results showed considerable dispersion. If you imagine for a moment that you are a politician having that dispersion explained to you, you may very well have a few concerns. Of course, no one would expect the results all to be in line, because different markets have different nuances: risk appetites naturally differ, which affects how internal models are put together, for example. Nevertheless, the dispersion in some of these particular portfolios is rather larger than politicians would have liked to see. This has certainly had an impact on both risk and regulation when it comes to modelling.
Some believe that the political imperative has overridden the economic imperative and is driving a lot of what regulators currently think. So maybe we have the same politics playing out here, where the regulators think the dispersion of normalised VaR results is too high. I note at this point that, as part of CRD IV, firms that have internal models for both the trading and the banking book are required to submit reports on the results to the regulator.
The idea here is to try to reduce this dispersion, or, if it continues, at least to make the regulators aware of it. I think this particular dispersion graphic is driving a lot of what the regulators are thinking at present. Of course, it is not just what the regulator thinks; we have had a few issues over the years that have not been regulatory-driven. We have to think about how we can create better models, and we also have to look at market movements after a loss event has occurred. Too often, what the models predicted was not within the boundaries of what happened in the real world, which is rather worrying in terms of how we measure and manage risk.
How did we get here?
The 1995 Barings scandal should be considered a major landmark. It was quite interesting how a series of operational risk failures, with Nick Leeson running both the front and the back office, led us in short order to the Market Risk Amendment from the Bank for International Settlements.
Another significant period was when investment banks became concerned about various legal cases relating to derivatives they had sold to their clients. The result, for JP Morgan in particular, was the creation of a common language, known as RiskMetrics: the variance/covariance model the firm released in conjunction with Reuters. For firms trading normal instruments with not much optionality in normal markets, it was a good start. However, people quite quickly began to realise that some of the model’s assumptions, and some of the things it did not do, meant its coverage needed to be expanded. That moved us on to other issues, because at the time the hardware, although good, could not cope with historical simulation.
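To make the variance/covariance idea concrete, here is a minimal sketch of a one-day parametric VaR for a toy two-asset book. The positions, volatilities and correlation are invented for illustration and are not RiskMetrics parameters; the linear, normal-distribution assumption is exactly the one that, as noted above, soon proved too limiting.

```python
import numpy as np

# Illustrative two-asset book: position values (assumed, not RiskMetrics data)
positions = np.array([1_000_000.0, 500_000.0])

# Assumed daily return volatilities and correlation for the two assets
vols = np.array([0.012, 0.018])          # 1.2% and 1.8% daily standard deviation
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])

# Covariance matrix of daily returns
cov = np.outer(vols, vols) * corr

# Standard deviation of daily portfolio P&L under the linear/normal assumption
portfolio_sd = np.sqrt(positions @ cov @ positions)

# 95% one-day VaR: 1.645 standard deviations, the confidence level used in the
# original RiskMetrics documentation (a 99% figure would use 2.326)
var_95 = 1.645 * portfolio_sd
print(f"95% one-day variance/covariance VaR: {var_95:,.0f}")
```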
As soon as hardware, operating systems and databases improved and firms were able to get to the point of historical simulation, the questions raised were: “how many years back do we need to model?” and “aren’t we measuring risk by looking through the rear-view mirror?” As a result there was a big debate about whether the industry should use Monte Carlo simulation for market risk.
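Historical simulation replaces the distributional assumption with the observed history itself, which is exactly where the “how many years back?” question bites. The sketch below uses synthetic, fat-tailed returns purely as a stand-in for that history; the positions and the 500-day window are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for roughly two years of daily returns on two risk
# factors; in practice these would be observed market moves, and the length
# of the window is the modelling choice debated above.
hist_returns = rng.standard_t(df=4, size=(500, 2)) * 0.01

positions = np.array([1_000_000.0, 500_000.0])   # illustrative position values

# Revalue the book under each historical scenario (full revaluation is
# approximated here by a linear P&L in the factor returns)
pnl = hist_returns @ positions

# 99% one-day historical-simulation VaR: the 1st-percentile loss
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day historical-simulation VaR: {var_99:,.0f}")
```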
The Asian crisis then occurred, breaking many of the associated assumptions and leading to a stress testing framework. From a business point of view, this looked at events that could potentially occur in markets and that would be out of the ordinary. These stress tests were executed on a particular day, making sure that firms had enough in reserves to cover themselves.
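At its simplest, a stress test of that kind revalues the book under a handful of severe but plausible factor moves and checks the result against reserves. The sketch below is a minimal, linear-sensitivity version of that idea; the sensitivities, scenario shocks and reserve figure are all invented for illustration.

```python
import numpy as np

# Illustrative sensitivities of the book to three risk factors (assumed):
# P&L change per 1.00 (i.e. 100%) move in each factor
sensitivities = np.array([2_000_000.0, -750_000.0, 1_200_000.0])

# A hypothetical "out of the ordinary" scenario in the spirit of the text:
# equities -30%, local FX -20%, credit spreads +150bp expressed as a factor move
scenario = np.array([-0.30, -0.20, 0.015])

stressed_pnl = sensitivities @ scenario   # linear revaluation under the scenario
reserves = 500_000.0                      # assumed reserve buffer

print(f"Scenario P&L: {stressed_pnl:,.0f}")
print("Reserves cover the stress" if reserves + stressed_pnl >= 0
      else "Reserves insufficient for this scenario")
```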
Having addressed market risk issues, firms then turned to internal credit models. This was simulation through time, typically using a Monte Carlo model. It involved taking an instrument, simulating its value through time, then looking at credit mitigation, both netting agreements and the collateral held under credit support annexes, and observing how they reduced exposure over the life of the trades. This was effectively a process of running large models on lots of hardware in order to come up with an internal model that was strong enough for the regulator.
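The sketch below illustrates that pattern: simulate mark-to-market paths, net the trades under a single agreement, subtract collateral above an assumed CSA threshold and average the remaining positive exposure into an expected exposure profile. The random-walk dynamics, the threshold and the step count are illustrative assumptions, not a production exposure engine.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 10_000, 24            # monthly steps over two years (illustrative)

# Simulate mark-to-market paths of two trades with one counterparty as
# driftless random walks (a crude stand-in for proper pricing models)
shocks = rng.normal(0.0, 50_000.0, size=(n_paths, n_steps, 2))
mtm = np.cumsum(shocks, axis=1)          # trade-level MtM through time

# Netting: exposure arises from the positive part of the *netted* MtM
netted = mtm.sum(axis=2)

# Collateral: assume a CSA with a 100k threshold; collateral held equals the
# netted MtM above the threshold (margin period of risk ignored for brevity)
threshold = 100_000.0
collateral = np.maximum(netted - threshold, 0.0)

exposure = np.maximum(netted - collateral, 0.0)

# Expected exposure profile through time: a building block of the internal model
ee_profile = exposure.mean(axis=0)
print("Expected exposure at each monthly step:")
print(np.round(ee_profile, 0))
```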
In parallel, firms realised that the normal distribution was not good enough. Firms went through, and are in fact still going through, expected shortfall, extreme value theory and various other distributional approaches. But many of these issues came down to pricing assumptions and what was in the models themselves. In recent times, for instance, the Gaussian copula model was used extensively for asset-backed securities and collateralised debt obligations. This model worked on the assumption that correlations between different assets were constant over time and could be combined into one number. That was perhaps not a particularly realistic assumption, but once again practitioners were actually using it to make money, and we all know how that turned out.
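Expected shortfall is the tail-aware measure referred to here (and again later under FRTB): rather than reading off a single quantile, it averages the losses beyond that quantile. The sketch below contrasts the two measures on a synthetic, fat-tailed P&L distribution; the Student-t choice is an assumption for illustration, and the 97.5% level is used because it is the confidence level adopted for expected shortfall under FRTB.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fat-tailed daily P&L distribution (Student-t) as a stand-in for a book whose
# losses are not well described by the normal distribution
pnl = rng.standard_t(df=3, size=100_000) * 100_000.0

alpha = 0.975                            # confidence level (as in FRTB)
var = -np.percentile(pnl, (1 - alpha) * 100)

# Expected shortfall: the average loss *given* that the VaR level is breached
tail_losses = -pnl[-pnl >= var]
es = tail_losses.mean()

print(f"97.5% VaR: {var:,.0f}")
print(f"97.5% expected shortfall: {es:,.0f}   (ES > VaR: it averages the tail)")
```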
More recently we have seen risk-neutral simulations, and we are now moving to real-world models of how things work. Risk models, pricing models and assumptions have all improved over time, and this needs to continue, irrespective of whether that is through firms’ internal models or regulators outlining what a standardised model should look like.
Forward-looking analytics
Looking at regulatory trends, forward-looking analytics has emerged as important. Likewise, the EBA’s supervisory review process, BCBS 239 on risk data aggregation and reporting, and the EBA’s funding plan requirements all call for some form of forward planning in terms of simulation and stress testing.
Looking at the capital and liquidity planning exercise, it is important that institutions line up their strategy with their risk appetite, but it is equally important to have a forward-looking view, including stress scenarios connected to the overall strategy. They need to ask: “is this strategy going to play out over the next two to three years, and if so, what is the interconnection between capital and liquidity management?” This is a large area of risk measurement and risk management in terms of how forward-looking simulation is conducted, so it is an important point to bear in mind.
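As a very rough illustration of what a forward-looking view means in practice, the sketch below projects a CET1 ratio over three years under a base and a stress scenario. Every figure in it, starting capital, risk-weighted assets, earnings, losses and RWA growth, is invented; a real capital and liquidity plan would also project funding, liquidity buffers and their interconnection.

```python
# A minimal sketch of a three-year forward-looking capital plan, with all
# balance-sheet figures and scenario assumptions invented for illustration.
capital = 12.0          # CET1 capital, in billions (assumed)
rwa = 100.0             # risk-weighted assets, in billions (assumed)

scenarios = {
    "base":   {"earnings": 1.5, "losses": 0.5, "rwa_growth": 0.03},
    "stress": {"earnings": 0.3, "losses": 2.0, "rwa_growth": 0.08},
}

for name, s in scenarios.items():
    cap, assets = capital, rwa
    print(f"{name} scenario:")
    for year in range(1, 4):
        cap += s["earnings"] - s["losses"]     # retained earnings net of credit losses
        assets *= 1 + s["rwa_growth"]          # RWA inflation under the scenario
        print(f"  year {year}: CET1 ratio {100 * cap / assets:.1f}%")
```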
Over in the finance arena, it is not surprising that, once again, a great deal of risk measurement and management is going on within the expected credit loss calculations and the different stages of the IFRS 9 impairment models.
Intraday liquidity
The intraday side is being driven by BCBS 239, along with the Prudential Regulation Authority in the UK outlining its rules on intraday liquidity. A nostro view of payments across correspondent banking and payment relationships over the course of the day lets us all see what is going on, and that has been available to us for some time. It will not be too long before firms actually run scenario and stress analysis on these particular points.
So, we could even look at behavioural risk and see how behavioural scenarios affect whether payments come in on a timely basis. We could also simulate counterparty and credit risk, market risk and events such as rate resets. We may even get to the point where we look at intraday liquidity at risk, draw a distribution over where the liquidity is and start to measure and manage that as well.
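A minimal sketch of what “intraday liquidity at risk” might look like: simulate behavioural delays to expected inflows, track the cumulative net position through the day in each scenario and take a high percentile of the worst trough. The payment set, the delay model and the 99% level are all assumptions for illustration, not a prescribed methodology.

```python
import numpy as np

rng = np.random.default_rng(3)
n_scenarios, n_payments = 10_000, 50

# Illustrative set of expected payments across the day: positive = inflows,
# negative = outflows, amounts in millions (assumed)
amounts = rng.normal(0.0, 5.0, size=n_payments)

# Behavioural uncertainty: in each scenario, each inflow may arrive later in
# the day (a crude stand-in for counterparty payment behaviour)
base_times = rng.uniform(0.0, 1.0, size=n_payments)          # fraction of the day
delays = rng.exponential(0.1, size=(n_scenarios, n_payments)) * (amounts > 0)
times = np.clip(base_times + delays, 0.0, 1.0)

# For each scenario, order payments by arrival time and track the cumulative
# net position; the worst trough is the intraday liquidity usage
order = np.argsort(times, axis=1)
cum = np.cumsum(amounts[order], axis=1)
worst_trough = cum.min(axis=1)

# "Intraday liquidity at risk": the 99th-percentile worst trough across scenarios
ilar_99 = -np.percentile(worst_trough, 1)
print(f"99% intraday liquidity at risk: {ilar_99:.1f}m")
```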
Stress testing
Again, the European Central Bank has its comprehensive assessment stress test manual, and the BCBS has its own stress testing guidance. In July the EBA released another paper setting out who will be running the 2016 stress tests. These will be run on a static balance sheet rather than a dynamic one, which many will still criticise as unrealistic; however, that remains the case for this round of tests.
Stress testing is a core component of strategic planning and forecasting, so firms need to look at forward-looking stress tests and how those actually work out. With capital planning there are enhanced stress test scenarios to think about, as well as how to integrate risk models across the regulatory and economic perspectives.
With liquidity planning we need to look at it from the funding projection perspective. Stress testing can also inform risk appetite and remains a core component of all these elements. From a regulatory trend point of view, the risk side is clearly embedded within stress testing.
Internal model and standardised approaches
In the past six months regulators have come out with standardised approaches for credit risk and counterparty credit risk, while still allowing internal models to exist. Operational risk has just received new approaches, and there is more to be done on securitisation.
In the world of market risk and interest rate risk in the banking book, we can see these regulatory trends starting to be turned into papers and regulations that will directly affect everyone. The internal model approach under the Fundamental Review of the Trading Book (FRTB), for example, moves to expected shortfall as the measure of the distribution, while the standardised model will potentially create a lot of changes.
If the regulator focuses on capital calculations based on the standardised approach, which is what is being talked about at the moment, then that is going to make a huge difference to how market risk is measured by firms with a trading book. I would also expect firms with banking books to be drawn into FRTB, given the way the regulations are currently being proposed. A new quantitative impact study for FRTB is due out at the end of the year, with the rules becoming clearer, and I suspect there will be a lot of work involved. Finally, for interest rate risk in the banking book there are two approaches, an internal model-type approach and a standardised approach, and consultation papers on both are out at the moment.
Overall, regulators continue to talk about simplicity, risk sensitivity and comparability, which pose significant challenges.
The comparability discussion goes back to the dispersion of normalised VaR results, but the regulator is nevertheless definitely looking for ways to compare financial institutions with each other, despite the fact that they will have different risk appetites and different views of the marketplace. This is ongoing work which the regulator is going to keep releasing to us. Maybe it is not one tsunami wave; maybe it is many waves of different approaches.
To answer the question posed at the start – in some senses regulation is a help toward risk measurement and risk management. We can all agree that the landscape has fundamentally changed and regulation has helped to drive those changes. But, are there unintended consequences from this? And are the regulators using a big stick and a small carrot at the moment? My conclusion is that this is like a pendulum. It is now swinging away from internal models and quantitative risk towards a more standardised approach in terms of models and measurement. But at some point the pendulum will swing back towards the internal model approach.
The point here is that, whether it is an internal model or a regulatory model, what we should be trying to achieve overall is to improve the models themselves. We need a better understanding of the assumptions, so that we can measure and manage our business in a better way, one that informs how risk appetite is actually engaged. Only by adopting such an approach will the strategy of a financial institution be clearly defined.
About the author
Richard Bennett is London-based vice president and head of regulatory reporting, EMEA, at risk and regulatory technology firm Wolters Kluwer Financial Services. Prior to this he held senior management positions at various financial technology companies, including Razor Risk Technologies, CicadaRisk and Algorithmics.