Market Turmoil from Subprime to Jérôme Kerviel: Are models letting the industry down?
September 10, 2009

Leonard Matz is International Director of Liquidity and Interest Rate Risk Consulting for SunGard BancWare. He is an author, consultant and bank trainer specialising in risk management and ALM for financial institutions. Previously, he spent five years as a bank examiner and 15 years in various bank management positions. Mr Matz is the author or co-author of numerous risk management and investment books, including: Liquidity Risk Measurement and Management: A Practitioner's Guide to Global Best Practice, Liquidity Risk Management, Risk Management for Banks, Interest Rate Risk Management and Self-Paced Asset/Liability Training. He is a frequent speaker at industry conferences and training programmes around the world.

INTRODUCTION

Are models letting the industry down? Certainly, the unexpectedly large size of recent losses might lead one to think so. Not to mention the fact that the breadth of credit, ops risk and liquidity events has caught so many risk managers by surprise.

No one should have been surprised by the meltdown of US subprime mortgage loans. Few observers in 2005 and 2006 failed to note either the weak underwriting practices or the lack of connection between incentives for originators and those for the ultimate investors.

Few informed observers were surprised. On the contrary, the risks in subprime lending and structured investment vehicles were commonly discussed by bankers and bank regulators in 2005 and 2006. In March 2008, two articles made interesting observations about how known and how knowable these risks were. The Economist described how managers had been trapped by earnings expectations:

‘Indeed, their shareholders would punish them if they sat out the next round — as Chuck Prince let slip only weeks before the crisis struck, when he said that Citigroup, the bank he then headed, was "still dancing". Mr. Prince has been ridiculed for his lack of foresight. In fact, he was guilty of blurting out finance's embarrassing secret: that he was trapped in a dance he could not quit. As, in fact, was everyone else.'1
Echoing this, Michael Lewis also described how such earnings pressures were keeping senior executives from questioning the geese that were laying their golden eggs.2

The surprise, a surprise that truly merits the description ‘shocking', is that so many smart risk managers lost so much in the aftermath.

The subsequent spillover to broader market problems may have been less widely expected but it was no less predictable. Financial market froth, under-pricing of credit risk and low institutional liquidity were clear to all.

But how about the failure of the New York-New Jersey Port Authority auction for high-quality, short-term paper? Or the largest trading loss in modern history — the Jérôme Kerviel fraud at Société Générale?

THE PROBLEMS ARE USER ERRORS NOT MODEL ERRORS

But is any of this really a failure of the models? Surely not. There is nothing wrong with the models. Instead, something is wrong with the ways in which they are being used. Three types of user errors need to be addressed.

When all you have is a hammer...

One truism applies to all analytical tools: when all you have is a hammer, every problem looks like a nail. To be blunt, what were people thinking when they applied historical value at risk (VaR) or correlation VaR to structured products that were too new to have traded in non-normal markets? There was a lack of both the volatility history and sufficient information about volatility correlations.

Such applications of VaR were nothing short of foolish. Yet even now there are risk managers defending themselves, saying that, ‘Just because VaR is not the perfect answer doesn't mean we shouldn't use it. How else can we calculate probabilities for loss amounts?' How else indeed?
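The data-requirement problem can be made concrete with a minimal, purely illustrative sketch of historical VaR (the return series below are synthetic, not drawn from any real product): with only a short, calm history, the 99 per cent 'tail' rests on a single benign observation.

```python
import random

def historical_var(returns, confidence=0.99):
    """One-day historical VaR: the loss at the given confidence level,
    read directly from the empirical distribution of past returns."""
    if not returns:
        raise ValueError("no return history")
    ordered = sorted(returns)                     # worst outcomes first
    idx = int((1.0 - confidence) * len(ordered))  # index into the loss tail
    return -ordered[idx]                          # report the loss as a positive number

# A seasoned product: roughly four years of daily returns, calm and rough.
random.seed(1)
long_history = [random.gauss(0.0, 0.01) for _ in range(1000)]

# A new structured product: a few calm months only.
short_history = [random.gauss(0.0, 0.002) for _ in range(60)]

print(historical_var(long_history))   # a tail estimate resting on many observations
print(historical_var(short_history))  # a '99% tail' resting on one calm observation
```

The second estimate is not a tail estimate at all: with 60 observations, the 99th percentile is simply the single worst day the product happens to have seen, which for a new instrument in a benign market tells one almost nothing.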

Of shoes — and ships — and sealing-wax — of skewed distributions and fat tails

Wise risk managers have long realised that ops risk, credit risk and liquidity risk have non-normal distributions. They are highly skewed and have very fat tails. These challenges did not go unmet. Prudent risk managers always stress-tested their VaR measure, and careful risk managers always applied analytical tools to the fat tails. Neither proved sufficient.

For stress-tests, comfort was sought in ‘scenarios' such as a quadrupling of volatility. Quadrupling volatility is an excellent sensitivity analysis but it is hardly a stress scenario. One might argue the case for more extreme tests. For example, the change in energy prices after the collapse of Amaranth, a hedge fund, was a nine-sigma event. ‘If conventional models are correct, such an event [the 2007 jump in the Vix] should not have happened in the history of the known universe … Perhaps modelers do not know the universe as well as they would like to think'.3
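Just how implausible a nine-sigma event is under the normal assumption can be checked directly. The short sketch below uses only the standard library; the 252-trading-day year used for the waiting-time conversion is a conventional assumption, not a figure from the text:

```python
import math

def normal_tail_prob(sigmas):
    """P(Z > sigmas) for a standard normal variable,
    computed via the complementary error function."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2.0))

p = normal_tail_prob(9.0)
# Expected waiting time, in years of ~252 trading days, for one such move.
years = 1.0 / (p * 252)

print(f"P(daily move > 9 sigma) = {p:.3e}")
print(f"expected roughly once every {years:.2e} years")
# The waiting time dwarfs the ~1.4e10-year age of the universe,
# which is exactly the point of the quotation above.
```

If daily moves really were normal, a nine-sigma day would be expected far less than once in the life of the universe; the fact that such days occur is direct evidence against the distributional assumption, not against the arithmetic.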

Again, these are not unforeseen problems. On the contrary, they are well understood. Advanced techniques such as power laws and extreme value theory can be and are used to tease risk information from fat tails.
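One such technique can be sketched briefly. The Hill estimator below recovers the tail index of a heavy-tailed loss distribution from its largest observations; the synthetic Pareto data, the seed and the choice of k are all illustrative assumptions:

```python
import math
import random

def hill_tail_index(losses, k):
    """Hill estimator of the tail index alpha from the k largest losses.
    Smaller alpha means a fatter tail; values around 2-4 are typical
    of heavy-tailed financial loss data."""
    top = sorted(losses, reverse=True)[: k + 1]
    threshold = top[k]
    inv_alpha = sum(math.log(x / threshold) for x in top[:k]) / k
    return 1.0 / inv_alpha

# Synthetic Pareto(alpha=3) losses via inverse-CDF sampling:
# if U ~ Uniform(0,1), then U**(-1/3) has P(X > x) = x**-3 for x >= 1.
random.seed(7)
losses = [random.random() ** (-1.0 / 3.0) for _ in range(20000)]

print(hill_tail_index(losses, 500))  # lands near the true index of 3
```

The estimator works exactly as advertised here because the tail really does contain 500 genuine draws from the distribution of interest — which is precisely the condition, as argued below, that ops risk and liquidity risk data fail to meet.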

The problem with applying tools such as extreme value theory to ops risk or liquidity risk is that in almost all banks, almost all the time, there is no distribution of outcomes from risk events, even including the likes of Nick Leeson (Barings Bank) or Jérôme Kerviel (Société Générale). There is nothing wrong with extreme value theory. But there is clearly something wrong when it is used to look at distributions of ops risk or liquidity risk events. This is like using a magnifying glass and tweezers to seek clues in the upholstery when the evidence is in the carpet. In other words, the most precise analysis of the tail of a distribution is not informative if the tail does not include any of the pertinent loss events. Scaling up the observed events is not helpful. These risks are not a matter of scaling. One liquidity expert summed up this issue very clearly when he noted that:

‘The Question is not: "What Risk will we get if we push out the quantiles?" The answer to that question is only a matter of scaling and is therefore meaningless! Instead, the question is: "Is there a structural change that the bank should model?"'4
For ops risk and liquidity risk, at a minimum, risk managers need to put away their advanced maths and ask whether there are structural changes that should be considered.

As the philosopher David Hume pointed out more than 200 years ago: ‘No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion'.5 Events like the trading losses perpetrated by Nick Leeson and Jérôme Kerviel and some of the bank runs seen in 2007 and 2008 are black swans. The 2007 publication of Nassim Nicholas Taleb's book, The Black Swan: The Impact of the Highly Improbable, could not have been more timely. No matter how precisely one evaluates improbable events in the tail of any distribution, one gains no insight into black swans because they are not in the tail.

Correlations converge

How do severe credit problems in a relatively small portion of the US financial markets lead to the failure of two small German banks? Forget about whether the blame lies with the loan originators, the rating agencies, the structured product packagers or the buyers. The link is investments in structured products based on subprime loans with more credit risk than the buyers could absorb. Seems clear. OK, then, how do subprime residential mortgage loan losses lead to the failure of the Northern Rock bank? Northern Rock was not brought down by credit risk. For Northern Rock, the link is from subprime loan problems to general market concerns about credit risk to systemic liquidity stress in the banking system to a run on the bank. What about the failure to auction the good-quality, short-term debt of the New York-New Jersey Port Authority? For the Port Authority, the link runs through municipal credit insurance firms that chose to expand into the insurance of structured mortgage product.

The experiences of 2007–08 reveal two key truths. First, the correlations observed in benign markets are far smaller than those observed in stressed markets. Secondly, in stressed markets, seemingly uncorrelated risks are, in fact, correlated. In times of stress, correlations converge.

Neither of those key truths is new or newly discovered. The same shifts were seen in 1998, when emerging market problems and the collapse of the Russian rouble coincided with the failure of Long-Term Capital Management.

No less an authority than Alan Greenspan has recently reminded us that: ‘Negative correlations among asset classes, so evident during an expansion, can collapse as all asset prices fall together, undermining the strategy of improving risk/reward trade-offs through diversification'.6

LESSONS LEARNED

The above-mentioned deficiencies can be summarised as follows:
  • applying VaR models to situations for which they are ill-suited;
  • scaling up the quantiles when structural changes should be examined instead; and
  • applying correlations observed in benign conditions to stressed conditions.
All three of these deficiencies are problems in when and how models are used. They are not problems with the models themselves.

For far too many risk modellers, the overwhelming desire for ‘scientific' methods and for probability information drives them to use flawed models as ‘second-best solutions'. The oft-repeated assertion is that flawed VaR risk measures are better than assumption-ridden deterministic model results.

Is that assertion really true? Sometimes it helps to go back to first principles. This is one of those times. Way back in 1956, Richard Lipsey and Kelvin Lancaster published an undeservedly obscure paper describing a general theory of the second best.7 In a nutshell, the general theory of the second best shows that a next-best solution can be better than an optimal solution when jumping to the optimal solution requires including an unobtainable or inaccurate variable.

Historical and correlation VaR are perfectly fine tools. So is extreme value theory. These techniques work very well for assessing some types of risk under some conditions. But when it is not possible to provide VaR models with anything close to optimal inputs, it is time to step back and admit that the second-best solution — no information about the loss probability — may be better than what appears to be an optimal solution.

APPLYING THESE PERSPECTIVES FOR BETTER RISK MEASUREMENT AND MANAGEMENT

If historical VaR, correlation VaR, extreme value theory or power laws are not the right answer, what is? What is needed are carefully designed, deterministic scenarios that reflect structural changes and can, in turn, be used for stress-testing.

Risk managers in the insurance industry have understood this for years. ‘Insurers looking at, say, catastrophic risks have relatively few data points and thus tend to have a healthy skepticism of models. They more often brainstorm their own [deterministic] scenarios'.8

Risk managers can then evaluate the risk estimates from such deterministic stress-tests. For liquidity risk, institution and group-wide limits can be put in place. Best practice for liquidity limits is to focus on forecasted survival horizons for each stress scenario.
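A survival-horizon limit of this kind reduces to simple deterministic arithmetic. The sketch below is a hypothetical example; the scenario figures and the 30-day forecast horizon are invented for illustration:

```python
def survival_horizon(buffer, daily_net_outflows):
    """Number of days a liquidity buffer survives a scenario's projected
    net cash outflows. Returns the first day on which cumulative outflows
    exhaust the buffer, or None if the buffer outlasts the forecast horizon."""
    cumulative = 0.0
    for day, outflow in enumerate(daily_net_outflows, start=1):
        cumulative += outflow
        if cumulative > buffer:
            return day
    return None

# Illustrative name-specific run: deposit outflows spike in week one,
# then settle at an elevated level (figures in millions).
scenario = [80, 120, 150, 150, 100] + [40] * 25

print(survival_horizon(500, scenario))   # → 5 (buffer exhausted on day 5)
print(survival_horizon(2000, scenario))  # → None (buffer survives the horizon)
```

A limit might then require, say, that the survival horizon exceed 30 days in every named stress scenario; the scenario design carries the analytical weight, and the arithmetic is deliberately transparent.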

Risk controls for investments can be formulated in the same way. Limits on the estimated risk exposures for specific investments can be derived from evaluations of how forecasted values for those investments behave in deterministic stress scenarios.

In the final analysis, the user — not the tool — makes the difference between a job done and a job well done.

References

1. The Economist (2008) ‘Special Briefing: What went wrong', The Economist, 19th March.
2. Lewis, M. (2008) ‘What Wall Street's CEOs don't know can kill you', 26th March, available at: http://www.bloomberg.com/apps/news?pid=email_en&refer=&sid=aSE8yLAyALNQ (accessed 25 March 2009).
3. The Economist (2007), 3rd March, p. 78.
4. Dr Robert Fiedler, in conversation with the author, 2000.
5. Hume, D., ‘The Problem of Induction'.
6. Greenspan, A. (2008) ‘We will never have a perfect model of risk', Financial Times, 17th March, p. 9.
7. Lipsey, R. G. and Lancaster, K. (1956) ‘The general theory of second best', The Review of Economic Studies, Vol. 24, No. 1, pp. 11–32.
8. The Economist (2008) ‘Next year's model', The Economist, 1st March, p. 81.



