Modern financial regulation has increasingly come to rely upon the imposition of capital standards. To some degree capital regulation has become synonymous with prudential regulation. At its most basic level, capital, in the form of shareholder equity, provides a cushion to cover unexpected increases in liabilities or decreases in asset values.

Creditors, including policyholders in the case of insurance, could of course absorb unexpected losses, but for a variety of reasons corporate shareholders, who are generally well diversified, are usually considered in the best position to carry that risk — at least in terms of being in a first-loss position.

Naturally a substantial amount of prudential regulation could go away if only governments were willing to impose losses on creditors in the event of insolvency. Doing so would also increase overall monitoring of financial institutions, since creditors would then have stronger incentives to monitor. A credible resolution regime can assist in this regard, but ultimately government officials must commit not to rescue creditors, or be prohibited from doing so.

Historically, capital or solvency standards were either vague or based upon simple leverage ratios. A simple leverage ratio could, for example, require capital equal to 5 percent of total assets. Capital requirements can also, in part, take the form of a fixed minimum level of owner equity.

For instance, insurance companies typically are required to maintain $2 to $3 million in minimum capital regardless of activity levels.1 All U.S. states require capital in excess of these minimums, as determined by risk weighting. The National Association of Insurance Commissioners (NAIC) developed model risk-based capital standards for life-health and property-liability companies in the early 1990s. Specific risk-based standards were further developed for health insurers in the late 1990s. Federal bank regulators embraced a similar approach under the Basel Accords.

Risk-weighted capital standards begin from the perhaps obvious observation that not all assets and liabilities are equally risky. NAIC’s model formula for company risk includes the following elements: 1) interest rate risk; 2) asset credit risk; 3) pricing/underwriting risk; 4) business risk; and 5) off-balance-sheet guarantees.
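To make the mechanics concrete, here is a stylized sketch (in Python) of how per-category risk charges might be combined into a single requirement. This is not the actual NAIC formula, which differs by line of business and in many details; the dollar charges and the square-root-of-sum-of-squares combination are illustrative assumptions only.

    # A stylized sketch of combining risk-based capital charges.
    # Not the actual NAIC formula; charges are hypothetical ($ millions).
    from math import sqrt

    def combined_requirement(charges):
        # Square-root-of-sum-of-squares combination: a covariance-style
        # adjustment that credits diversification across risk categories.
        return sqrt(sum(c ** 2 for c in charges.values()))

    charges = {
        "interest_rate_risk": 20.0,
        "asset_credit_risk": 35.0,
        "underwriting_risk": 50.0,
        "business_risk": 10.0,
        "off_balance_sheet": 5.0,
    }

    print(f"combined requirement: ${combined_requirement(charges):.1f}m")
    # Roughly $65.2m, well below the $120m simple sum of the charges.

Because the risk categories are assumed unlikely to go wrong at the same time, the combined requirement comes out smaller than the simple sum of the individual charges.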

In addition to setting capital standards, risk models can also be used for projecting cash flows (liquidity), calculating the fair value of liabilities, pricing risk (and hence premiums), and general business planning. Although the discussion here focuses on capital standards, the concerns raised apply to these other areas as well.

Regulatory risk models are basically computer algorithms that estimate potential outcomes.2 The more complex models may also offer a range of probabilities for those outcomes. The goal of these models is to help calculate the capital that would be required to avoid insolvency, with a given degree of certainty.

These models can also be reverse engineered, in the sense that a target probability of failure is chosen first and the level of capital needed to achieve that target is then estimated. As an aside, few, if any, policymakers or industry participants would advocate a target probability of failure of zero. Embedded in current capital risk models is some low, but positive, probability of failure. A zero probability is likely both impossible and extremely costly.
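As a rough illustration of that reverse engineering, the sketch below simulates a hypothetical loss distribution and reads off the capital needed to hold the probability of insolvency at a chosen target. The lognormal claims, the premium figure, and the seed are made-up assumptions, not anything drawn from an actual regulatory model.

    # Illustrative "reverse engineering" of capital from a target failure
    # probability. The loss distribution is hypothetical.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def required_capital(target_failure_prob, n_sims=1_000_000):
        # Capital such that P(annual loss > capital) is about the target.
        claims = rng.lognormal(mean=4.0, sigma=0.5, size=n_sims)  # $ millions
        premiums = 60.0
        net_loss = claims - premiums
        return np.quantile(net_loss, 1.0 - target_failure_prob)

    for p in (0.05, 0.01, 0.001):
        print(f"target failure probability {p:.1%}: capital ~ ${required_capital(p):.0f}m")

Tightening the target from a 1-in-20 to a 1-in-1,000 chance of failure raises the required capital sharply, which is one reason no one seriously proposes a target of zero.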

If a risk model is wrong, which is certain to be the case to some degree, then financial companies could be at much greater risk of failure than recognized, or they could hold excess capital and misallocate resources. There are at least three general avenues by which a risk-based capital model can be “wrong.”

The first of these errors is the model itself. A model is a simplified version of reality. Does the simplification eliminate the actual drivers of failure? Is the model, even when simplified, a reliable guide to reality? For instance, in the area of mortgage regulation, it is well established that borrower credit and loan-to-value ratios are the predominant drivers of residential mortgage default; yet the qualified residential mortgage rule promulgated under Dodd-Frank abandoned these factors for less predictive, yet more politically acceptable, drivers of default.3 A similar model choice was incorporated in the treatment of sovereign default in the Basel II framework.

It had been well understood before the Euro crisis that Greece had a long history of serial default, yet regulators chose to assign a zero risk weight to Greek sovereign debt. Put simply, regulators in these instances chose models of financial risk they knew to be faulty. Perhaps more troubling is that many regulatory models are flawed in a manner not widely recognized: Donald Rumsfeld’s “unknown unknowns.”

While we can never really know if a particular model is “correct,” we can gauge the assumptions behind that model. Those assumptions are likely to be built upon other models as well as observations of past data.

One of the biggest errors going into the crisis was the widely used assumption that loss probabilities were “normally distributed,” that is, that they followed the well-known bell-shaped probability distribution. It has long been recognized that the normal distribution has “thin” tails, which assign a relatively low probability to extreme observations. Distributions with “fatter” tails have also long been understood to better represent financial markets.
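The difference matters enormously in the tails. The sketch below compares the probability of an extreme loss under a normal distribution and under a fat-tailed Student-t scaled to the same volatility; the volatility and degrees of freedom are arbitrary choices for illustration.

    # Tail probabilities: thin-tailed normal vs. a fat-tailed Student-t
    # scaled to the same standard deviation. Parameters are illustrative.
    import numpy as np
    from scipy import stats

    sigma = 1.0                                # stand-in for daily volatility
    df = 3                                     # low degrees of freedom => fat tails
    t_scale = sigma / np.sqrt(df / (df - 2))   # match the standard deviation

    for k in (3, 5, 8):                        # size of the loss in "sigmas"
        p_normal = stats.norm.sf(k * sigma, scale=sigma)
        p_fat = stats.t.sf(k * sigma, df, scale=t_scale)
        print(f"{k}-sigma loss: normal {p_normal:.1e} vs fat-tailed {p_fat:.1e}")

Under the normal assumption an 8-sigma loss is, for practical purposes, impossible; under the fat-tailed alternative it is merely rare.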

The choice of a normal distribution was made not for the sake of accuracy but for the sake of tractability. Normal distributions have a number of characteristics that make them mathematically convenient in ways that other distributions are not.

Quite simply, going into the crisis certain model characteristics were chosen because they yielded tractable, but inaccurate, solutions, whereas more accurate assumptions would have offered less “useful” solutions.

Even if one has developed a model that includes the appropriate variables, both the impact of those variables and the relationship between those variables could be inaccurate. For instance one can look at mortgage defaults as a function of house prices.

We could represent this in functional form as D = f(P) = B × P, where D is defaults, P is house prices, and B is the sensitivity of defaults to prices. It was well understood before the financial crisis that house prices influenced defaults, so that part of the model was sound. What was not sound was the assumed relationship B: the sensitivity of mortgage defaults to house prices was significantly underestimated prior to the crisis.
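A toy version of that mistake, with entirely made-up numbers, is sketched below: the model form D = B × P is kept, but the assumed sensitivity B is too low, so the capital held against a given house-price decline falls far short of what is needed.

    # Hypothetical illustration of underestimating the sensitivity B in a
    # simple default model D = B * price_decline. All numbers are invented.
    def expected_loss(price_decline, B, balance, loss_given_default=0.4):
        # Expected credit loss on a mortgage portfolio ($ millions).
        default_rate = B * price_decline
        return balance * default_rate * loss_given_default

    balance = 1_000          # mortgage portfolio, $ millions
    price_decline = 0.30     # a 30 percent fall in house prices

    capital_held = expected_loss(price_decline, B=0.10, balance=balance)    # assumed B
    capital_needed = expected_loss(price_decline, B=0.50, balance=balance)  # realized B

    print(f"capital held:   ${capital_held:.0f}m")
    print(f"capital needed: ${capital_needed:.0f}m")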

As a result, financial institutions held far less capital behind residential mortgages than was needed. One reason this relationship was misunderstood was the lack of sufficient data on subprime mortgage performance. Extensive data on prime mortgage performance go back several decades, but subprime data go back only to the early 1990s. Analysts commonly used estimates derived from prime mortgages in their calculations of subprime performance.

These estimates proved wildly inaccurate. But since they gave what at the time appeared to be reasonable results, they were widely accepted.

The relationship between variables is also a crucial component of financial modeling. Many analysts believed that mortgage defaults would have little correlation across U.S. states. History had suggested that Texas or New England could have a contained housing boom and bust. Yet, as we painfully learned, defaults can be highly correlated. With the expansion of subprime lending, mortgage defaults became increasingly sensitive to national economic trends. Losses across insurance lines may also be more highly correlated than is recognized. Large natural disasters can result in both life and property claims.

Getting the correlations between variables wrong is one of the most significant flaws in modeling. Of course, as we learn in Finance 101, holding a portfolio of different assets, of varying risk, can result in a portfolio with a total risk lower than that of its parts. A risk model that does not appropriately recognize the benefits of asset (and liability) diversification can not only result in excessive levels of capital but also serve as a deterrent to diversification, ultimately increasing, rather than reducing, risk.
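A small sketch of the correlation problem, using two hypothetical business lines with equal stand-alone loss volatility, is below. The volatilities and correlation figures are invented; the point is only how much the assumed correlation drives the combined risk.

    # Illustrative effect of the assumed correlation on combined risk.
    # Two business lines with equal loss volatility; numbers are hypothetical.
    import numpy as np

    def portfolio_volatility(vols, corr):
        # Standard deviation of combined losses ($ millions).
        cov = np.outer(vols, vols) * corr
        ones = np.ones(len(vols))
        return float(np.sqrt(ones @ cov @ ones))

    vols = np.array([100.0, 100.0])

    modeled = np.array([[1.0, 0.1], [0.1, 1.0]])   # near-independent, as assumed
    realized = np.array([[1.0, 0.9], [0.9, 1.0]])  # highly correlated in a crisis

    print("sum of stand-alone volatilities: $200m")
    print(f"modeled combined volatility:     ${portfolio_volatility(vols, modeled):.0f}m")
    print(f"realized combined volatility:    ${portfolio_volatility(vols, realized):.0f}m")

The diversification benefit is real, but most of it evaporates when the correlation turns out to be high, so capital set against the modeled figure is too little.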

We are all familiar with the phrase “garbage in, garbage out,” which reminds us that even the best theoretical model depends on the quality of its data for its predictive power. One example: a significant portion of mortgages on second homes were coded as being for a primary residence.

Mortgage default rates are considerably higher for investment and vacation properties. If mortgages on those properties are instead recorded as being for primary residences, whether through fraud or sloppy reporting, then expected defaults will be underestimated. Similarly, if loan-to-value calculations omit second mortgages, default rates will also be underestimated.
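A trivial, hypothetical illustration of the second-lien problem:

    # Hypothetical combined loan-to-value calculation. Omitting the second
    # lien makes the borrower's equity cushion look far larger than it is.
    home_value = 300_000
    first_mortgage = 240_000
    second_mortgage = 45_000   # home-equity loan missing from the data

    ltv_reported = first_mortgage / home_value
    ltv_combined = (first_mortgage + second_mortgage) / home_value
    print(f"reported LTV: {ltv_reported:.0%}, true combined LTV: {ltv_combined:.0%}")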

Such problems are not limited to mortgages. For instance, the flood maps used in the National Flood Insurance Program have long-standing flaws, including being out of date. An overreliance on such maps can lead to estimates of actual flood risk that are grossly inaccurate, both for individual properties and for the program as a whole.

A model can also be flawed in its choice of a risk measure. A commonly used measure of risk in banking is Value-at-Risk (VaR). A VaR measure is generally reported in terms of X days out of 100. For example, a 99 percent VaR attempts to measure the worst loss that can be expected on the best 99 out of a given 100 days. Obviously this measure tells us nothing about that 1 day out of 100 that could sink the firm. In other words, VaR by design ignores tail risk.

This design flaw is made all the worse if the VaR measure is based upon a normal distribution, which underestimates financial tail risk. A parallel in the insurance world would be extreme natural disasters, such as Hurricane Katrina. The approach of VaR is essentially to ask: what is the worst loss we face if we assume there are no Katrinas, Sandys, or 9/11s? Obviously such an approach leaves a company (and an industry) quite vulnerable to exactly those tail risks.
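The sketch below makes both points with simulated daily losses: the 99 percent VaR says nothing about the average size of the losses beyond it, and a VaR computed from thin-tailed data badly understates what the fat-tailed version reveals. The volatility, degrees of freedom, and seed are arbitrary illustrative choices.

    # 99 percent VaR versus the tail losses it ignores, under thin-tailed
    # (normal) and fat-tailed (Student-t) simulated losses with the same
    # volatility. All parameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 1_000_000
    sigma = 10.0   # daily loss volatility, $ millions
    df = 3

    normal_losses = rng.normal(0.0, sigma, n)
    fat_losses = rng.standard_t(df, n) * sigma / np.sqrt(df / (df - 2))

    for name, losses in (("normal", normal_losses), ("fat-tailed", fat_losses)):
        var99 = np.quantile(losses, 0.99)        # worst loss on the best 99 of 100 days
        beyond = losses[losses > var99].mean()   # average loss on the remaining days
        print(f"{name:>10}: 99% VaR ${var99:.1f}m, average loss beyond VaR ${beyond:.1f}m")

In both cases the number that matters, the average loss on the days VaR ignores, is well above the VaR itself, and the gap is far wider when the tails are fat.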

None of this is to say we should abandon models. Regulators, academics, risk managers, and executives should continue to improve current models. A rare silver lining of the 2008 crisis is that we now have a severe real-world stress event against which to test financial models. But we must also not let models substitute for common sense and judgment.

A model that cannot be explained to executives, market participants, and regulators will be of limited value. We must also be wary of letting a handful of models come to dominate the financial services industry.

Such dominance could easily result in institutions herding into similar assets, amplifying the damage from fire sales.4 A robust financial system is one with a great diversity of business models and balance sheets. The current obsession with financial models, as demonstrated by the Federal Reserve’s stress tests, runs the very real risk of undermining financial stability rather than improving it.

Again, the point is not to stop modeling risk. We all, either implicitly or explicitly, make decisions based upon some “model” of the world. The point is to approach such models with considerable skepticism and modesty.

ENDNOTES:

  1. For comparisons of minimum capital standards by state, see: http://www.naic.org/documents/industry_ucaa_chart_min_capital_surplus.pdf
  2. For a summary of how internal models are used in insurance regulation, see: http://www.naic.org/cipr_newsletter_archive/vol9_internal_models.pdf
  3. See Floros and White. 2014. Qualified Residential Mortgages and Default Risk. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2480579
  4. See Kevin Dowd. Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve. Policy Analysis. Cato Institute. September 2014. https://www.cato.org/publications/policy-analysis/math-gone-mad