Market Tremors — Chapter 1

The following excerpt is from the upcoming book “Market Tremors: Quantifying Structural Risks in Modern Financial Markets” by Hari Krishnan and Ash Bennington.

“People who count their chickens before they are hatched act very wisely because chickens run about so absurdly that it’s impossible to count them accurately.”
— Oscar Wilde

As we look out across the spectrum of global markets in early 2021, there are no visible signs of overt distress. In fact, we see the opposite: many markets appear ‘Zombified’ — buoyed by Central Bank intervention while saddled with astronomical levels of public and private debt as yields remain pinned to the zero bound. Meanwhile, many veteran investors are bewildered by asset prices that no longer seem linked to traditional valuation metrics. On a nearly continuous basis, the high priests of finance try to justify the most recent rally on financial news networks to a growing legion of benumbed investors.

Against this surreal but seemingly benign financial backdrop, the authors of this book find themselves wrestling with several recurring questions: Are there circumstances where market volatility is low and sticky while a rising danger lurks beneath the surface? Can we identify structurally weak asset classes where a small price shock can spiral into a major sell-off? If so, are there inexpensive ways to defend against price meltdowns and liquidations before they occur in markets?

As we will discover in the chapters that follow, the answer is a qualified “Yes!” There are many situations where investors can improve upon standard risk estimates, based on their knowledge of major players in a given market and how they are likely to act. In service of that goal, this book is intended for readers who wish to understand and profit from situations where risk is rising in a financial network while credit spreads and realized volatility remain low.

To begin this journey, it is worth reflecting upon the credit cycle. It is widely understood that leverage and volatility tend to move in opposite directions in the later stages of the credit cycle. Leverage is high, yet equity prices and credit spreads are stable. A large amount of cash and credit is available and can be deployed into the equity and corporate bond markets. Investors have the firepower to “buy the dips”, which dampens downside volatility until the cycle breaks. Historically, the US credit cycle tended to last six to eight years, measured from peak to peak. We could say with some degree of accuracy where we were in the cycle. Asset booms and busts were somewhat predictable, given that they corresponded to peaks and troughs in the quantity of credit available. Since 2008, however, this template has been altered by Central Banks, who now seem to equate economic stability with low asset price volatility. The expansionary phase of the current credit cycle has become extremely long in the tooth, given the ever increasing presence of the Fed.

While credit expansion usually has a stabilizing impact on asset prices, even that stability has a limit. If a large enough price shock occurs, leveraged agents will be forced to liquidate their positions as they get hit by margin calls and breach their risk limits. In recent years, banks and prime brokers have become increasingly risk averse. Brokerage houses set tighter position limits for their clients than before. This has important implications. An initial wave of selling can easily cause a cascade of forced liquidations, as other investors have to cut their positions as they plunge through their loss limits. Within a Zombified market, prices can plunge very rapidly, at least in the short term. The Covid-19 induced sell off in February and March 2020 started from a recent high in the S&P 500 and a very low volatility base.

‘Zombification’ of Modern Markets

Since the end of the Global Financial Crisis (GFC), the tendency for volatility and leverage to move in opposite directions has become even more extreme — as leverage rises, volatility declines in markets awash in liquidity. (Note that this stylized fact has not strictly applied to global equity markets in 2020, but was largely the case in the previous decade.) Before the GFC, the Fed’s balance sheet was just under $900 billion; at the time of this writing, in early 2021, the Fed’s balance sheet has ballooned to over $7 trillion. The quantity of debt is now larger than ever — and yet the volatility of most asset classes has been persistently low.

This low level of volatility may seem puzzling since leverage is risk, in a certain sense. (Without leverage, by definition, there could be no defaults or margin calls.) We now find ourselves in an environment of structurally low volatility across asset classes, bloated balance sheets, and negative yields. Bank deposits provide what is essentially a 0% return to savers, forcing investors to look to other, riskier investments for yield. Not surprisingly, long positioning in risky strategies has become over-extended because of the lack of suitable investment alternatives. This “volatility paradox”, where market fragility is high, but overall volatility is low, has become a persistent feature of modern markets. Historically, this paradox has been restricted to the later stages of the economic cycle, where it creates a toxic blend of plentiful credit and investor complacency.

As an example, we can examine some of the forces at play in a simplified version of a housing bubble. During the bubble, homeowners often borrow more money per dollar of equity, causing aggregate loan-to-value (LTV) ratios to rise. This type of borrowing is a function of market sentiment: investors and lenders are convinced that prices will continue to go up, so they borrow and lend more. This is based on the dangerous assumption that higher housing prices in the future will push LTV ratios back down to more reasonable levels. Everyone seems to make solvency assumptions based on extrapolations from recent returns — and seems dangerously unaware of the risks inherent in the broader debt cycle.

We can think of this problem a bit more mechanistically. Easy credit generally increases the aggregate demand for assets. As a consequence, a fresh supply of new participants enters the market and bids up asset prices, because they fear missing out on the price gains as prices heat up. This liquidity, provided by the new buyers, dampens downside volatility. As the rally continues, it becomes possible to borrow even more, given the rising value of the underlying collateral. It is worth observing that we live in a world where lending has become increasingly collateralized. The process becomes an archetypal, positive feedback loop.
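To make the feedback loop concrete, here is a toy Python sketch of the mechanism described above. The loan-to-value target, price sensitivity and starting values are purely illustrative assumptions, not estimates for any real housing market.

```python
# A toy feedback loop between collateral values and lending (all parameters are
# illustrative assumptions, not calibrated to any market).
price, debt = 100.0, 80.0      # house price and loan outstanding (LTV starts at 0.80)
target_ltv = 0.80              # lenders extend credit up to 80% of collateral value
base_drift = 0.02              # price appreciation absent any new credit
sensitivity = 0.50             # extra price appreciation per unit of new credit / price

for year in range(1, 6):
    new_credit = max(target_ltv * price - debt, 0.0)   # re-lever against higher collateral
    debt += new_credit
    price *= 1.0 + base_drift + sensitivity * new_credit / price
    print(f"year {year}: price={price:7.2f}  debt={debt:7.2f}  LTV={debt / price:.3f}")

# Growth converges toward roughly base_drift / (1 - sensitivity * target_ltv),
# i.e. the credit channel amplifies the 2% base drift toward about 3.3% here.
```

Debt and prices march upward together while the measured LTV stays pinned near its target, which is exactly why the arrangement looks safe until the drift reverses.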

A model describing precisely this phenomenon has been developed by Thurner ( ) and others. Minsky ( ) was one of the earliest academics to identify the problem. The volatility paradox arises as a function of the feedback between prices, risk appetite, and access to credit. While “average” returns are compressed into a relatively narrow range, extreme event risk grows ever larger. With enough leverage in the system, even a moderate-sized sell-off can wash a large number of over-leveraged investors out of the market. Ultimately, the sell-off can cause a nasty chain of further selling — and a potential crash.

The Challenge to Investors

No cycle lasts forever, even a distorted one, and this cycle will need to end at some point as well. But until the end of this cycle arrives, the volatility paradox can persist for a surprisingly long time. If pressures on balance sheets are high enough, the risk is that the cycle will end in a spectacular collapse. This brings us to an important point. Market Zombification presents a serious challenge to active managers. Intermittent mega-spikes in the VIX and other volatility indices increasingly occur from a low volatility base — often without much warning. This forces investors to make a difficult choice: If they stay out of the market, they collect no return; however, if they buy and hold equities or risky bonds, they may collect a small premium, but have to accept the risk of a large and sudden drawdown in return.

Taking on an over-extended market by selling futures against it is a dangerous alternative. Frothy markets have a tendency to become even frothier in the near term. Moreover, the timing of a market reversal is nearly impossible to predict in advance, which is why shorting bubbles can lead to catastrophic losses. Finally, buying insurance through the options market might seem to be a theoretically sound idea, and it is, given enough skill and a long enough horizon. However, options strategies that decay over time require immense patience from investors in an environment where many other investors are piling on risk and Central Banks are standing guard. While it is true that active managers can blend long and short volatility strategies in their portfolios, the core problem remains an intractable one.

An Analogy with Waiting Times

At some point, the credit cycle will turn, dragging equities and other risky assets into a bear market. Prices may drop quickly without recovering. If yields normalize somewhat, bonds may also sell off. This will be doubly toxic if we see a wave of defaults, as institutions are no longer able to finance their debt. Institutions that target a fixed return without too much regard for risk (think pensions and insurance companies) will take large losses in this scenario. It may turn out that options-based hedging is the only truly diversifying strategy left to investors if the stock and bond bubbles burst simultaneously.

The trouble is that we don’t know when the cycle will turn. Many observers with a bearish disposition argue that every passing day makes the risk of an imminent liquidation more likely. This may well be true, but a simple analogy shows the dangers in this assumption. Imagine that you are waiting for a friend. If someone issued a guarantee that your friend would be no more than an hour late, the odds that he or she will arrive in the next 5 minutes would increase rapidly over time. After 55 minutes, the probability of arrival in the next 5 would be 100%. However, this doesn’t correspond with experience. The longer you are kept waiting, the less likely that your friend will be coming anytime soon. Something material may have happened, which has qualitatively changed the distribution of arrival times.
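The arithmetic behind the analogy is easy to check. The short Python sketch below compares the two cases: a “guaranteed” arrival time, modeled as uniform on a 60-minute window, and a heavy-tailed arrival time, modeled here as a Pareto distribution. Both distributional choices are our own illustrative assumptions.

```python
# Conditional probability of the friend arriving within the next 5 minutes,
# given that you are still waiting at minute t (illustrative parameters).
def p_next5_uniform(t, bound=60.0):
    """Arrival time uniform on [0, bound]: the conditional odds rise toward 1."""
    remaining = bound - t
    return min(5.0 / remaining, 1.0) if remaining > 0 else 1.0

def p_next5_pareto(t, t_min=1.0, alpha=1.5):
    """Heavy-tailed (Pareto) arrival time: the conditional odds fall over time."""
    t = max(t, t_min)
    return 1.0 - (t / (t + 5.0)) ** alpha

for t in (5, 25, 45, 55):
    print(f"minute {t:>2}: uniform {p_next5_uniform(t):5.2f}   heavy-tailed {p_next5_pareto(t):5.2f}")
```

Under the bounded distribution the probability climbs to 1.0 by minute 55, exactly as in the guarantee; under the heavy-tailed one it keeps shrinking, matching the everyday intuition that a long wait is bad news.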

Qualitative Features of Zombification

In this new era of increased systemic risk, it appears that the economic cycle has been damaged — perhaps permanently. As we have suggested above, the price action we see across markets reflects this new reality. Equity sell-offs, such as the events we observed in February 2018 and December of 2018, now occur spontaneously, often materializing out of nowhere during periods of low volatility. While these sell-offs are quick to arise, they also seem to be quickly forgotten by the financial media and even market participants.

Historically, this was not always the case. The VIX and other implied volatility indicators tended to decay quite slowly after a spike. The market at large had a longer memory. Slow decay was reflected in the various econometric models that were developed by practitioners and academics alike. In the current market, however, “melt ups” are almost as violent as the meltdowns and V-shaped recoveries are increasingly common. Volatility tends to collapse quickly as investors jump back into “risk-on” mode, in an attempt to recover profits and make up for lost time in the markets.

Viewed through a wider lens, equities and fixed income have both been trending upward for an unusually long time. As of this writing, the S&P 500 has increased by a factor of five since 2009, while US bond prices have enjoyed nearly 30 years of steady positive performance. Credit markets have been underpinned by several rounds of Central Bank monetary easing in each of the major economies. Since corporate credit and equity are linked, Central Banks have effectively acted as a backstop on the S&P 500 and other large-cap equity indices. In the meantime, US government bonds have received a nearly continuous bid from institutions. In a deflationary environment, where loans are increasingly collateralized, the demand for sovereign debt has been remarkably high.

Figure 1.1. 30 Year Secular Bear Market for Government Yields.

Moreover, the zero interest rate policies implemented by Central Banks have incentivized excessive risk-taking in other areas. Rather than maintaining their strategic asset allocation weightings and simply accepting the lower forward returns that the current environment offers, investors have piled in en masse into riskier corporate bonds, illiquid assets and various short volatility strategies. Consequently, excess demand has reduced the amount of compensation they now receive for bearing risk. For example, many pensions with an annual return target of 6% to 7% have simply ramped up the credit and liquidity risk exposure in their portfolios, with something of a cavalier attitude toward extreme event risk.

Many observers, including the authors of this book, do not believe that these dynamics are sustainable indefinitely. In general, central banks can control either their domestic yield curves or their currency valuations, but not both at the same time. Lowering benchmark interest rates can increase the velocity of money within a sluggish economy, assuming that sentiment is not too bad. However, that increased velocity tends to come at a cost. Easy money policies have historically tended to weaken currency values, sometimes to disastrous effect.

Under the current regime, it has taken a great deal of Central Bank coordination to maintain a reasonable level of stability across the major currency pairs. In the meantime, alternative forms of exchange with a fixed supply, such as gold and Bitcoin, have rallied. The Central Banks have collectively walked a tightrope in their activities. We would argue that synchronized easing is an inherently unstable process, as the financial system is highly non-linear and sentiment driven. The fault lines for a major dislocation in currency or bond markets are now in place.

The Dilemma for Institutional Investors

Artificially low yield curves have forced many investors into high-risk areas. We might, for example, consider the case of a hypothetical European pension fund that is currently underfunded. In this example, using historical yields as our reference point, the situation has become dire. For the sake of simplicity, assume that the pension fund needs an average forward return of 4% on an annualized basis to meet its expected future liabilities. Current government bond yields fall well short of that threshold, as Figure 1.1 clearly indicates.

One potential investing strategy would be to replace Euroland debt with US Treasury bonds. US bonds offer a modestly positive return over time. A 1% return might be a drag on a 4% return target — but something is better than nothing, right? Unfortunately, the added yield from US Treasuries introduces currency risk for European investors. Any attempt to hedge dollars back to Euros will cancel out the yield pickup from US Treasuries. It follows that the pension fund in question needs to be an implicit currency speculator in order to get some yield from this strategy.
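A rough calculation shows why the hedge eats the yield. Under covered interest rate parity, the cost of rolling the currency hedge is approximately the short-rate differential between the two currencies. The numbers below are illustrative assumptions, not market quotes.

```python
# Hedged-yield arithmetic under covered interest rate parity
# (all rates below are assumed for illustration).
ust_10y_yield  = 0.010    # unhedged US Treasury yield, 1.0%
usd_short_rate = 0.0025   # USD money-market rate embedded in the FX hedge
eur_short_rate = -0.0050  # EUR money-market rate

annual_hedge_cost = usd_short_rate - eur_short_rate   # cost of rolling EURUSD hedges
hedged_yield      = ust_10y_yield - annual_hedge_cost  # what the EUR-based investor keeps

print(f"hedge cost  : {annual_hedge_cost:.2%}")
print(f"hedged yield: {hedged_yield:.2%}  (vs. the 4% required return)")
```

The hedged yield lands back near domestic Euroland levels, which is why the strategy only adds return if the currency exposure is left open.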

The other, far riskier alternative is to buy lower-quality credits, moving further down the capital structure in the process. This requires an invocation of the so-called “Fed put,” which is now the stuff of legend. The theory goes that Central Banks will bail out anything and everything that might be large enough to cause collateral damage to the economy. Following this line of thought, Central Banks have become an across-the-board backstop for virtually all risky assets.

If this theory were correct, it would be perfectly logical to buy the highest-yielding loans possible. In the authors’ view, however, this smacks of overconfidence. It is impossible to say with certainty what the Fed and other Central Banks might do if push comes to shove in the credit markets. The magnitude of QE required to calm things down may be met with political resistance, among a host of other factors. What we do know is that many large buy-side investors have been forced to take on enormous levels of risk in an effort to generate high single-digit returns, when comparable returns could have been easily achieved with government bonds 20 years ago.

Given that investors crave yield, corporations have been happy to supply it. The following graph tracks the quantity of US corporate debt issuance over the past 25 years.

Figure 1.2. Historical Time Series of US Corporate Bond Issuance.

If we drill down a bit, we can see that companies that barely qualify as investment grade have been particularly active in their debt issuance.

Figure 1.3. Historical Time Series of US High Yield Debt Issuance.

The indiscriminate search for yield has offered enormous benefits to companies that are large enough to securitize their debt. Persistently low yields have led to narrowing credit spreads, as investors are willing to accept a large amount of risk per unit of incremental return. This, in turn, has allowed corporate treasury departments to issue new bonds with low coupons, reducing the burden of servicing their debt. The large overhang of debt and leverage in capital markets has had destabilizing effects on the financial network.

External and Network Risks

We now need to define a few key terms that will be helpful in our characterization of modern markets. Concisely, moderate exogenous shocks can drive increasingly large endogenous liquidations and squeezes. At the risk of stating the obvious, endogenous risks come from within the financial system, emerging from the complex interaction of agents who form the network. By contrast, exogenous risks affect prices from the outside and can arise from a wide variety of sources, such as geopolitical events and changes in technology.

There are grey areas in this coarse decomposition. Corporate earnings, for example, have both an exogenous and endogenous component. On the one hand, corporate earnings constitute news flow that affects prices once they are released (exogenous); on the other, companies are part of a global financial network, and their earnings are a function of transactions within the network (endogenous).

Endogenous network risks largely arise from a combination of factors: complex counterparty exposures, excessive leverage and overly concentrated exposure to certain asset classes or strategies. Counterparty risk played a major role in the Global Financial Crisis. It was impossible to untangle the network enough to know how much exposure to the mortgage markets a given bank faced. This caused the short-term financing markets to seize up, as the major banks doubted one another’s solvency. These markets are the lifeblood of the financial system. Leverage and over-exposure are loosely connected: when credit in the system is excessive, it eventually gets directed toward unproductive areas. This is the source of the various speculative bubbles we have seen over time. However, positioning risk can play a role even when Central Banks are not particularly dovish, e.g. when investors sell their core positions to chase returns in another asset class.

We can represent the financial system visually as a large graph. It consists of circles, or “nodes”, of variable size, and lines between the nodes. The lines can also have variable width, based on the connection strength between two nodes. The diagram below provides a stylized view of the global financial network.

Figure 1.4. Slightly Cartoonish Representation of the Global Financial Network. Courtesy www.interaction-design.org.

Nodes are agents in the system, such as governments, banks, companies, and households. When two agents transact with each other, they are joined by a line. Banks are the largest nodes, based on the size of their balance sheet and the sheer number of connections with corporations, individuals, and other financial institutions. Banks are similar to major airport hubs, as a disproportionately large number of financial transactions are directed through them. Market makers, including those in the algorithmic trading space, are also large nodes, based on the percentage of order flow they service.
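As a concrete illustration, the short Python sketch below builds a toy version of such a graph with a handful of hypothetical nodes and exposures. The names and numbers are invented purely to show how hub-like nodes stand out.

```python
# A stylized financial network as a weighted graph (names, balance-sheet sizes
# and exposures are hypothetical, for illustration only).
nodes = {                      # node -> rough balance-sheet size
    "Central Bank": 7_000, "Bank A": 2_500, "Bank B": 1_800,
    "Pension Fund": 400, "Hedge Fund": 150, "Household": 1,
}
edges = {                      # (node, node) -> exposure / connection strength
    ("Central Bank", "Bank A"): 500, ("Central Bank", "Bank B"): 400,
    ("Bank A", "Bank B"): 300, ("Bank A", "Hedge Fund"): 120,
    ("Bank B", "Pension Fund"): 80, ("Bank A", "Household"): 1,
}

# Degree and total exposure identify the hub-like nodes (the banks) in the graph.
for name in nodes:
    links = [w for pair, w in edges.items() if name in pair]
    print(f"{name:<13} connections={len(links)}  total exposure={sum(links)}")
```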

Conceptually, a financial network can become dangerous if the web of connections becomes too complex and convoluted or certain nodes increase beyond a reasonable size. For example, if the global banking system has become too interconnected, a shock to any part of the system may propagate throughout it and cause damage to large swathes of the network. This offers a more precise description of the source of the defaults and large-scale price moves observed during the Global Financial Crisis than the loose “tangled web” image above. Naturally, Central Banks are going to be larger than the typical household, so the real question is whether a node or related collection of nodes is acting out of proportion to its usual size. Bloated nodes can destabilize the financial network, increasing the odds of an extreme price move, as we will see in the sections below.

Vulnerability, Not Predictability

This book is decidedly not a treatise on market timing: instead, we are largely concerned with market vulnerability. Over time horizons longer than just a few seconds, it is nearly impossible to know for certain when a sharp selloff is going to occur. Even when looking across the very short time scales of high frequency trading, price action has a large component of randomness. To frame the argument more generally, a limit order book provides an incomplete and imperfect overview of where prices are likely to go from one moment to the next. The implication is that timing is always going to be elusive. As time horizons increase, the problem rapidly becomes more intractable. On longer time scales, randomness plays an ever-larger role, and the range of potential outcomes increases.

What we can do, however, is identify market configurations that are dangerous from a structural standpoint. These are the “market tremors” that give this book its title. Markets are constantly exposed to random shocks of varying sizes that are inherently unpredictable and essentially beyond categorization. Even if we were able to create a comprehensive list of externalities that influence corporate earnings or economic growth, for example, other market participants might already have done the same analysis. Many of the external factors that drive price action are already baked into the market at any given point in time.

Given this understanding of the uncertainty in markets, what options remain for investors to pursue? To pose the question more specifically, if attempting to build a comprehensive and predictive economic model is a fool’s errand, where might we more profitably focus our attention?

A wiser course of action, in the authors’ view, is to accept that random shocks occur as a matter of course in markets — and to focus instead on regime identification. In the simplest terms, what we are looking for are the repeatable pre-conditions for a major liquidation or a spike in volatility. Specifically, we want to know in advance when a shock of moderate size is likely to have an unusually large market impact. Under these circumstances, realized volatility might be low, but disequilibrium lurks beneath the surface. To a certain extent, these vulnerable market setups can be identified since they tend to follow predictable patterns. This is a major theme which we will expand upon at length in this book.

When the interbank lending market breaks down, as it did during the Global Financial Crisis in 2008, two things generally happen. First, highly liquid assets that can be easily posted as collateral, such as Treasuries, rally hard; currencies required for global settlement — especially the US dollar — also rally, because dollars are needed to close positions. Second, strategies that expose market participants to equity or credit risk are liquidated. The notion of diversifying across multiple risk premia capture strategies becomes secondary.

To put the general thesis into more practical terms, the two high-risk setups we will examine in this book are the following:

First, when the amount of leverage in the system is unsustainably high. When leverage rises, the price of risk assets inflates. This can lead to an Everything Bubble, such as the one we have largely experienced for the past decade, where the prices of stocks and bonds have both risen dramatically. When leverage is high enough, even a moderate change in market conditions will force certain agents to either liquidate their positions or to hedge them aggressively. This offers a rough explanation for the extremely sharp, short-lived sell-offs we have seen in the past several years.

Second, when certain market participants are over-concentrated in a specific asset or class of assets. When this occurs, too much of the available supply of cash and credit has been deployed into a segment of a market, which causes an asset bubble to form there. These asset bubbles can easily burst after the last marginal buyers come in during the late stages of an exhausted market.

One manifestation of over-concentration is the “pain trade”. This phrase has gained a great deal of currency over the years. The pain trade is the one that will force the largest number of speculators out of the market in one go. In rising markets, there are actually two possible pain trades. If there is a large amount of tactical short interest, the pain trade can be a “melt up” where equity indices power through recent highs. Shorts have to cover their positions to avoid outsized losses. Otherwise, it tends to be a reversal, as momentum traders who have increased positions during the rally get flushed out. In bear markets, the pain trade oscillates quite rapidly between a rebound and a collapse. Shorts pile into downtrends but have tight risk controls. This can increase the degree of short-term mean reversion. Prices move sharply down; however, given a mild positive shock, momentum traders have to buy or cover their positions. Mean reversion is high as the market zigzags up and down. The next item on our agenda is to provide some intuition about how positioning risk arises.

The Two Asset Base Case

The following example provides a basic framework for understanding how we will be discussing positioning risk in this book. It is important to note that positioning risk is not observable in the historical price series for a given asset. In our discussions of this topic, we have used Bookstaber ( ) as a guide.

Imagine a highly simplified example where we own two assets, ‘Asset A’ and ‘Asset B’. Both assets are trading at $100 and have a realized volatility of 15%, measured over some past time interval. In the absence of any other information, assets A and B would appear to be equally risky, as volatility is the only input we are using to measure uncertainty.

Now, suppose that we add an extra piece of information that clearly impacts our risk model, but does not fit into a traditional risk management framework. Imagine there is a highly leveraged investor who holds a large position in Asset A. This investor has tight risk limits and will be forced out of the market for Asset A if the price drops below $97.50. In other words, after an initial $2.50 drop in the price of Asset A, our highly leveraged investor will have no choice but to hit the SELL button — and will need to liquidate the position in large blocks. By contrast, Asset B does not face the specter of liquidation risk, because it has a genuinely diversified pool of investors who are effectively unlevered.

So, which asset is riskier in our stylized example? Clearly Asset A is riskier, even though no price movements have occurred yet. Given a moderate, -2.5% down move, an investor who holds a long position in Asset A may be in some serious trouble.

Assuming that Asset A is normally distributed with 15% volatility, a down week of -2.5% or more would be expected to occur roughly 11.5% of the time. (Note that, for an asset with 15% annualized volatility, a -2.5% down move in 1 week is about 1.2 standard deviations below 0.) Something with an 11.5% probability of occurring would certainly not be classified as a rare event. However, once the large investor is forced to sell in size at that key price level, prices may drop even further because of the price impact of the selling. What started as a modest shock now has the potential to morph into something much larger. The quantity of Asset A on sale has increased dramatically, with no change to demand.

Other agents in the system will only be willing to absorb the excess inventory of Asset A at a large discount, if at all. We are now faced with a situation where a moderate random drop has pushed Asset A’s price into the danger zone. Suppose that the impact of the large investor’s sell order is -5%, which corresponds to a total move of -7.4% for Asset A. For an asset with 15% volatility, the stated odds of a -7.4% or larger 1 week decline are 0.02%. Positioning has transformed a garden variety sell-off into something that can easily qualify as an extreme event, if not a Black Swan.
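The probabilities quoted above can be reproduced with a few lines of Python, using the same normal-distribution assumption and a 15% annualized volatility.

```python
from statistics import NormalDist
from math import sqrt

annual_vol = 0.15
weekly_vol = annual_vol / sqrt(52)          # roughly 2.08% per week
N = NormalDist()

p_initial = N.cdf(-0.025 / weekly_vol)      # chance of a -2.5% (or worse) week
total_move = 0.975 * 0.95 - 1.0             # -2.5% shock followed by -5% impact, about -7.4%
p_total = N.cdf(total_move / weekly_vol)    # chance of -7.4% or worse under the naive model

print(f"weekly vol        : {weekly_vol:.2%}")
print(f"P(week <= -2.5%)  : {p_initial:.1%}")                 # ~11.5%
print(f"P(week <= {total_move:.1%})  : {p_total:.3%}")        # ~0.02%
```

The same 15% volatility assigns an ordinary, roughly one-in-nine probability to the trigger, and a vanishingly small one to the move that positioning actually produces.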

Now suppose we like both Assets A and B equally, for example in terms of their future cash flows. We might then expect both assets to have comparable returns over a given time horizon. Without taking positioning into account, it would be reasonable to allocate the same amount of capital to assets A and B. This would reflect their historical volatility, along with our return expectations for each asset. However, given our deeper understanding of the problem, we need to allocate less to Asset A, as it has significantly higher structural drawdown risk. In the language of classical portfolio theory, the “true” volatility of A is much higher than 15% — and added caution is required. The significant point here is that our leveraged investor has impacted the distribution of forward returns for A. The historical distribution needs to be modified in some way before we can use it to allocate capital responsibly.

Textbook Description of Risk

The stylized example above is instructive — but it only accounts for two assets held by a single agent in a highly simplified sample problem. In the real-world financial system, there are billions of financial agents transacting in the network with complex overlapping exposures between them. In order to adjust the standard risk measures in a meaningful way, we need a realistic model of positioning risk that can accommodate the complexity of the real-world network.

Can we generalize our example to these more pragmatic, real-world cases? Happily, we can apply some very helpful ideas from statistical physics and game theory to reduce the network’s complexity while retaining its most important features. We will describe the underlying methodology thoroughly in Chapter 2. In this section, we will simply attempt to define the scope of the problem.

It is helpful to divide risk models into two categories: first, models of price dynamics that do NOT make reference to specific agents within the financial network, and second, models of price dynamics that do account for agents in the network. The models from each category form two poles of theoretical difficulty and computational feasibility.

First, if we ignore the network of agents entirely, we get a simplified representation of market reality. These simplified models are the type that appears in introductory finance textbooks. Markowitz ( ) is usually given credit for developing Modern Portfolio Theory, which was further developed by Sharpe ( ) and a legion of other financial economists. The original Markowitz model equates risk with the variance of a distribution, while more sophisticated models allow for return distributions that are not normal and vary over time. Second, at the other extreme, models that account for the full specification of the financial network can be incredibly complex.

We can begin by examining models that do not account for the full specification of the network in determining price. In the simplest version of this model, asset returns are assumed to evolve according to a random walk with a 0% average return. Prices go up and down from one time step to the next without reference to the recent trend. The direction of future movements is entirely unpredictable. The only thing we can infer is the range of future outcomes, based on the volatility of a given asset in the past. In this category of model we do not need to understand anything at the granular agent-to-agent interaction level. Following the theory, the most important properties of a system are best observed after averaging over the very many trades that go through the market. This simplifies things from a practical calculation standpoint, as it is far easier to collect a bunch of historical prices than develop a mechanistic model of price action from first principles.

The random walk approach makes several stringent assumptions. Significantly, today’s returns do not depend on the pattern of historical prices or any information that was known to investors in the past. In other words, asset returns have no memory. Over time, paths are effectively created by a series of independent random draws, which are analogous to repeated coin flips or spins of the wheel in a game of roulette. In addition, the distribution of returns does not change over time. We can phrase this in another more revealing way: while future outcomes are uncertain, the rules of the game do not change over time. The only source of uncertainty is which return will be drawn from the range of possibilities at a given point in time. The likelihood of any given return within the range is constant.
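A minimal simulation makes these assumptions explicit: each step below is an independent draw from a fixed normal distribution, with no reference to the path so far. The volatility level is assumed purely for illustration.

```python
import random

# A zero-drift random walk: i.i.d. normal weekly returns with no memory.
random.seed(7)
weekly_vol = 0.15 / 52 ** 0.5               # assumed 15% annualized volatility
price = 100.0
path = [price]

for _ in range(52):                         # one simulated year
    ret = random.gauss(0.0, weekly_vol)     # each draw ignores the past entirely
    price *= 1.0 + ret
    path.append(price)

print(f"end-of-year price: {path[-1]:.2f}, range: {min(path):.2f} to {max(path):.2f}")
```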

These assumptions underpin the various incarnations of portfolio theory that appear in finance textbooks. Using some ideas from probability, it can be shown that the random walk hypothesis implies that returns are normally distributed over time. This has an important implication. Normal distributions have very narrow tails. In more technical terms, 99.7% of all returns fall within 3 standard deviations of the mean, as we can see in the following diagram.

Figure 1.5. Normal Distributions Assign Virtually 0 Probability to Moves Much Larger than 3 Standard Deviations Up or Down. Courtesy www.geyerinstructional.com.

Moves that are larger than four standard deviations essentially never occur. When outliers are as rare as this, we can conclude that the “tails” of a normal distribution are extremely thin. The possibility of high synchronization within the financial network, to the point where extreme events occur, is virtually negligible. It would virtually take a Martian landing to move prices far into the left or right tail of the distribution. In the textbook description, it also holds that no single player has the power to disrupt the distribution that the collective has created: only the activity of the collective is observable. All of this makes our stylized two-asset example above rather perplexing. Textbook theory is clearly violated when network effects dominate the system, since the simplified models cannot easily account for multiple standard deviation moves that occur without warning.
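The thinness of those tails is easy to quantify under the normality assumption:

```python
from statistics import NormalDist

N = NormalDist()
within_3sd = N.cdf(3) - N.cdf(-3)     # share of outcomes within 3 standard deviations
beyond_4sd = 2 * N.cdf(-4)            # two-sided probability of a move beyond 4 sigma

print(f"within 3 sigma : {within_3sd:.2%}")    # ~99.73%
print(f"beyond 4 sigma : {beyond_4sd:.4%}")    # ~0.0063%, essentially never
```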

This does not by any means imply that Modern Portfolio Theory is useless. In most regimes, asset returns can be approximated by something that resembles a normal distribution. Textbook theory offers a reasonable approximation of reality when nothing unusual is happening within the financial network. Framed in slightly more technical language, portfolio theory provides a good framework for understanding price dynamics when endogenous risks — or risks that arise from within financial networks, rather than from external news flow — are not warping the distribution of returns.

As we mentioned previously, a long line of improvements has been made to the original random walk model over the years. These changes to the theory attempt to account for what is actually observed in financial time series data. In later-generation econometric models, for example, volatility is allowed to change over time. Even more complex models allow the correlation across assets to vary, both as a function of time and as a function of how far the market has moved. However, none of these approaches deals with the risks that emerge from within the financial network directly, such as complex feedback loops and the propagation of credit through the system. Framed slightly differently, even the most complex second-generation econometric models ignore financial flows, the amount of leverage in the system, and where that leverage is allocated in the market.
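As one example of this later generation of models, the sketch below implements a GARCH(1,1)-style variance recursion, in which yesterday's squared return and yesterday's variance feed today's volatility estimate. The parameters are assumed rather than fitted to any data.

```python
# A minimal GARCH(1,1)-style variance recursion (parameters assumed, not fitted).
def garch_vol_path(returns, omega=1e-6, alpha=0.1, beta=0.85):
    """Return the model's next-period volatility after each observed return."""
    var = omega / (1.0 - alpha - beta)             # start at the long-run variance
    next_vols = []
    for r in returns:
        var = omega + alpha * r * r + beta * var   # today's shock feeds tomorrow's variance
        next_vols.append(var ** 0.5)
    return next_vols

# A calm stretch followed by one large shock raises, then slowly decays, the model's volatility.
sample_returns = [0.001, -0.002, 0.0015, -0.05, 0.002, -0.001]
for r, v in zip(sample_returns, garch_vol_path(sample_returns)):
    print(f"after return {r:+.3f}: next-period vol {v:.3%}")
```

Even this richer machinery only reacts to realized returns; it says nothing about who is positioned where, which is the gap the rest of this book addresses.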

Credit and positioning are the twin heralds of risk — and yet they rarely appear in portfolio theory textbooks at all.

A Parallel Universe

It is possible to imagine a financial universe with relatively low network risk. In the low network risk universe, prices would reflect information flows and investor sentiment without distortions or amplifications. In other words, bubbles and liquidations would be far less severe, and classical portfolio theory would offer a reasonably accurate reflection of markets there. Large price gyrations would generally only occur after a dramatic and unexpected shock to the broader economy.

In the alternate reality we have envisioned, money and credit would flow more slowly through the system, and counterparty exposure would not propagate far from the source. Major currencies might be “hard,” meaning backed by gold or another asset with limited supply, which would put a brake on the ability of governments to issue large quantities of debt while using their Central Banks to suppress financing costs. Nation states could no longer debase their currency to increase exports or artificially stimulate demand.

Moreover, banks would revert to their original mandate, focusing on making well-researched loans to local businesses and individuals. The loans themselves would be conservative, generally requiring significant collateral. The derivatives markets, which allow the transfer of undiversifiable risk to the global capital markets, might still exist — but the derivatives would be exchange-traded and restricted to futures and options with simple payout structures. Finally, there would be less commoditization of investment strategies, and copycat investing would play a smaller role. Pensions and insurance companies would feel less pressure to manufacture “carry” by selling options, taking excessive credit risk, or over-allocating to global equity markets. (In this context, “carry” refers to any investment strategy that generates a steady return only as long as the market remains stable.) The land grab for yield in our alternate universe would be contained. Financial markets, along with the global economy, would be largely de-centralized.

This alternate universe would also dramatically slow the capital formation process by restricting credit flow through the financial system. From the perspective of investors who use leverage to exploit small price inefficiencies, it would also be an exceedingly dreary place to trade. A hedge fund manager transported from our world to the low network risk universe would be tempted to take a nap or reschedule Happy Hour to the early afternoon.

Still, in this monochrome alternate reality, the risk of global financial contagion would be extremely low. We could ignore the possibility of significant endogenous risk almost entirely. Individual banks might go out of business, but they would rarely blow up. If a bank defaulted, it would be very unlikely to drag down other financial institutions into insolvency with it. Similarly, government budgets would be constrained by the need to hold physical gold, which would keep currencies relatively stable.

As we will discover in subsequent chapters, this alternate reality bears little resemblance to our own. Not only do exogenous events affect prices in real-world markets, but complex feedback loops within the financial system can also cause their effects to spread and intensify — like a forest fire through bone-dry grass and timber. In fact, network effects may play a larger role than pandemics, wars, and natural disasters in determining the distribution of asset price returns.

The Full Network Model

At the opposite end of the spectrum from the low network effect universe, a full-blown network model of financial markets can be extremely complex. The full network effect model relies on the idea that, within the network, every transaction and every agent matters. To repeat, any individual or entity that completes a financial transaction must be added as a node to the network model. Additionally, any transaction with a new counterparty creates a new network connection.

This broad analytic framework is capable of explaining or at least describing many phenomena observed in the markets that traditional models cannot. Leverage, overlapping exposures and counterparty risks clearly have played a significant role in the bubbles and crashes that have occurred throughout history. We also know that certain agents, such as Central Banks, have the power to change price outcomes — although those modified outcomes are never entirely predictable. Using a network model, we have the flexibility to generate virtually any form of price action observable in markets.

The network-based description of markets is obviously valid on a transaction-by-transaction basis. Therefore, an aggregation of all transactions must be correct by definition. Asset prices could never move if no transactions ever took place. We can restate things in a more literal-minded way. The market consists of a large number of agents, such as banks and institutional investors. These agents obviously play an important role in determining market price levels. For example, if we somehow removed banks from the network, most asset prices would be lower than they are today. There would be less credit available to create demand for financial assets. Network connections clearly also play a role. Agents with a high degree of connectivity to other agents within the financial network play a critical role in market stability. Banks and very active players in the derivatives markets (think Long Term Capital in 1998) fall into this category. These agents need to be monitored at some level, as they can cause considerable damage to the network if forced to unwind their positions. Network counterparty risks can be exposed when credit dries up, and the Global Financial Crisis in 2007 and 2008 is effectively a demonstration of the risks that can emerge from high levels of connectivity across a global financial network.

While conceptually correct, network models force us to consider a vital question: do they have any practical use in the real world? When we try to draw practical conclusions from a network model, several cracks begin to appear. A well-specified model might generate the right sort of complex phenomena — but it is generally not suitable for making predictions. (We are not referring to price prediction here, which can be a Herculean task; rather, the network models we know of do not generate concrete numerical risk estimates either.) A network model produces no associated volatility or drawdown forecasts. These models speak a different language from the one most risk managers use. There is no direct way to convert network risk to a number that allows for position sizing. For example, we cannot make assertions such as: “the amount of credit available to a certain subset of agents in the network has recently increased by 50%; therefore, the volatility of the assets they are exposed to should be decreased by 10%”. Nothing is stated in cold numerical terms — implying that quantitative investment decisions cannot be made on the basis of a network model.

As Haldane ( ) and Bookstaber ( ) suggest, complex network models are currently useful as simulation tools. This is still an important function. Economists can explore the range of outcomes that a policy change might produce without requiring an explicit risk number. Rough testing can be performed. For example, Central Bank policy levers can first be pulled in a simulated network, without immediately performing a Frankenstein-like experiment on the real economy. A well-specified model allows policymakers to build some intuition about possible outcomes of a given action — before turning on the real-world liquidity spigot.

Steering a Middle Course

These ideas all lead us to a critical point in our discussion, where we have to deal with the two crucial issues associated with building a network model. Specifically:

1. Can we build models that are complete enough to capture endogenous risk within the financial network?

2. Will these models be simple enough to generate stable quantitative risk estimates that can be compared with real world outcomes? Any model that relies upon full specification of the network is likely to be unstable, given the highly nonlinear nature of financial networks.

Our goal is to put financial network models on a firm practical footing.

As we begin to explore these issues, it is important to summarize the main challenges we face in our endeavor. First, within a complex network, we do not know how every agent is currently positioned, let alone their typical behavior patterns. This means that we cannot easily predict future positioning. The rules of the game are not known at a granular level. If we, the authors, do not know our own utility functions, it would seem unreasonable to specify anyone else’s.

As we will discover in Chapter 2, even a near-complete knowledge of these factors would not be enough. Building a complete model of the economy from first principles is computationally infeasible. There are simply too many agents, financial products and counterparty exposures to contend with. As we have mentioned above, a full-blown model is also likely to be unstable. With all the moving parts and the introduction of leverage through the banking system, small changes in model assumptions are likely to generate vastly different results. It is widely accepted that leverage has an asymmetric impact on the financial network. While a moderate increase in the amount of credit available does not tend to have too much impact on prices, an equivalent decline can be highly destabilizing.

However, there is a path through the noise — at least under certain conditions. Our solution is to use a concept from statistical physics called Mean Field Theory (“MFT”) to create a hybrid between classical portfolio theory and complete specification of the financial network. If we had to summarize the core idea behind this book on the back of a postcard, this would be it. We will develop this idea in great detail as we go along.

It is important to understand that we are not trying to make a small incremental change to textbook theory here, for example, by allowing volatility to vary over time within a standard model. Those improvements can be significant — but we are seeking a much larger conceptual reinterpretation of risk here. The Mean Field approach is developed in Chapter 2 and constitutes the main theoretical idea in this book. Note that our underlying idea is not original, but the practical applications of MFT will be.

Explaining the Core Idea

We can now explain how our MFT approach works at a conceptual level. We will save a more quantitative treatment of the subject for later chapters. For now, we will roughly outline how to develop a hybrid of standard portfolio theory and a complex network model. Our goal will be to capture the most important features of the financial network in specific cases, without adding any unnecessary complexity.

In what follows, we will restrict ourselves to liquid instruments, such as ETFs, listed futures and options. In most markets, most of the time, the historical distribution of returns is a decent proxy for the range of plausible outcomes in the future. This is true even if nothing very dramatic has happened in the recent past. Textbook portfolio theory is appropriate here. The historical distribution effectively characterizes the average behavior of all agents in the financial network in the past, as well as likely behavior in the future. No single investor has the power, or at least the incentive, to change the distribution materially. Prices usually do not have to move too far to find a nearly equal balance of many buyers and sellers. We wind up with a distribution that does not give much information about the likelihood of extreme outliers, but is useful for assigning probabilities to more moderate outcomes. (Note that there are many ways to estimate a historical return distribution and no universal agreement as to the best way. We also remark that we have not specified a time horizon for the distribution. Are we characterizing returns over 1 day, 1 week or 1 year forward horizons, say? We have glossed over these difficulties to make our narrative as clear as possible.)
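A crude sketch of what “estimating the historical distribution” means in practice is shown below. The price series is made up, and the sampling interval is left unspecified, echoing the caveats in the note above.

```python
# Estimating a historical return distribution from past prices
# (the price list is invented purely for illustration).
prices = [100, 101, 100.5, 102, 101.2, 103, 102.4, 104, 103.1, 105, 104.2, 106]
returns = [p1 / p0 - 1.0 for p0, p1 in zip(prices, prices[1:])]

def empirical_prob(threshold):
    """Fraction of past returns at or below the threshold."""
    return sum(r <= threshold for r in returns) / len(returns)

print(f"sample mean     : {sum(returns) / len(returns):+.2%}")
print(f"P(ret <= -0.5%) : {empirical_prob(-0.005):.0%}")   # odds of a moderate down move
```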

The historical distribution can be thought of as an estimate of the so-called “Mean Field” that every investor faces. It represents the average behavior of the vast financial network, as it relates to the price movement of a given asset. We will use the terms historical distribution and Mean Field somewhat interchangeably in what follows. Both represent some average over the actions of a large number of agents in the system. The difference is that a Mean Field can be more easily modified to account for the presence of very large players in the network. Clearly, some agents are many orders of magnitude larger than others. Banks, for example, have vastly larger balance sheets than a typical household. However, assuming that these large agents do not act differently from the way they have in the past, the Mean Field approximation is likely to suffice. Significantly, we do not need to have a detailed understanding of the players or their motivations to make practical investment decisions. Quoting Louis-Lions ( ), “things average out”. A macroscopic picture of risk is sufficient.

Suppose we know the Mean Field for a given asset. As we have said, it reduces to a distribution of returns. Then, we simply need to decide, based on our risk tolerance and other constraints, how much exposure to the distribution we want to have. Different assets naturally have different distributions. For example, a risk averse investor would allocate more to an asset with a narrow spread of returns, such as Treasury Bills, rather than something whose distribution is much wider, such as Natural Gas futures. Treasury Bills have far more certain future outcomes than Natural Gas futures.
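One simple sizing rule consistent with this idea is to scale exposure inversely to the width of each distribution. The volatilities below are illustrative.

```python
# Inverse-volatility sizing across two assets with very different distribution widths
# (volatility figures are assumptions for illustration).
vols = {"T-Bills": 0.005, "Natural Gas futures": 0.45}

inverse = {name: 1.0 / v for name, v in vols.items()}
total = sum(inverse.values())
weights = {name: x / total for name, x in inverse.items()}

for name, w in weights.items():
    print(f"{name:<20} weight {w:.1%}")
```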

This brings us to our main point of departure from standard risk models. The standard models fail when things do not average out. These situations cannot be discounted, as they often lead to outsized market moves. When certain agents grow to an abnormal size or act unusually aggressively at the margins, historical distributions are no longer indicative of future risk. A qualitative change has occurred, one that has the potential to affect future outcomes.

All agents might initially face the historical distribution. In addition, assuming that prices remain in a narrow range, the presence of the abnormally large agents may not be felt for a long time. During that stretch, the historical distribution may still seem to be valid. However, it is on very shaky ground. The market tremors are lurking below the surface, changing the tails of the distribution. Given a large enough price disturbance generated by the original (historical) Mean Field, things will change dramatically. As the dominant players are forced into the market, they have the capacity to push prices much further in the same direction. They serve to amplify random fluctuations beyond recognition. Two standard deviation moves according to the original distribution can easily turn into moves of four or more standard deviations, which would be considered highly improbable in the absence of network effects.
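A small Monte Carlo experiment, reusing the thresholds from the two-asset example earlier in the chapter, shows how a single forced seller reshapes the tails. The mechanics here are our own illustration rather than the formal model developed later in the book.

```python
import random
from statistics import pstdev

# A moderate shock triggers a forced seller whose price impact fattens the left tail
# (threshold, impact and volatility reuse the stylized two-asset example).
random.seed(1)
weekly_vol = 0.15 / 52 ** 0.5
baseline, amplified = [], []

for _ in range(100_000):
    shock = random.gauss(0.0, weekly_vol)        # draw from the historical Mean Field
    baseline.append(shock)
    if shock <= -0.025:                          # leveraged agent hits its stop...
        shock = (1 + shock) * (1 - 0.05) - 1     # ...and its selling adds roughly 5% of impact
    amplified.append(shock)

for name, sample in (("baseline", baseline), ("with forced seller", amplified)):
    tail = sum(r <= -0.07 for r in sample) / len(sample)
    print(f"{name:<19} vol={pstdev(sample):.2%}  worst week={min(sample):.1%}  P(<=-7%)={tail:.2%}")
```

Overall volatility barely moves, but the probability of a -7% week jumps by several orders of magnitude, which is the volatility paradox in miniature.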

Extreme price changes are a direct consequence of price impact. When the mega agents are forced to hedge or liquidate positions without regard to value, prices can accelerate at an alarming rate. Large unidirectional trades are the drivers of outsized moves. As we attempt to measure network risk, rather than simply describe it, we will need to develop new techniques for estimating price impact in Chapters 5 and 6. What we wind up with is a hybrid system that is computationally closer to textbook portfolio theory than a full network model, but clearly incorporates feedback within the network.

Econometricians might refer to the set up as a Mean Field Game with majority players. However, we will not concern ourselves with the strict accuracy of this statement for now. In practical terms, we have a feedback loop between the historical distribution and a few large players. If the distribution creates a path that forces the large players to act in size, their actions will change the distribution. It is as simple as that. Leverage and positioning risk were always there, but have now risen to the fore. Once the distribution changes to account for unexpectedly large price moves, another feedback loop is possible. Things can become even more extreme, as the same or other agents are forced to sell or cover their shorts again.

Contrast with Reactive Approaches

The Oscar Wilde quote at the beginning of the chapter now seems particularly relevant. Waiting for a volatility spike before reducing exposure can be dangerous. Some practitioners argue that modern risk systems are responsive enough to get you out of trouble reactively. Assuming this is correct, you can wait until the market is giving signals of distress before exiting. Prices are sampled at reasonably high frequency, numbers are crunched rapidly and the models flash RED when there is a potential break in the market. We accept that there is some merit to this approach. It is perfectly reasonable to use as much price data as possible and scale in and out of positions using automated execution strategies.

However, if relied upon too much, the faster systems can also lead to overly aggressive positioning. This is especially true in a Zombified market. Suppose that the market has been in “risk on” mode for a while, with rising equity and corporate bond prices and low volatility. Intraday ranges are low and cross-asset correlations stable. The faster systems would actually encourage larger positions than usual, based on low levels of short-term realized risk. They are ill-equipped to deal with selloffs that emerge very rapidly, such as the US equity flash crash in May 2010. Algorithmic trading has increased the speed of crashes in various markets beyond the capability of most dynamic allocation schemes. It has also increased the number of false crash signals, where a given contract recovers almost as quickly as it fell.
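The sketch below shows why: a generic volatility-targeting rule of the kind described here sizes positions as target risk divided by trailing realized volatility, so leverage mechanically rises as measured risk falls. The return samples and the risk target are invented for illustration.

```python
from math import sqrt

# A generic volatility-targeting position sizer (illustrative numbers only).
def position_size(recent_returns, target_annual_vol=0.10, periods_per_year=252):
    mean = sum(recent_returns) / len(recent_returns)
    var = sum((r - mean) ** 2 for r in recent_returns) / len(recent_returns)
    realized = sqrt(var * periods_per_year)
    return target_annual_vol / max(realized, 1e-6)   # leverage rises as realized vol falls

calm   = [0.001, -0.0008, 0.0012, -0.0005, 0.0009] * 4   # a quiet, "risk-on" stretch
choppy = [0.01, -0.012, 0.015, -0.009, 0.011] * 4

print(f"calm market leverage  : {position_size(calm):.1f}x")
print(f"choppy market leverage: {position_size(choppy):.1f}x")
```

In the calm regime the rule recommends several times more leverage, precisely when the structural indicators in this book would counsel caution.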

An Emphasis on Practicality

We must admit that this book does deal with several theoretical concepts. In particular, we borrow some ideas from statistical physics as a way to reduce the complexity of the financial network in Chapter 2. In Chapter 7, we veer in the direction of Modern Monetary Theory when we describe how liabilities mechanistically transform into bonds and cash. However, we will try to restrict ourselves to ideas that can be transformed into actionable risk numbers. “Airy-fairy” theorizing can be intellectually appealing, but it is insufficient when it comes to protecting capital. In our opinion, every good theory needs some concrete examples to provide meaning and motivation. At some point, you actually have to calculate things. Theories without meaningful special cases tend to wither on the vine.

Our overriding idea is to identify dangerous market “set ups”, or traps that have a larger-than-normal probability of leading to serious losses. As discussed above, the twin heralds of credit and positioning are in play. These situations carry endogenous risk, intrinsic to the financial network. Investors are confident, possibly even complacent. Volatility is low, signaling to the marketplace that there are no significant risks looming on the horizon. However, based on our framework, the market is structurally weak. Things look extremely dodgy from a credit and positioning perspective. In the pages that follow, we will quantify the meaning of contracting credit and over-extended markets. We will put some substance behind the “things look dodgy” phrase. It should then be possible to reduce exposure in advance of a crisis or, alternatively, to hedge cheaply before the horse bolts from the stable.

We will develop various strategies to infer who the large players are, how large they might be, and how much leverage they might be applying. These inferences serve as de facto case studies that demonstrate the effectiveness of our approach. It turns out that there are many situations where we can say something meaningful about positioning.

Buying Options in Fragile Markets

While this is not a book on trading strategies, it addresses the following practical question: how can we play a long-in-the-tooth equity bull market or an uncertain currency peg from a trading standpoint? A reasonable strategy is to stockpile insurance in markets that have low realized volatility, yet are structurally fragile. Options are the ultimate "bubble fighters", as they allow investors to profit from severe corrections with bounded risk. As we have mentioned above, taking on the market directly with contrarian futures trades can be treacherous. Assuming that no important releases (such as a rate decision) are looming on the horizon, assets with low realized volatility will also tend to have low implied volatility. Implied and realized volatility are strongly connected.

Ordinarily, we would not expect the spread between implied and realized volatility to be very large. If implied volatility were much higher than realized, it would be easy to sell options, delta-hedge with the underlying asset, and extract a likely profit. It follows that options on assets with low realized volatility will generally be cheap, even when our structural risk indicators start to flash RED.
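As a back-of-the-envelope illustration of why low volatility translates into cheap convexity, the sketch below prices a three-month, 10% out-of-the-money put under standard Black-Scholes at two implied volatility levels. The strike, maturity, and volatility inputs are our own illustrative assumptions, not figures from the book.

```
from math import log, sqrt, exp, erf

# Back-of-the-envelope Black-Scholes pricing of a 10% out-of-the-money,
# three-month put at two implied volatility levels. Strike, maturity and
# vol inputs are illustrative assumptions, not figures from the book.

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot, strike, vol, t, r=0.0):
    """Black-Scholes price of a European put option."""
    d1 = (log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-r * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

spot, strike, t = 100.0, 90.0, 0.25

for implied_vol in (0.10, 0.30):
    premium = bs_put(spot, strike, implied_vol, t)
    print(f"implied vol {implied_vol:.0%}: put premium ~{premium:.2f} "
          f"({premium / spot:.2%} of spot)")
```

At 10% implied volatility the put costs a few basis points of spot; at 30% it costs roughly two percent. The same protection is dramatically cheaper in the low volatility regime.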

We emphasize that buying options on a stock before an earnings report or on a currency before a Brexit-type vote is markedly different from what we will discuss here. In that case, market makers ratchet up their options quotes, reflecting very high implied volatility, before an event that is known to have a binary outcome. This eats away at the theoretical advantage of a long options trade in advance of a major event on a known date.

This does not mean that we are guaranteed to make money on a long options position that is theoretically underpriced. Our edge, so to speak, is statistical: rather than buying a sure thing, we are putting our money into something that has a very attractive asymmetric payout. We can get more leverage (as a function of dollars invested) from a put or call when volatility is low. When an asset class has low volatility but high positioning risk, the odds skew even further in our favor. The "true" probability of making a large gain from a fixed cost trade is significantly higher than the market thinks. This idea is in the spirit of some of the great macro hedge fund investors, who bury long options positions in their portfolios, both as a hedge and as a source of episodic or "crisis" alpha. Once we buy an option, we have a "convex" trade in place. While the probability of winning before maturity might be somewhat uncertain, the payout if we do win will be vastly higher than the premium we have paid for the option. This qualifies as a trading edge. In the late stages of an extended bull market, options structures that would usually be considered hedges transform into potential alpha generators.
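The asymmetry is easy to see with made-up numbers: the short sketch below compares the fixed premium paid for an out-of-the-money put with its payoff across a few hypothetical expiry scenarios. Every figure is assumed purely for illustration.

```
# Toy payoff table for a long put bought for a small fixed premium. The
# premium and expiry scenarios are made-up numbers, purely for illustration.

premium = 0.20   # assumed cost of a 10% out-of-the-money put, in index points
strike = 90.0

scenarios = {"no crash": 105.0, "mild selloff": 85.0, "crash": 70.0}

for name, spot_at_expiry in scenarios.items():
    payoff = max(strike - spot_at_expiry, 0.0)
    pnl = payoff - premium
    print(f"{name:>12}: payoff {payoff:5.1f}, net P&L {pnl:6.1f}, "
          f"{payoff / premium:5.1f}x the premium at risk")
```

Most of the time the position loses the small premium; in the crash scenario it returns a large multiple of it. That is the convexity we are paying for.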

Admittedly, it might take a while for structural risk to transform itself into realized volatility. That is the downside of measuring subterranean risk: the risk of being early. We may have correctly characterized the current regime as "calm but unsafe", which is a better approach than buying options whenever volatility is cheap, without regard to market internals. Although we have a theoretical edge, our options structures might lose money for a while. However, when they do pay out, our trading gains are likely to more than offset the earlier losses. Why? We bought options when risk was underpriced by the market.

Structure of the Book

This book takes the following narrative arc. Chapter 2 presents a basic framework for the more detailed chapters that follow. Here, we give an informal overview of Mean Field Theory and specify where it offers a reasonable description of market reality. When the Mean Field description is adequate, textbook portfolio theory applies. However, when certain agents in the network become very large, they have the power to distort the historical distribution of returns. This can create bubbles and crashes over a range of time horizons. In practical terms, we develop a simple algorithm that allows us to adjust forward risk estimates in the presence of a dominant player. Chapter 3 focuses on options market makers, who can have a disproportionately large impact on prices over multi-day horizons. While they might not have large balance sheets, they are on one side of a high percentage of the orders that go through the market. Under certain conditions, market makers can cause large dislocations in the underlying equity and futures markets without much warning. This is a function of hedging, as they seek to reduce directional risk in their trading books. We will perform some statistical tests that show the impact of market maker positioning on S&P 500 volatility and extreme event risk.

In Chapter 4, we shift our attention to Exchange-Traded Products ("ETPs"). These include listed funds and notes. As we will find, certain ETPs can dominate the markets they are supposed to track. Their impact is measurable and, given the right setup, increases extreme event risk. This will act as background material for the case studies in Chapters 5 and 6. Here, we put our knowledge of ETPs to work. Chapter 5 focuses on ETPs that track various VIX futures strategies. We will show how inverse VIX products caused an explosion in volatility during February 2018 that could not be predicted by standard risk models. Our revised estimate, taking ETP positioning into account, demonstrates that the "Volmageddon" was not a Black Swan-type event. Indeed, a volatility spike of this magnitude was likely to occur within the next year or so, based on the forced reaction of ETPs to an initial move. It could have been reasonably predicted using a simplified agent-based model. In Chapter 6, we turn our attention to listed products that track corporate bond indices. Here, the problem is not leverage, but a mismatch in liquidity between ETPs and the cash bonds underpinning the benchmark indices. We study the consequences of the sort of technical sell-off in high-yield bond ETPs that occurs all the time in equity markets. Our risk estimates point to an unusually large drop in the junk bond market and the potential failure of the ETPs that track it. This turned out to be a very accurate characterization of the events of March 2020, when the Fed had to buy various bond ETPs directly to prevent the underlying markets from crashing. These studies place agent-based risk models on a solid and practical footing.

More broadly, we will find that investors who rely on exchange-traded products to express their views may be in for some nasty surprises. Most passive investments, such as ETFs, can add to extreme event risk in the indices they supposedly track. We will give some concrete examples of ETPs that are designed in such a way that they magnify risk in the market ecosystem. We will also challenge the widely held belief that ETPs give nearly guaranteed exposure to a benchmark of choice.

Chapter 7 focuses on banks and Central Banks ("CBs") as mega agents in the market. This should be of obvious interest in the post-Lehman market environment. These agents have been large for quite some time. More recently, as the scope and influence of commercial banks have decreased, CBs have become increasingly dominant. We will find that, when CB balance sheet expansion is strongly above trend, credit spreads tend to decline. In addition, an increase in domestic debt can support high equity valuations for a surprisingly long time. For investors and borrowers, CBs tend to improve median outcomes. However, this does not imply that rounds of quantitative easing will reduce extreme event risk in the future. Finally, we summarize the main ideas in the book and give some practical takeaways in Chapter 8.
