Wednesday, June 29, 2011

Why we GIVE each big bank $12 billion per year

I've been reading quite a lot about the various political battles over efforts to regulate the banking industry, derivatives in particular. This afternoon, quite by coincidence, I noticed on a colleague's disordered desk a copy of a speech given last year by Andrew Haldane, Executive Director for Financial Stability of the Bank of England. Its message should be a matter of urgent public debate in the US and elsewhere. Yet I haven't heard Haldane's arguments mentioned anywhere in the US media.

Banks, he argues, are polluters, and they need to be controlled as such. Let me run through several of his points in a little more detail. First, think of the car industry, where producers and users together pollute. Haldane:
"Exhaust fumes are a noxious by-product. Motoring benefits those producing and consuming car travel services – the private benefits of motoring. But it also endangers innocent bystanders within the wider community – the social costs of exhaust pollution."
He then goes on to make the point that we now face much the same situation with bankers, in part due to the proliferation of instruments which have made it possible for banks to spread risks far and wide and make profits even as they amplify system-wide risks. The banking industry is also a polluter:
"Systemic risk is a noxious by-product. Banking benefits those producing and consuming financial services – the private benefits for bank employees, depositors, borrowers and investors. But it also risks endangering innocent bystanders within the wider economy – the social costs to the general public from banking crises."
What is to be done about polluters? Back in the 1970s, in the face of rising air pollution, policy makers, guided by economists, worked out ways to solve the problem through regulations and, where necessary, outright prohibitions of some practices. That's where we are now with the banks. The regulations under consideration recognize the social costs of systemic risk and sensibly try to find redress. The bankers' response has of course been to whine and scream.

The first practical step in addressing a pollution problem is to estimate how much pollution there is. How much do we pay to guarantee bank solvency during these not-too-infrequent systemic crises? Haldane points out that if you count only the monetary amount transferred to banks by governments, the cost of the recent crisis is fairly large -- about $100 billion in the US. But this is probably an extreme lower bound to the true collective cost. As Haldane remarks,
"...these direct fiscal costs are almost certainly an underestimate of the damage to the wider economy which has resulted from the crisis – the true social costs of crisis. World output in 2009 is expected to have been around 6.5% lower than its counterfactual path in the absence of crisis. In the UK, the equivalent output loss is around 10%. In money terms, that translates into output losses of $4 trillion [for the US] and £140 billion [for the UK] respectively."
Now, I'm not one to take GDP loss arguments too seriously. It's not a very good measure of human well being, or even of the narrow economic well being of a nation (in particular, it tells us nothing at all about how much of the store of natural resources may have been eaten up in achieving that GDP). However, the scale alone in this case suggests that the cost of the crisis -- and other crises, which seem to strike more and more frequently with modern financial engineering and its global reach -- are immensely high.

But here is where Haldane's view becomes REALLY interesting -- and goes beyond anything you're likely to see in the popular media (especially in the US).

Lots of people borrowed recklessly in the run-up to the crisis; it wasn't only the fault of the banks lending recklessly. So, how much of the systemic costs associated with financial crises really is due to the banks? One place to look, Haldane suggests, is the ratings given to banks by the credit rating agencies, many of which take explicit note of the fact that governments give a subsidy to banks in the form of implicit or explicit safeguards to their stability. Some agencies give banks two different ratings, one "with government support" and one "without." Haldane goes through a simple calculation based on these ratings, and estimates that the effective subsidy to many big banks is on the scale of billions of dollars per year. The data suggest it has been increasing for the past 50 years. The "too big to fail" problem shows up in the data as well -- the average ratings difference for large banks is much bigger than it is for small banks. As Haldane sums up:
"For the sample of global banks, the average annual subsidy for the top five banks was just less than $60 billion per year...  the large banks account for over 90% of the total implied subsidy. On these metrics, the too-big-to-fail problem results in a real and on-going cost to the taxpayer and a real and on-going windfall for the banks."
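
Just to make the arithmetic concrete, here's a rough sketch of the kind of calculation Haldane describes -- my own reconstruction, with invented banks and numbers, not his actual method. The idea: translate the ratings gap between "supported" and "standalone" ratings into a funding-cost advantage, and multiply by each bank's rating-sensitive liabilities.

```python
# Sketch of a ratings-based subsidy estimate, in the spirit of Haldane's
# calculation. All banks and numbers below are invented for illustration.

# Funding-cost advantage per ratings notch, in basis points. The true
# mapping from ratings to bond spreads varies over time; 15bp/notch is
# just a placeholder assumption.
BPS_PER_NOTCH = 15

banks = [
    # (name, rating-sensitive liabilities in $bn, standalone rating
    #  notches below the "with government support" rating)
    ("BigBank A", 1500, 3),
    ("BigBank B", 1200, 4),
    ("SmallBank C", 40, 1),
]

total = 0.0
for name, liabilities_bn, notch_gap in banks:
    # Annual subsidy = liabilities x (spread advantage from the uplift)
    subsidy_bn = liabilities_bn * notch_gap * BPS_PER_NOTCH / 10_000
    total += subsidy_bn
    print(f"{name}: implied subsidy ~ ${subsidy_bn:.1f}bn/year")

print(f"Total implied subsidy: ~ ${total:.1f}bn/year")
```
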
Haldane's speech -- as far as I'm aware -- is unique among those of high-level banking figures in taking seriously the public costs entailed by banking practices. Imagine where the US might be today if we'd taken that $60 billion per year and, over the past decade, invested it in infrastructure, education and scientific research.

In the rest of his speech, Haldane explores what might be done to end or control this pollution, through taxes or by making some banking practices illegal. This too is illuminating, although what emerges is the view that finding a solution itself isn't so hard. What is hard is gathering the political will to take steps in the face of banking opposition. Publicizing how much they pollute, and how much we pay to let them do it, is perhaps a first step.

Friday, June 24, 2011

PIMCO questions US financial focus

It's quite something when the managing director of PIMCO comes out and announces that the US has gone too far in seeking "wealth creation via financial assets," rather than boring things like science and manufacturing. Sean Paul Kelley at The Agonist points to this, which is worth a read.

Thursday, June 23, 2011

(Corrupt) Lawmakers challenge derivatives rules

I'm not the least bit surprised by this story detailing the push-back by well-funded (by whom, do you think?) US senators and representatives against proposed rules to rein in derivatives. Here's some choice material:
The lawmakers, Republicans and Democrats alike, argue that some proposed rules could force Wall Street’s derivatives business overseas. They also say that regulators are ignoring a crucial exemption to the rules spelled out in the Dodd-Frank financial regulatory law.

The law excused airlines, oil companies and other nonfinancial firms known as end-users from new restrictions, including a rule that derivatives must be cleared and traded on regulated exchanges. The firms use derivatives to hedge against unforeseen market changes, say a rise in fuel costs or interest rates, rather than to speculate.

“We are concerned that recent rule proposals may undermine these exemptions, substantially increasing the cost of hedging for end-users, and needlessly tying up capital that would otherwise be used to create jobs and grow the economy,” Senator Debbie Stabenow, Democrat of Michigan and chairwoman of the Senate Agriculture Committee, and Representative Frank D. Lucas, her Republican counterpart in the House, said in a letter this week to regulators.
I particularly like the phrase "airlines, oil companies and other nonfinancial firms known as end-users" identifying those exempted from the new restrictions. Does anyone doubt that lawyers and accountants at Goldman Sachs, JP Morgan and virtually every other big bank are working overtime right now deciding how they can turn the bank, or some subsidiary in the Cayman Islands, into an airline, oil company or other nonfinancial firm? I would bet they're just looking for a new pathway through which to route all their high-risk stuff -- and they've counseled the lawmakers on how they can best carve out some useful routes.

The other thing that is simply precious is this (the likes of which we've heard many times already, of course):
The lawmakers, Republicans and Democrats alike, argue that some proposed rules could force Wall Street’s derivatives business overseas.
The proper response isn't to worry, but to shout GOOD, PLEASE HURRY UP! Let them take their financial engineering business overseas and blow up someone else's economy.

The ticking CDS time-bomb

The looming mess in Europe, linked to financial distress in Greece, looks like a perfect if rather frightening illustration of the malign consequences of over-dense banking interdependence for global financial stability. In this case -- as with the crisis of 2007-2008 -- the root cause of the trouble is credit default swaps (CDSs) and other derivatives. No one in Europe is quite sure how many reckless gambles banks have made on Greek debt and potential default, and the European Central Bank appears to be deathly afraid that such gambles have the potential to bring down the entire financial house of cards.

What's happened to the CDS market over the past decade? It's exploded. Amazingly, the value of outstanding CDSs linked to debt in Greece, Italy, Spain and Portugal has doubled in the past three years -- since 2008! The New York Times today discusses what can only be described as a ridiculous situation -- Europe pushed to the brink of financial disaster by the actions of a small number of people gambling with other people's money in the dark, and doing so in the direct aftermath of the greatest financial crisis since the Great Depression:
The uncertainty, financial analysts say, has led European officials to push for a “voluntary” Greek bond financing solution that may sidestep a default, rather than the forced deals of other eras. “There’s not any clarity here because people don’t know,” said Christopher Whalen, editor of The Institutional Risk Analyst. “This is why the Europeans came up with this ridiculous deal, because they don’t know what’s out there. They are afraid of a default. The industry is still refusing to provide the disclosure needed to understand this. They’re holding us hostage. The Street doesn’t want you to see what they’ve written.”
Wonderful. We've known about this danger for at least several years and have done nothing about it. In fact, we've known about such dangers for far longer, and have only taken steps to make our problems worse. A couple of years ago CBS aired an examination of the CDS market, and it is still worth watching. One interesting comment from the program:
It would have been illegal [selling CDSs of any kind] during most of the 20th century under the gaming laws, but in 2000, Congress gave Wall Street an exemption and it has turned out to be a very bad idea.
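
Part of the reason "they don't know what's out there" is that the headline gross notional figures tell you very little about who actually bears the risk. Here's a toy illustration, with invented positions, of how bilateral netting makes the gross numbers misleading -- and why netting itself is only as good as the weakest counterparty in the chain:

```python
from collections import defaultdict

# Invented CDS positions on a single reference entity (say, Greek debt).
# Each tuple: (protection seller, protection buyer, notional in $bn).
trades = [
    ("BankA", "BankB", 10),
    ("BankB", "BankC", 9),
    ("BankC", "BankA", 8),
    ("HedgeFundD", "BankA", 5),
]

gross = sum(n for _, _, n in trades)

# Net position of each firm: protection sold minus protection bought.
net = defaultdict(float)
for seller, buyer, notional in trades:
    net[seller] += notional   # on the hook if the reference entity defaults
    net[buyer] -= notional    # collects if it defaults

print(f"Gross notional: ${gross}bn")
for firm, pos in net.items():
    print(f"{firm}: net protection sold = ${pos}bn")

# The catch: netting only works if every counterparty in the chain pays
# up. If BankB fails, BankC's 'hedged' position suddenly isn't hedged.
```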

Wednesday, June 22, 2011

Dirty little derivatives secrets...

The first dirty little secret of the derivatives industry -- probably not so secret to those in the financial industry, but unknown to most others, who still think financial markets are in some approximation fair and efficient -- is that some of the big banks control the market and expressly inhibit competition to protect their profits. I just stumbled across this still highly relevant exposition by the New York Times of efforts to place derivatives trading within properly defined clearinghouses, and the banks' countervailing efforts to gain control over those clearinghouses so as to block competition.

The banks (invoking some questionable claims of economic theory) like to argue that derivatives make markets more efficient because they make them more "complete." As Eugene Fama puts it: "Theoretically, derivatives increase the range of bets people can make, and this should help to wipe out potential inefficiencies." Available information, the idea goes, should flow more readily into the market. But the truth seems to be that derivatives make banks more profitable at everyone's collective expense, and not only because they make markets more unstable (see more on this below). From the New York Times article:
Two years ago, Kenneth C. Griffin, owner of the giant hedge fund Citadel Group, which is based in Chicago, proposed open pricing for commonly traded derivatives, by quoting their prices electronically. Citadel oversees $11 billion in assets, so saving even a few percentage points in costs on each trade could add up to tens or even hundreds of millions of dollars a year.

But Mr. Griffin’s proposal for an electronic exchange quickly ran into opposition, and what happened is a window into how banks have fiercely fought competition and open pricing.

To get a transparent exchange going, Citadel offered the use of its technological prowess for a joint venture with the Chicago Mercantile Exchange, which is best-known as a trading outpost for contracts on commodities like coffee and cotton. The goal was to set up a clearinghouse as well as an electronic trading system that would display prices for credit default swaps.

Big banks that handle most derivatives trades, including Citadel’s, didn’t like Citadel’s idea. Electronic trading might connect customers directly with each other, cutting out the banks as middlemen.

The article goes on to describe a host of maneuvers that Goldman Sachs, JP Morgan and other big banks used to block this idea, or at least to make sure they'd be locked into the gears of any such electronic exchange. Eventually the whole idea fell apart, much to the banks' relief. Guess who's paying the price?
Mr. Griffin said last week that customers have so far paid the price for not yet having electronic trading. He puts the toll, by a rough estimate, in the tens of billions of dollars, saying that electronic trading would remove much of this “economic rent the dealers enjoy from a market that is so opaque.”

"It’s a stunning amount of money,” Mr. Griffin said. “The key players today in the derivatives market are very apprehensive about whether or not they will be winners or losers as we move towards more transparent, fairer markets, and since they’re not sure if they’ll be winners or losers, their basic instinct is to resist change.”
But there's another dirty little secret about the derivatives industry, and this goes back to the question of whether these instruments really do have benefits, by making markets more efficient, perhaps, or whether instead they might make them more unstable and prone to collapse. Warren Buffett was certainly clear in his opinion, expressed in his letter (excerpts here) to Berkshire Hathaway shareholders back in 2002: "I view derivatives as time bombs, both for the parties that deal in them and the economic system." But the disconcerting truth about derivatives emerges in more certain terms from new, fundamental analyses of precisely how they can stir up natural market instabilities.

I'm thinking primarily of two bits of research -- one very recent and the other a few years old -- both of which should be known by anyone interested in the impact derivatives have on markets. Derivatives can obviously let people hedge risks -- locking in affordable fuel for the winter months in advance, for example. But they're used for risk taking as much as hedging, and can easily create collective market instability. These two studies show -- from within the framework of economic theory itself -- that adding derivatives to markets in pursuit of the nirvana of market completeness should indeed make those markets less stable, not more.

I'm currently working on a post (it's taking a little time) that will explore these works in more detail. I hope to get it up very shortly. Meanwhile, these two examples of science on the topic might be something to keep in mind as the banks try hard to confuse the issue and obscure what ought to be the real aim of financial reform -- to return the markets to their proper role as semi-stable systems providing funds for creative and valuable enterprise. Markets should be a public good, not a rigged casino benefiting the few and guaranteed by the public.

Sunday, June 19, 2011

The Grand Inquisitors of Rational Expectations

In this short snippet (two minutes and 23 seconds) of an interview, John Kay sums up quite succinctly the situation facing Rational Expectations theorists in the light of what has happened in the past several years. Reality just isn't respecting their (allegedly) beautiful mathematical theories.

In Bertolt Brecht's play The Life of Galileo, Kay notes, there's a moment when the Grand Inquisitors of the Church refuse to look through Galileo's telescope. Why? Because the Catholic Church had essentially deduced the motion of the planets from a set of axioms. They refused to look, as Kay puts it,
...on the grounds that the Church has decreed that what he sees cannot be there. This makes me think of the way some of the economists who believe in Rational Expectations have reacted to events of the past few years. [They're like the inquisitors with Galileo]. ...they refuse to look through the telescope because they know on a priori grounds that what he saw wasn't actually there.

Saturday, June 18, 2011

Millisecond mayhem

The terrifying Flash Crash of 6 May 2010 has long since dropped out of the news. The news cycle more or less ended with the release of the SEC's final report on the event in October of last year, which concluded that... well... the event was kicked off by a big trade in E-Mini futures by Waddell & Reed and played out in two subsequent liquidity crises, exacerbated -- and crammed into a very short time-scale -- by high-frequency traders. In essence, the report concluded that A happened, then B happened, which caused C to happen, etc., and we had the Flash Crash. What it didn't explore is WHY this kind of thing was possible, WHY the markets as currently configured should be prone to such instabilities, or WHY we should have any confidence similar things won't happen again.

I'm not sure what triggered my interest, but I had a quick look today to see if any similar events have taken place more recently. Back in November of last year the New York Times reported on about a dozen episodes it called "mini flash crashes" in which individual stocks plunged in value over a few seconds, before then recovering. In one episode, for example, stock for Progress Energy -- a company with 11,000 employees -- dropped 90% in a few seconds.  These mini whirlwinds are continuing to strike fear into the market today.

For example, this page at Nanex (a company that runs and tracks a whole-market datafeed) lists a number of particularly volatile events over recent months, events in which single stocks lost 5%, 17%, even 95% of their value over a second or five seconds before recovering. According to Nanex, events of this kind are now simply endemic to the market -- the 6 May 2010 event simply seems to be a larger version of similar events taking place all the time:
The most recent data available are for the first month and three days of 2011. In that period, stocks showed perplexing moves in 139 cases, rising or falling about 1% or more in less than a second, only to recover, says Nanex. There were 1,818 such occurrences in 2010 and 2,715 in 2009, Nanex says.
A few specific examples, as reported in this USA Today article:
•Jazz Pharmaceuticals' stock opened at $33.59 on April 27, fell to $23.50 for an instant, then recovered to close at $32.93. "There was no circuit break," says Joe Saluzzi, trader at Themis Trading, because Jazz did not qualify for rules the exchanges put in place after the flash crash for select stocks following extreme moves.

•RLJ Lodging Trust was an initial public offering on May 11. It opened at $17.25 its first day, then a number of trades at $0.0001 took place in less than a second before the stock recovered. The trades were later canceled, but it's an example of exactly what is not supposed to happen anymore, Hunsader says.

•Enstar, an insurer, fell from roughly $100 a share to $0 a share, then back to $100 in just a few seconds on May 13.

•Ten exchange traded funds offered by FocusShares short-circuited on March 31. One, the Focus Morningstar Health Care Index, opened at $25.32, fell to 6 cents, then recovered, says Richard Keary of Global ETF Advisors. The trades were canceled. "No one knows how frequently this is happening," he says.

•Health care firms Pfizer and Abbott Labs experienced the opposite of a flash crash on May 2 in after-hours trading. Abbott shares jumped from $50 to more than $250, and Pfizer shot from $27.60 to $88.71, both in less than a second, Nanex says. The trades were canceled.
Apparently, according to the Financial Times, something similar happened just over a week ago, on 9 June, in natural gas futures.
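
Spotting such events in tick data is, mechanically at least, straightforward. Here's a minimal sketch of a filter along the lines of the Nanex criterion quoted above -- a move of roughly 1% or more within a second, followed by recovery. The thresholds and the data format are my own assumptions:

```python
def find_mini_crashes(ticks, move=0.01, window=1.0, recovery=0.005):
    """Scan time-sorted (timestamp_seconds, price) ticks for
    flash-crash-like events: a fractional move >= `move` within
    `window` seconds that later retraces to within `recovery` of the
    starting price. Overlapping detections of the same event would
    need deduplicating in practice."""
    events = []
    for i, (t0, p0) in enumerate(ticks):
        for t1, p1 in ticks[i + 1:]:
            if t1 - t0 > window:
                break
            if abs(p1 - p0) / p0 >= move:
                # Did the price recover afterwards?
                if any(abs(p2 - p0) / p0 <= recovery
                       for t2, p2 in ticks if t2 > t1):
                    events.append((t0, p0, t1, p1))
                break
    return events

# Toy data: a 6% drop and recovery, all within about a second.
ticks = [(0.0, 100.0), (0.4, 99.9), (0.7, 94.0), (1.5, 99.8), (2.0, 100.1)]
print(find_mini_crashes(ticks))
```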

I haven't seen anyone explain these events in a clear and natural way. I still see a lot of hand waving and vague talk about computer errors and fat fingers. But it seems unlikely these tiny explosions in the market are all driven by accidents. Much more likely, it seems to me, is that these events are somehow akin to the dust devils you see when driving through a desert -- completely natural if rather violent little storms whipped up by ordinary processes. The question is: what are those processes? Also -- how dangerous are they?

The best hint at an explanation I've seen comes from this analysis by Michael Kearns of the University of Pennsylvania and colleagues. Their idea was to study the dynamics of the limit order mechanism which lies at the mechanical center of today's equity markets, and to see if it is perhaps prone to natural instabilities -- positive feedbacks that would make whirlwind-like movements in prices likely to take place quite frequently. In other words, are markets prone to the Butterfly Effect? Their abstract gives a pretty clear description of their study and results:
We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to "Butterfly Effects" -- the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of "absolute" traders (who determine their prices independent of the current order book state) or "relative" traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.
The "absolute" traders in this setting act more like fundamentalists who look to external information to make their trades, rather than the current state of the market. The "relative" traders are more akin, at least in spirit, to momentum traders -- they're responding to what just happened in the market a split second ago and changing their strategies on the fly. Without any question, there are indeed many high-frequency traders who are "relative" traders -- probably most. So mini-flash crashes -- perhaps -- are merely a sign of natural instability and chaos in the micro dynamics of the market.

Friday, June 17, 2011

Real steps on banking reform?

I don't want to get too wildly optimistic. When it comes to banking regulation, disappointment always lies just around the corner and everything important happens behind the scenes. But a couple of things today give me some cautious hope that regulators interpreting the new Basel III banking rules may actually take some real steps to curb systemic risks -- they may even take the structure of banking network interactions into account.

First, Simon Johnson gives an excellent summary of recent developments in the US where banking lobbyists seem to have been caught flat-footed by recent steps taken by Federal Reserve governor Dan Tarullo. I hope this isn't just wishful thinking.

The banks are apparently pushing four key arguments to explain why it's a really horrific idea to make the banking system more stable and why, especially, the world will probably end quite soon in a spectacular fiery cataclysm if the biggest and most well-connected banks are required to hold an additional few percentage points of capital. Johnson dissects these arguments quite effectively and suggests, encouragingly, that the regulators at the Fed charged with interpreting Basel III aren't convinced either.

Elsewhere, this Bloomberg article suggests -- and this really surprises me -- that the measures under consideration would...
... subject banks to a sliding scale depending on their size and links to other lenders.
Now this is an interesting development. Someone somewhere seems to be paying at least a little attention to what we're learning about banking networks, and how some risks are tied more directly to network linkages, rather than to the health of banks considered individually. The density of network linkages itself matters.

Research I've written about here suggests that there's essentially no way to safeguard a banking system unless we monitor the actual network of links connecting banks. It's certainly an encouraging step that someone is thinking about this and trying to find ways to bring density of linkages into the regulatory equation. I hope they're pondering the figure below from this study which shows how (in a model) the overall probability of a banking failure (the red line) at first falls with increasing diversification and linking between different banks, but then abruptly begins rising when the density gets too high.


The implication is that there's likely to be a sweet spot in network density (labeled in the figure as diversification, this being the number of links a bank has to other banks) from which we should not stray too far, whether the big banks like it or not.
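
For anyone who wants to play with this trade-off, here's a toy model of my own devising -- much simpler than the study's model, whose financial-acceleration effects my sketch only caricatures with a fixed bankruptcy cost, and with invented parameters throughout. Diversification averages away each bank's individual shock, but every failed partner inflicts a cost on its creditors, so sweeping the number of links per bank exercises both effects; where the failure rate bottoms out depends entirely on the parameters:

```python
import random

def avg_failures(k, n=100, trials=300, sigma=0.08, capital=0.05,
                 cost=0.02, fixed=0.004, seed=1):
    """Toy banking network (invented parameters, not the study's model).
    Each bank shares its external shock with k partners (diversification),
    but a failed partner also imposes a fixed bankruptcy cost on each of
    its creditors (contagion). Returns the mean fraction of failed banks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        partners = [rng.sample([j for j in range(n) if j != i], k)
                    for i in range(n)]
        eps = [rng.gauss(0, sigma) for _ in range(n)]
        # risk sharing: net shock is averaged over self plus k partners
        net = [(eps[i] + sum(eps[j] for j in partners[i])) / (k + 1)
               for i in range(n)]
        failed = {i for i in range(n) if net[i] < -capital}
        while True:  # propagate losses from failures until nothing changes
            new = set()
            for i in range(n):
                if i in failed:
                    continue
                hit = sum(cost / k + fixed
                          for j in partners[i] if j in failed)
                if net[i] - hit < -capital:
                    new.add(i)
            if not new:
                break
            failed |= new
        total += len(failed) / n
    return total / trials

for k in (1, 2, 4, 8, 16, 32):
    print(f"links per bank k = {k:2d}: "
          f"avg failure rate = {avg_failures(k):.3f}")
```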

Thursday, June 16, 2011

Complexity economics

Complexity is one of the hottest concepts currently nipping at the fringes of economic research. But what is it? Richard Holt, David Colander and Barkley Rosser last year wrote a nice essay on the concept which makes some excellent points, and they're certainly optimistic about the prospects for a sea change in the way economic theory is done:
The neoclassical era in economics has ended and has been replaced by an unnamed era. We believe what best characterizes the new era is its acceptance that the economy is complex, and thus that it might be called “the complexity era.”
Indeed, something like this seems to be emerging. I've been writing about complexity science and its applications in physics, biology and economics for a decade, and the ideas are certainly far more fashionable now than they were before.

But again -- what is complexity? One key point that Holt and colleagues make is that complexity science recognizes and accepts that many systems in nature cannot be captured in simple and timeless equations with elegant analytical solutions. That may have been true with quantum electrodynamics and general relativity, but it's decidedly not true for most messy real world systems -- and this goes for physics just as much as it does for economics. Indeed, I think it is fair to say that much of the original impetus behind complexity science came out of physics in the 1970s and 80s as the field turned toward the study of collective organization in disordered systems -- spin glasses and glassy materials, granular matter, the dynamics of fracture and so on. There are lots of equations and models used in the physics of such systems, but no one has the intention or hope of discovering the final theory that would wrap up everything in a tidy formula or set of axioms.

Most of the world isn't like that. Rather, understanding means creating and using a proliferation of models, every one of which is partial and approximate and incomplete, which together help to illuminate key relationships -- especially relationships between the properties of things at a lower level (atoms, molecules, automobiles or people) and those at a higher level (crystal structures, traffic jams or aggregate economic outcomes).

Holt and colleagues spend quite some time discussing definitions of complexity, a task that I'm not sure is really worth the effort. But they do arrive at a useful distinction between three broad views -- a general view, a dynamic view, and a computational view. The second of these seems most interesting and directly related to emerging research focusing on instabilities and rich dynamics in economic systems. As stated by Rosser, a system is "dynamically complex" if...
it endogenously (i.e. on its own, and not because of external interference) does not tend asymptotically to a fixed point, a limit cycle, or an explosion.
In other words, a dynamically complex system never settles down into one equilibrium state, but has rich internal dynamics which persist. I'm not sure this definition covers all the bases, but it comes close and certainly strikes in the right direction.

Holt, Colander and Rosser go on to outline a number of areas where the complexity viewpoint is currently altering the landscape of economic research. I wouldn't quibble with the list: evolutionary game theory is bringing institutions more deeply into economic analysis, ecological economics is actually bringing the consideration of biology into economics (imagine that!), behavioural economics is taking account of how real people behave (again, what a thought!), agent-based models are providing a powerful alternative to analytical models, and so on.

This is all insightful, but I think one point could be more strongly emphasized -- the absolute need to recognize that what happens at the higher, macro-levels of a system often depends in a highly non-intuitive way on what happens at the micro-level. One of the principal barriers to progress in economics and finance, in my opinion, has been the systematic effort by theorists over decades to avoid facing up to this micro-to-macro problem, typically through various analytical tricks. The most powerful trick -- a pair of tricks, really -- is to assume 1) that individuals are rational (hence making the study of human behaviour a problem not of psychology but of pure mathematics) and 2) that the behaviour of collective groups, indeed entire markets and economies, can be calculated as the simple sum of the rational actions of the people making them up (in what is called the representative agent method).

The effect of this latter trick is actually quite amazing -- it eliminates from the problem, by definition, all of the interesting interactions and feedbacks between people which make economies and markets rich and their dynamics surprising. Having done this, economics becomes a mathematical task of exploring the nature of rational behaviour -- it essentially places the root of complex collective economic outcomes inside the logical mind of the individual.

To put it most simply, this way of thinking tends to attribute outcomes in collective systems directly to properties of the parts at the individual level, which is a terrific mistake. That might sometimes be the case. But we know from lots of examples in physics, biology and computer science that very simple things, in interaction, can give rise to astonishing complexity in a collective group. We should expect the same in social science: many surprising and non-intuitive phenomena at the collective level may reflect nothing tricky at all in the behaviour of individuals, but only the tendency for rich structures and dynamics to emerge in collective systems, created by myriad pathways of interaction among the system's parts.
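
A classic illustration -- my example, not one from the essay -- is Alan Kirman's ant model: agents switch between two states through random pairwise encounters plus a tiny rate of independent change, and the aggregate swings persistently between extremes even though no individual is doing anything complicated. A minimal sketch:

```python
import random

def kirman(n=100, eps=0.002, steps=20000, seed=3):
    """Kirman's recruitment model: each step one agent either switches
    state spontaneously (prob eps) or copies a randomly met agent.
    Returns the time series of the fraction in state 1."""
    rng = random.Random(seed)
    k = n // 2                      # number of agents in state 1
    path = []
    for _ in range(steps):
        agent_in_1 = rng.random() < k / n
        if rng.random() < eps:      # independent switching
            k += -1 if agent_in_1 else 1
        else:                       # recruitment by a random partner
            partner_in_1 = rng.random() < k / n
            if agent_in_1 != partner_in_1:
                k += 1 if partner_in_1 else -1
        k = max(0, min(n, k))
        path.append(k / n)
    return path

path = kirman()
print(f"mean fraction in state 1: {sum(path) / len(path):.2f}")
print(f"fraction of time near an extreme (<10% or >90%): "
      f"{sum(p < 0.1 or p > 0.9 for p in path) / len(path):.0%}")
```
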

Taking this point seriously is what I think most of complexity science -- as applied to social systems -- is about. It's certainly central to everything I write about under the phrase the "physics of finance." I take physics in the broad sense as the study of how organization and order and form well up in collective systems. It so happened that physics started out on this project in the context of physical stuff, electrons, atoms, molecules and so on, but that's merely a historical accident. The insights and methods physics has developed aren't bound by the nature of the particular things being discussed, and this project of understanding the emergence of collective order and organisation goes well beyond the traditional subject matter of physics.

Holt, Colander and Rosser make one other interesting point, about the resistance of macro-economists in general to the new ways of thinking:
Interestingly, these cutting edge changes in micro theory toward inductive analysis and a complexity approach have not occurred in macroeconomics. In fact, the evolution of macroeconomic thinking in the United States has gone the other way. By that, we mean that there has been a movement away from a rough and ready macro theory that characterized the macroeconomics of the 1960s toward a theoretically analytic macro theory based on abstract, representative agent models that rely heavily on the assumptions of equilibrium. This macro work goes under the name new Classical, Real Business cycle, and the dynamic stochastic general equilibrium (DSGE) theory, and has become the mainstream in the U.S.
This is quite depressing, of course, but if most of what Holt and colleagues write in their essay is true, it cannot possibly stay this way for long. Economics as a whole is changing very rapidly and macro can't remain as it is indefinitely. Indeed, there are already a number of researchers aiming to build up macro-economic models "from the bottom up" -- see this short essay by Paul De Grauwe, for example. All this spells certain near-term doom for the Rational Expectations crowd, and that doom can't come a moment too soon.

Wednesday, June 15, 2011

Slippery bankers slither through the constraints...

A couple of months ago I wrote this short article in New Scientist magazine (it seems to be freely available without subscription). It pointed out a troubling fact about recent measures proposed to stop bankers (and CEOs and hedge fund managers, etc.) from taking excessive risks which bring them big personal profits in the short run while saddling their firms (and taxpayers) with losses in the long run. The problem, as economists Peyton Young and Dean Foster have pointed out, is that none of these schemes will actually work. As long as executives keep tight control over the details of how they run and invest for their firms, they can always game the system while making it look to outsiders as if they're not taking excessive risks. There's no way to control this without much greater transparency, so that shareholders or investors can see clearly what strategies executives are using.

Here's the gist of the article:
You might think that smarter rules could get around the problem. Delay the bonuses for five years, perhaps, or put in clauses allowing the investor to claw back pay if it proves undeserved. But it won't work. Economists Dean Foster of the University of Pennsylvania in Philadelphia and Peyton Young of the University of Oxford have now extended Lo's argument, showing that the problem is simply unsolvable as long as investors cannot see the strategies used by managers (Quarterly Journal of Economics, vol 125, p 1435).

As they point out, delaying bonuses for a few years may deter some people as the risky strategies could explode before the period is up. But a manager willing to live with the uncertainty can still profit enormously if they don't. On average, managers playing the game still profit from excess risk while sticking the firm with the losses.

So-called claw back provisions, in the news as politicians ponder how to fix the system, will also fail. While sufficiently strong provisions would make it unprofitable for managers to game the system, Young and Foster show that such provisions would also deter honest managers who actually do possess superior management skills. Claw back provisions simply end up driving out the better managers that incentives were meant to attract in the first place.
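
To see the logic in miniature, here's a toy simulation of the kind of strategy Foster and Young analyze -- a manager quietly selling tail risk that pays a steady premium until it blows up. All the numbers are invented; the point is that even full forfeiture of bonuses deferred for five years leaves gaming profitable in expectation:

```python
import random

def manager_payoff(defer_years=5, p_blowup=0.1, premium=0.03,
                   bonus_rate=0.2, trials=100_000, seed=4):
    """Expected bonus (per $1 of fund) for a manager selling hidden tail
    risk: earn `premium` per year until a blowup wipes out the fund.
    Bonuses are paid only if the fund survives `defer_years`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        returns = 0.0
        for _ in range(defer_years):
            if rng.random() < p_blowup:
                returns = 0.0       # blowup: deferred bonuses forfeited
                break
            returns += premium
        else:
            total += bonus_rate * returns  # survived: bonuses vest
    return total / trials

# The manager's expected payoff stays positive even though investors
# bear a catastrophic loss with probability 1 - 0.9^5.
print(f"expected deferred bonus: {manager_payoff():.4f} per $1 managed")
print(f"five-year survival probability: {0.9**5:.2f}")
```
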
This work by Young and Foster is among the more profound things I've read on the matter of pay for performance and the problems it has created. You would think it would be front and center in the political process of coming up with new rules, but in fact it seems to get almost no attention whatsoever. From this article in today's Financial Times, it seems that bankers have quite predictably made some quick adjustments to the way they get paid, conforming to the letter of the new rules without actually changing anything:
Bank chiefs' average pay in the US and Europe leapt 36 per cent last year to $9.7m, according to data compiled for the Financial Times, despite variable performance across the sector....

Regulators have declined to impose caps on bank pay, instead introducing changes they believe will limit incentives to take excessive risks. That has led many banks to increase fixed salaries, reduce employees’ reliance on annual bonuses and defer cash and stock awards over several years.

“The real story around pay is the progress on ensuring bonuses are deferred, paid in shares and subject to clawback and performance targets, rather than the headline figure,” said Angela Knight, British Bankers’ Association chief executive.
It's just too bad that we already know it won't work.

The Fear Index rises...

So far this month the so-called Fear Index -- the Chicago Board Options Exchange's volatility index, the VIX -- has risen by 24%. The VIX is a measure, roughly speaking, of how volatile market participants expect the markets to be over the next month, and these people, apparently, have growing uncertainty about a whole lot of things. What's going to happen with Greece and is the Eurozone itself safe? In the US, what happens now that the Federal Reserve has stopped its program of "quantitative easing" -- i.e. increasing the supply of money?

In the Financial Times, Gavyn Davies suggests the world's economic leaders are "out to sea" and simply unable to coordinate their actions. The situation, he suggests, has been...
...triggered by the realisation that policy-makers around the world are no longer in any condition to rescue the global economy from a further slowdown, should that occur. Economies, and therefore markets, are currently flying without an automatic safety net from either fiscal or monetary policy.  And that brings new risks, compared to the situation which has been in place for most of the period since the 2009.
This is the world as reflected in the financial press -- one in which emotions and fears and hunches and uncertainty play a huge, indeed central role. Hardly surprising.

What is surprising, and rather odd, is how far this is from the view of academic economists, who assume (most of them, at least) that irrational fears and crazy expectations have little to do with market outcomes, which actually reflect in some way individuals' rational assessments of future prospects. This is the Rational Expectations view of economics, still considered, amazingly enough, the gold standard in economic theorizing. It's been dominant for around 40 years, since it was first proposed by economists John Muth and Robert Lucas. Fortunately, it is finally giving way to a more realistic and less rigid view (more on this below) which takes note of something quite important to human beings -- our ability to learn and adapt.

I've long found it hard to understand how the Rational Expectations idea came to be taken seriously at all by economists. Its primary assertion is that the predictions of individuals and firms about the economic future, while they may not be completely identical, are not on average incorrect. People make random but unbiased errors, the idea goes, and so the average expectation is rational -- it captures the true model of how the economy works. This is patently absurd in the face of historical experience: the average view of the future in 2005, for example, was not that a global financial and economic crisis lay just around the corner. Perhaps a determined Rational Expectations enthusiast would disagree. What evidence is there, after all, about what people's expectations really were at the time?

The standard defense of the Rational Expectations framework follows the same three-tiered logic used in defense of the Efficient Markets Hypothesis (not surprisingly, given the close link between the two ideas). The first defense is to assert that people are indeed rational. Given evidence that they are not, the second defense is to admit that, OK, people make all kinds of errors, but to assert that they make them in different ways; hence, they are not irrational on average. Of course, the behavioral economics crowd has now thoroughly trashed this defense, showing that people are systematically irrational (overconfident, etc.) in similar ways, so their errors won't cancel out. Hence, the third line of defense -- arbitrage. Starting with Milton Friedman and continuing for decades, defenders of the rational core of economic theory have argued that competition in the marketplace will drive the irrational out of the market as the more rational participants take their money. The strong prey upon the weak and drive them out of existence.

For example, in 1986, Robert Lucas wrote the following, expressing his view that anything other than rational expectations would be weeded out of the market and ultimately play no role:  
Recent theoretical work is making it increasingly clear that the multiplicity of equilibria ... can arise in a wide variety of situations involving sequential trading, in competitive as well as finite-agent games. All but a few of these equilibria are, I believe, behaviorally uninteresting: They do not describe behavior that collections of adaptively behaving people would ever hit on.
Alas, this evolutionary or adaptive defense turns out not to work either. As Andrei Shleifer and others have shown, an arbitrageur would need access to an infinite amount of capital to be able to drive the irrational (so-called "noise traders") from the market, essentially because market "inefficiencies" -- mispricings due to the stupid actions of noise traders -- may persist for very long times. The market, as the famous saying goes, can stay irrational longer than you can stay solvent.

None of this has stopped most economists from plodding right on along behind the Rational Expectations idea, and they may well continue this way for another 50 years, stifling macroeconomics, with the rest of us paying the price (well beyond our tax dollars that go to fund such research). But there is fresh air in this field, let in by researchers willing to throw open some windows and let high theory be influenced by empirical reality.

It is obvious that expectations matter in economics. What happens in the future in markets or in the larger economy often depends on what people think is likely to happen. The problem with RE (Rational Expectations) isn't with the E but with the R. A far more natural supposition would be that people form their expectations not through any purely rational faculty but through a process of learning and adjustment, adaptively, as we do most other things. Indeed, a number of people have been exploring this idea in a serious way, through both models and experiments with real people. The results are eye-opening and demonstrate how much has been lost in a 40-year trance-like fascination with the (supposed) mathematical beauty of the rational expectations theory.

Cars Hommes has written a beautiful review of the area that is well worth close study. As he notes, there have been about 1000 papers published in the past 20 years on "bounded rationality" -- the finite mental capacity of real people -- and learning. These include agent-based models of markets in which the agents all use different, imperfect strategies and learn as they go, and other studies looking more closely at the process by which people form their expectations. Here are a few highlights:

* Studies from about 10 years ago explored the actual strategies followed by traders in a hog and beef market, and found empirical evidence for heterogeneity of expectations. One study estimated that "about 47% of the beef producers behave naively (using only the last price in their forecast), 18% of the beef producers behaves rationally, whereas 35% behaves quasi-rationally (i.e. use a univariate autoregressive time series model to forecast prices)."

Think of that -- in an ordinary hog and beef market we have roughly half the market participants behaving on the basis of an extremely simple heuristic.

* More recently, a number of researchers have used agent models to characterize data from markets for stock, commodities, foreign exchange and oil. As the review notes, "Most of these studies find significant time-variation in the fractions of agents using a mean-reverting fundamental versus a trend-following strategy."

That is, not only do the participants have distinct expectations, but those expectations and the basic strategies they follow shift over time. Not surprising really, but clearly not consistent with the rational expectations view.

* The review also points to recent seminal work by a team of physicists led by Fabrizio Lillo. This study looked at investors in a Spanish stock market and found that they fell into different groups based on their strategies, including trend followers and contrarians who bought against trends.

* And who says simple surveys can't be instructive? Various studies over 15 years using this traditional technique have found that financial experts "use different forecasting strategies to predict exchange rates. They tend to use trend extrapolating rules at short horizons (up to 3 months) and mean-reverting fundamentalists rules at longer horizons (6 months to 1 year) and, moreover, the weight given to different forecasting techniques changes over time." Similarly, for peoples' views on inflation, a study found that "heterogeneity in inflation expectations is pervasive..."

So then, why does the use of rational and homogeneous expectations persist in economics and finance? The review doesn't answer this question, but in view of the massive quantities of data pointing to a much richer and more complex process by which people form their expectations, the persistence of the rational expectations view in mainstream macroeconomic models -- this is my understanding, at least -- seems fairly scandalous.

So what's going on with the VIX? No one really knows, and I'm absolutely sure that the answer cannot emerge from any general equilibrium model with rational expectations. Times of uncertainty lead, in general, to two effects -- 1) a growing diversity in views, given the lack of information that would push most people toward one, and 2) a growing urgency to find some clue about the future, some guide to future behaviour. Urgency in the face of uncertainty leads many people -- as this study in Science from a couple of years ago showed -- to see what they take to be meaningful patterns even in purely random noise. Hence, we need a theory of adaptive expectations, and it needs to include irrational expectations. The good news is that this idea is finally getting a lot of attention, and the old faith that evolutionary competition in some form would enforce the tidy picture of rational expectations can be consigned to the dustbin of history.
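
For a flavor of what an adaptive, heterogeneous alternative looks like, here's a minimal sketch in the spirit of the Brock-Hommes models Hommes works with (my parameters are invented): fundamentalists expect reversion toward a fundamental value, trend followers extrapolate recent moves, and agents drift toward whichever rule has recently forecast better:

```python
import math
import random

def simulate(steps=2000, fund=100.0, beta=1.0, g=1.5, v=0.3,
             adjust=0.5, seed=5):
    """Toy heterogeneous-expectations market. The price moves toward the
    average forecast; the share using each rule updates via a logit rule
    on smoothed squared forecast errors (intensity of choice `beta`)."""
    rng = random.Random(seed)
    p_prev, p = fund, fund + 1.0
    err_f = err_t = 0.0              # running forecast errors of each rule
    path = [p_prev, p]
    for _ in range(steps):
        f_fund = p + v * (fund - p)          # fundamentalist forecast
        f_trend = p + g * (p - p_prev)       # trend-follower forecast
        # logit switching: better recent performance attracts more agents
        x = max(-50.0, min(50.0, beta * (err_t - err_f)))
        frac_f = 1.0 / (1.0 + math.exp(-x))  # share using fundamentalist rule
        forecast = frac_f * f_fund + (1.0 - frac_f) * f_trend
        p_new = p + adjust * (forecast - p) + rng.gauss(0, 0.2)
        # update smoothed squared errors of each rule
        err_f = 0.9 * err_f + 0.1 * (p_new - f_fund) ** 2
        err_t = 0.9 * err_t + 0.1 * (p_new - f_trend) ** 2
        p_prev, p = p, p_new
        path.append(p_new)
    return path

path = simulate()
dev = [abs(x - 100.0) for x in path]
print(f"mean |price - fundamental|: {sum(dev) / len(dev):.2f}")
print(f"largest deviation: {max(dev):.2f}")
```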

         *** UPDATE ***

I just want to clarify that what I've written in this post doesn't begin to do justice to the rich material in Cars Hommes' review article. I mentioned some of the earlier empirical work he discusses, but much of the paper reviews the results of extensive laboratory experiments, by his group and others, exploring how people actually do form expectations in situations where the experimenter has close control over the process in question. For example, a price fluctuates from period to period and volunteers are asked to predict it over 50 periods. They are told very little, except some facts about where the prices come from -- from a speculative market such as a stock market, or instead from a very different market with perishable goods. Skeletal information of this kind can make them expect positive feedbacks, hence the potential for self-reinforcing trends, or quite the opposite.

There's nothing in these experiments that explains why they weren't done 30, 40 or 50 years ago. Why didn't someone in the 1970s start such a program to really test the Rational Expectations idea? (Did they, and I'm just not aware of it?) It's a shame, because these experiments yield a wealth of insight, as well as directions for future work. I'll just quote from Hommes' conclusions (my comments in brackets):
Learning to forecast experiments are tailor-made to test the expectations hypothesis, with all other model assumptions computerized and under control of the experimenter. Different types of aggregate behavior have been observed in different market settings. To our best knowledge, no homogeneous expectations model [rational or irrational] fits the experimental data across different market settings. Quick convergence to the RE-benchmark only occurs in stable (i.e. stable under naive expectations) cobweb markets with negative expectations feedback, as in Muth's (1961) seminal rational expectations paper. In all other market settings persistent deviations from the RE fundamental benchmark seem to be the rule rather than the exception.

Wednesday, June 8, 2011

The Earth is full

Sometimes New York Times columnist Thomas Friedman really does hit the nail on the head, as he does in his column today, The Earth is Full.

Economists have been glorifying unending economic growth for half a century, while systematically discounting (quite literally) the social and ecological costs, especially in the long term. So much economic analysis seems to take place in a weird vacuum in which the laws of physics and biology don't apply, but an increasing number of scientists and economists have begun questioning this, as Peter Victor did last year in Nature. As Victor pointed out, questioning economic growth is typically viewed as heresy, even though it has obvious malign consequences for the physical and biological environment on which we depend. In pursuing growth, we've increased our total throughput of raw materials and fuel by a factor of nearly 10 in the 20th century alone, increasing our spillage of degraded waste (including waste heat, see below) into the environment by a similar factor. So it is perhaps not at all surprising that a different article in Nature, as Victor notes, concludes that...
"Humanity has gone beyond the 'safe operating space' of the planet with respect to climate change, nitrogen loadings and biodiversity loss, and threatens to do so with six other major global environmental issues."
Friedman puts the same point -- that we face real consequences from physical and biological limits, no matter how clever our economics -- in more vibrant terms:
"You really do have to wonder whether a few years from now we’ll look back at the first decade of the 21st century — when food prices spiked, energy prices soared, world population surged, tornadoes plowed through cities, floods and droughts set records, populations were displaced and governments were threatened by the confluence of it all — and ask ourselves: What were we thinking? How did we not panic when the evidence was so obvious that we’d crossed some growth/climate/natural resource/population red lines all at once?"
Of course, the natural response on confronting such red lines is to suggest that with sufficient cleverness we might find a way around them, perhaps even grow our way around them -- invest in business and technology now and by some miracle discover perpetual motion machines. This explains, I think, why many people have treated with scorn Germany's decision to phase out nuclear energy over the next decade. Why would the Germans restrict their options, especially given the real limits we face on CO2 emissions? Quite possibly or even probably the German decision was a feckless and emotional response in the wake of the Japanese Fukushima disaster, but it may also, even if unintentionally, respect some unalterable facts associated with Friedman's title -- The Earth is Full.

A couple of years ago, Nick Cowern and Chihak Ahn estimated the amount of energy we humans dissipate into the environment as heat through our energy use. Currently such energy plays a very small and largely insignificant role in the overall energy budget. But our energy use is growing at 2% per year globally, and in 100 or 150 years, they showed, the heat we dissipate will be enough to begin warming the climate again, even in a dream world in which we control the warming associated with CO2 emissions. The conclusion is that we will soon simply be using too much energy, which, by the laws of thermodynamics, must end up heating the Earth's environment. We're becoming too big. The Earth is too full -- of us. So, we might well switch to nuclear energy, and doing so might help us counter warming from CO2 for a time, but it doesn't get us away from the rock-bottom problem that we are (or soon will be) using and dissipating too much energy. The abstract of the Cowern-Ahn paper describes the lesson of this analysis:
Global warming arises from 'temperature forcing', a net imbalance between energy fluxes entering and leaving the climate system and arising within it. Humanity introduces temperature forcing through greenhouse gas emissions, agriculture, and thermal emissions from fuel burning. Up to now climate projections, neglecting thermal emissions, typically foresee maximum forcing around the year 2050, followed by a decline. In this paper we show that, if humanity's energy use grows at 1%/year, slower than in recent history, and if thermal emissions are not controlled through novel energy technology, temperature forcing will increase indefinitely unless combated by geo-engineering. Alternatively, and more elegantly, humanity may use renewable sources such as wind, wave, tidal, ocean thermal, and solar energy that exploit energy flows already present in the climate system, or act as effective sinks for thermal energy.
The last sentence is key. Nuclear energy brings another source of energy into play on Earth, rapidly releasing energy which would otherwise remain sequestered in nuclear forces for long periods of time. If we continue to increase our energy use, as growth enthusiasts assume we must, the only way to avoid climate warming will be to somehow counter our increased dissipation by reducing energy dissipation from other sources in the environment. Nuclear energy won't do that.
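
The arithmetic is worth seeing. A back-of-envelope sketch with round numbers of my choosing -- roughly 17 TW of current primary energy use, 2% annual growth -- asking when direct waste heat approaches the ~1 W/m² scale at which it starts to rival greenhouse forcing:

```python
import math

EARTH_SURFACE_M2 = 5.1e14     # total surface area of the Earth
POWER_NOW_W = 1.7e13          # ~17 TW of primary energy use (round number)
GROWTH = 0.02                 # 2% per year, the figure quoted above

forcing_now = POWER_NOW_W / EARTH_SURFACE_M2
print(f"waste-heat forcing today: {forcing_now * 1000:.1f} mW/m^2")

# Years until waste heat reaches 1 W/m^2, roughly the scale of current
# anthropogenic greenhouse forcing (a few W/m^2 in total).
target = 1.0  # W/m^2
years = math.log(target / forcing_now) / math.log(1 + GROWTH)
print(f"years to reach {target} W/m^2 at {GROWTH:.0%} growth: {years:.0f}")
```
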

Just another way to see that there are indeed physical limits to growth, for us just as much as for a colony of bacteria expanding into a jar of sugar water. Why do so many people find this hard to accept?

Monday, June 6, 2011

So quickly we forget...

One of the lessons we'd supposedly learned from the financial crisis is the need to look seriously at systemic risks -- risks not tied to the well-being of any one institution, but which emerge out of the interconnections and interdependence of many. Systemic risk is the new buzz phrase in research on banking network stability. The Dodd-Frank financial reforms still under negotiation aim to improve systemic stability (in some small way, at least), in part by raising capital requirements on individual banks.

It's doubtful this will be enough. As a number of studies (notably this one) have shown (see further comment below), the propagation of distress through a banking network depends on more than the situation facing each individual bank -- the density of links between banks matters too. More links of financial dependence between banks can share risks and make a network more stable, in some cases, but they can also make the system more fragile. So raising capital requirements isn't enough. Still, it's at least a step in the right direction.

After all, if there's one thing we've learned about the risks linked to rare, fat-tail-associated crises, it's that years of apparent profitability can be wiped out in a few tumultuous weeks. Two years ago financial writers couldn't repeat often enough what they'd learned from Nassim Taleb's The Black Swan -- that there's real danger in pushing up leverage and ignoring tail risks in pursuit of short-term profits, and that "profitability," sensibly considered, has to refer to the long term. Now they have apparently forgotten, as this article in today's New York Times makes clear. Bank stocks are down. The banks are worried about all the new regulations eating into their profits. Especially those pesky capital requirements:
... new international rules now being developed to require major institutions to hold more capital as a buffer against future financial crises will also erode profitability. That is because money set aside as ballast is cash that will not be available to lend out or pay dividends or buy back stock.
This is only true if you take "profitability" in a special, narrow, short-term sense. That money set aside as ballast might just enable a bank to survive tough times in the future, and might also make the entire banking network more stable, along with the broader economy. The kind of "profitability" this paragraph is talking about isn't the kind we need.

Coincidentally, the paper I mentioned above -- which studies a model network of interdependent banks -- explores a scenario that economist Mark Thoma has described quite beautifully (apparently without having the paper in mind, though perhaps he did). The point is that there's an interesting trade-off, which we don't understand very well, between the increasing safety that comes from risk-sharing as the network grows more dense, and the simultaneously increasing danger of cascading distress that dense ties make more likely. As Thoma puts it,
When I think of an interconnected network that can spread problems from institution to institution, there are two possible scenarios. First, think of a drop of poison in the ocean. The ocean is so big that even a powerful poison can be neutralized as it spreads.... This is much like traditional financial risk sharing where large individual losses are spread throughout the system so that the losses to any one person are tiny.

But now think of a poison that acts more like an infection. As it spreads it does not become less toxic, it continues to be lethal to anyone who comes in contact with it. In this case, you want to break the network connections -- i.e. compartmentalize -- as soon as possible to prevent the spread of the lethal infection. You may even want to have the compartments set in advance if you cannot sever the ties fast enough.
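The trade-off is easy to see in a toy simulation. Here's a minimal sketch in the spirit of Gai-Kapadia-type contagion models -- my own illustration, not the specific model of the paper linked above. Each bank spreads its interbank lending equally over its neighbours, and fails once losses from failed neighbours exhaust its capital buffer:

    import random

    # Toy contagion on a random interbank network (my own minimal sketch).
    # Each bank's interbank assets total 1, split equally among neighbours;
    # a bank fails once its cumulative losses exceed its capital buffer.

    def cascade_size(n_banks, avg_degree, capital_frac, trials=200):
        sizes = []
        p_link = avg_degree / (n_banks - 1)
        for _ in range(trials):
            # build a random (Erdos-Renyi) network of exposures
            nbrs = [set() for _ in range(n_banks)]
            for i in range(n_banks):
                for j in range(i + 1, n_banks):
                    if random.random() < p_link:
                        nbrs[i].add(j)
                        nbrs[j].add(i)
            losses = [0.0] * n_banks
            failed = [False] * n_banks
            seed = random.randrange(n_banks)   # one bank fails at random
            failed[seed] = True
            frontier = [seed]
            while frontier:
                bank = frontier.pop()
                for nb in nbrs[bank]:
                    if failed[nb]:
                        continue
                    losses[nb] += 1.0 / len(nbrs[nb])  # lose exposure to failed bank
                    if losses[nb] > capital_frac:      # buffer exhausted -> failure
                        failed[nb] = True
                        frontier.append(nb)
            sizes.append(sum(failed) / n_banks)
        return sum(sizes) / len(sizes)

    for k in [1, 2, 4, 8, 16]:
        print(f"avg degree {k:2d}: mean cascade fraction "
              f"{cascade_size(200, k, capital_frac=0.25):.3f}")

Run it and you'll typically see cascades stay small at very low and very high connectivity, with a dangerous window in between: dense enough for distress to find paths through the network, but not dense enough for each individual exposure to be diluted to harmlessness.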

Friday, June 3, 2011

The trouble with today's macroeconomics...

This lengthy post by Willem Buiter -- currently Chief Economist at Citigroup -- is now a couple of years old, but it touches on many shortcomings of current academic macroeconomic modeling: its reliance on the notion of equilibrium, its excessive assumptions about rationality, its inadequate attention to non-linearities, and so forth. I list a few choice excerpts below, but the whole thing is very much worth a read.

On the value of the rational expectations revolution which has dominated mainstream macroeconomic modeling for the past 30 years:
... the typical graduate macroeconomics and monetary economics training received at Anglo-American universities during the past 30 years or so, may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources.

Most mainstream macroeconomic theoretical innovations since the 1970s (the New Classical rational expectations revolution associated with such names as Robert E. Lucas Jr., Edward Prescott, Thomas Sargent, Robert Barro etc, and the New Keynesian theorizing of Michael Woodford and many others) have turned out to be self-referential, inward-looking distractions at best.  Research tended to be motivated by the internal logic, intellectual sunk capital and esthetic puzzles of established research programmes  rather than by a powerful desire to understand how the economy works...  
On the matter of mathematical assumptions turning up in dynamic programming problems, and their unjustified inclusion as assumptions about the behaviour of real people in real markets:
The common practice of solving a dynamic general equilibrium model of a(n) (often competitive) market economy by solving an associated programming problem, that is, an optimisation problem, is evidence of the fatal confusion in the minds of much of the economics profession between shadow prices and market prices and between transversality conditions that are an integral part of the solution to an optimisation problem and the long-term expectations that characterise the behaviour of decentralised asset markets.  The efficient markets hypothesis assumes that there is a friendly auctioneer at the end of time – a God-like father figure – who makes sure that nothing untoward happens with long-term price expectations or (in a complete markets model) with the present discounted value of terminal asset stocks or financial wealth. 

... The future surely belongs to behavioural approaches relying on empirical studies on how market participants learn, form views about the future and change these views in response to changes in their environment, peer group effects etc.  Confusing the equilibrium of a decentralised market economy, competitive or otherwise, with the outcome of a mathematical programming exercise should no longer be acceptable.
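For readers who haven't met the jargon: in a textbook infinite-horizon problem -- a planner choosing consumption c_t and capital k_{t+1} to maximize discounted utility -- the transversality condition Buiter mentions takes a form like this (a standard textbook statement, not notation from his post):

\[
\max_{\{c_t,\,k_{t+1}\}} \sum_{t=0}^{\infty} \beta^t u(c_t)
\quad \text{s.t.} \quad k_{t+1} = f(k_t) - c_t,
\qquad
\lim_{t \to \infty} \beta^t u'(c_t)\, k_{t+1} = 0 .
\]

The limit says the shadow value of capital left over "at the end of time" must vanish. It's a boundary condition on a mathematical optimisation; Buiter's complaint is that modelers quietly treat it as a fact about the long-term expectations of real traders in decentralised markets.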
On the systematic obliteration of non-linearities and positive feedbacks in today's macro models, even in the face of obvious empirical evidence for their importance:

If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave.  So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved.  This was achieved by completely stripping the model of its non-linearities and by achieving the transsubstantiation of complex convolutions of random variables and non-linear mappings into well-behaved additive stochastic disturbances.

Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc.  will appreciate what is lost by this castration of the macroeconomic models.  Threshold effects, critical mass, tipping points, non-linear accelerators – they are all out of the window. ...The practice of removing all non-linearities and most of the interesting aspects of uncertainty from the models that were then let loose on actual numerical policy analysis, was a major step backwards.  I trust it has been relegated to the dustbin of history by now in those central banks that matter.
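What the rubber hose means in practice is linearization around a steady state. Schematically -- this is the standard textbook procedure, not Buiter's own notation -- a nonlinear stochastic model gets replaced by its first-order approximation:

\[
x_{t+1} = f(x_t, \varepsilon_{t+1})
\;\;\longrightarrow\;\;
\hat{x}_{t+1} = A\,\hat{x}_t + B\,\varepsilon_{t+1},
\qquad
A = \left.\frac{\partial f}{\partial x}\right|_{\bar{x}},\quad
B = \left.\frac{\partial f}{\partial \varepsilon}\right|_{\bar{x}},
\]

where \( \hat{x}_t \) is the deviation from the steady state \( \bar{x} \). Everything Buiter lists -- threshold effects, critical mass, tipping points, accelerators -- lives in the higher-order terms this approximation throws away.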
Finally, on the usefulness of so-called Dynamic Stochastic General Equilibrium models, the current state of the art of the macroeconomics community:
Charles Goodhart, who was fortunate enough not to encounter complete markets macroeconomics and monetary economics during his impressionable, formative years, but only after he had acquired some intellectual immunity, once said of the Dynamic Stochastic General Equilibrium approach which for a while was the staple of central banks’ internal modelling: “It excludes everything I am interested in”. He was right.  It excludes everything relevant to the pursuit of financial stability.

Thursday, June 2, 2011

Universal switching patterns in market trends

One of the more intriguing simple facts about market fluctuations is their self-similarity in time. The statistics of market fluctuations over a short time, if stretched out in time and rescaled in value, look identical to fluctuations over longer times. What's happening over seconds is somehow intimately linked up with what's happening over hours, days or months.

This geometric order emerges from no end of modern studies of the statistics of market returns, which find power laws all over the place. Mandelbrot himself gave a nice exploration of self-similarity in market movements more than a decade ago, in the wake of the meltdown of Long-Term Capital Management -- a collapse caused in part by an inadequate appreciation of how likely sudden large fluctuations are in any system with fluctuations of this sort. Now everyone knows about Black Swans and fat tails, but there may be much deeper secrets lurking in the delicate structure of market ups and downs.
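The rescaling idea itself is easy to demonstrate. The sketch below is my own toy illustration, not an analysis from any of the studies just mentioned: Cauchy (Lévy-stable) increments are exactly self-similar, so aggregating returns over longer horizons and rescaling gives back the same distribution. Real market returns pass this kind of test only approximately, but the test is the same:

    import numpy as np

    # Toy illustration of statistical self-similarity (my own sketch).
    # Cauchy (Levy-stable, alpha = 1) increments are exactly self-similar:
    # summing n of them and rescaling by 1/n reproduces the distribution.

    rng = np.random.default_rng(0)
    r = rng.standard_cauchy(1_000_000)          # "one-tick" returns

    for n in [1, 10, 100]:
        # aggregate returns over horizon n, then rescale
        agg = r[: len(r) // n * n].reshape(-1, n).sum(axis=1) / n
        q = np.percentile(np.abs(agg), [50, 90, 99])
        print(f"horizon {n:4d}: |return| quantiles 50/90/99% = "
              f"{q[0]:.2f} {q[1]:.2f} {q[2]:.2f}")

The quantiles printed for horizons of 1, 10 and 100 ticks come out statistically identical -- that's self-similarity.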

That's the contention, at least, of Tobias Preis and colleagues in a recent paper in the Proceedings of the National Academy of Sciences. It is not by any means an easy study to digest, but it seems quite important, as it hints at some deep regularities in the way markets switch between upward and downward trends over a huge range of timescales, from seconds up to years. Indeed, it suggests that some very fundamental collective pattern of human decision making -- individuals sensing the shifting views of others and becoming more likely to act in the same way -- lies behind many market rallies and crashes, regardless of the timescale on which they play out.

Briefly, what Preis and colleagues have done is to analyze several different data sets for price movements in markets, on timescales ranging from 10 milliseconds (high-frequency trading in futures markets) up to around a trillion milliseconds, or several decades (the S&P stock market index over a 40-year period). Their analysis focuses on what they refer to as "switching events" -- an upward trend which breaks off into a negative one, or vice versa. These can be identified (or defined) in a mathematically unambiguous way. Focus your microscope on a timescale of duration Δt. The price is bouncing up and down, with p(t) giving the value versus time. The researchers then define p(t) to be ...
a local maximum of order Δt if there is no higher transaction price in the interval t − Δt ≤ t′ ≤ t + Δt, and is defined to be a local minimum of order Δt if there is no lower transaction price in this interval.
In other words, every interval must have both a maximum and a minimum, and these are taken as potential switching points -- moments at which the market moved from a trend in one direction to another. They go on to study the volume of individual trades in the market record as you move closer to or further away from these switching points. The payoff is that they find essentially the same mathematical pattern across all time scales, even when moving between the futures market and the stock market -- in effect, a signature of switching points that shows up in the volume of trades and predicts approaching changes.
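The definition translates directly into code. Here's a minimal sketch -- the variable names and random-walk test data are mine -- that finds the local maxima and minima of order Δt in an array of transaction prices:

    import numpy as np

    # Sketch of the switching-point definition: p(t) is a local maximum of
    # order dt if no higher price occurs within dt ticks on either side,
    # and a local minimum of order dt if no lower price occurs there.

    def extrema_of_order(p, dt):
        maxima, minima = [], []
        for t in range(dt, len(p) - dt):
            window = p[t - dt : t + dt + 1]
            if p[t] == window.max():
                maxima.append(t)
            elif p[t] == window.min():
                minima.append(t)
        return maxima, minima

    # quick check on a synthetic random walk
    rng = np.random.default_rng(1)
    prices = np.cumsum(rng.normal(size=2000))
    maxima, minima = extrema_of_order(prices, dt=50)
    print(len(maxima), "maxima and", len(minima), "minima of order 50")

Preis and colleagues then look at trading volume at renormalized times around each such extremum, which is what produces the averaged pattern described next.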

The basic pattern can be expressed in simple formulae, but the idea comes across even better in the diagram below. It shows how the volume of individual trades increases as the moment of a switching point approaches. Remember, this isn't just at one timescale, but is averaged over many timescales (this figure is for data at shorter times taken from a futures market):
[Figure: average volume per trade around a switching point, from futures-market data; the volume rises sharply as the switching point approaches.]
I apologize for the resolution, but you can see the basic point -- the middle of the diagram marks the moment of a switching point, and the volume per trade becomes very large as that moment approaches. That is, people start trading in larger and larger amounts as a switching point nears, and they do so before the switch actually happens. Their behaviour is, in effect, an early warning sign of an impending change in trend.

This is a highly provocative finding. The paper also explores some models inspired by physics which might explain how such a universal pattern can arise. In essence, the kinds of models that work reflect avalanche behaviour, as the actions of some people feed into and influence the actions of others. On any timescale, the same kind of dynamic may be at work -- many people follow a trend for a time; then a few become wary of a reversal and act accordingly; others begin sensing those people's actions and follow suit. A growing avalanche of behaviour reverses the trend -- and simultaneously launches a new one in the opposite direction.
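To give a flavour of the mechanism -- and only a flavour; this is a generic Granovetter-style threshold toy of my own, not the model analyzed by Preis and colleagues -- imagine traders who each reverse position once the fraction of others who have already reversed exceeds a personal threshold:

    import random

    # Generic threshold/imitation toy (Granovetter-style), meant only to
    # illustrate how avalanches of reversal arise when traders respond to
    # each other. NOT the model analyzed by Preis et al.

    def avalanche(n_traders=1000):
        # each trader reverses once the fraction of others already
        # reversed exceeds a personal threshold drawn at random
        thresholds = [random.random() for _ in range(n_traders)]
        flipped = 1                      # one wary trader reverses first
        while True:
            frac = flipped / n_traders
            new_flipped = max(sum(th < frac for th in thresholds), 1)
            if new_flipped == flipped:   # no one else persuaded -> stop
                return flipped
            flipped = new_flipped

    sizes = [avalanche() for _ in range(20)]
    print("avalanche sizes (out of 1000 traders):", sorted(sizes))

With thresholds drawn uniformly, the system sits near a critical point, and a single wary trader can set off reversals of almost any size -- a crude analogue of the scale-free switching the paper reports.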

As Preis and colleagues conclude,
In summary, we have seen that each trend—microtrend and macrotrend—in a financial market starts and ends with a unique switching process, and each extremum shares properties of macroscopic cooperative behavior. We have seen that the switching mechanism has no scale, for time scales varying over nine orders of magnitude down to the smallest possible time scale —the scale of single transactions measured in units of 10 ms. Thus, the well known catastrophic bubbles occurring on large time scales—such as the most recent financial crisis—may not be outliers but in fact single dramatic events caused by the inherent, scale-free behavior related to the formation of increasing and decreasing trends on time scales from the very large down to the very small.
We might call this Self-Similarity 2.0 for market fluctuations -- certainly inspired by Mandelbrot's discoveries of 50 years ago, but also going well beyond them.