Presently most stock markets, such as the New York
Stock Exchange, and most futures markets, such as the Chicago Mercantile
Exchange, use a market design called the continuous limit order book. "Continuous" means that whoever accepts a bid or ask first gets the trade. This can create a race that serves no economic purpose. (Billions have been spent on optical fiber cables and microwave links to shave milliseconds off how quickly traders can compare prices in NY and Chicago.) It can also make the market thinner in costly ways. (Liquidity providers have to quote wider bid-ask spreads to protect themselves against getting "sniped" when there is a news event and they don't adjust their quotes fast enough.)
An exciting market design paper documents this and suggests a solution (run a batch market every second, so that traders would have to compete on price rather than time):
The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response
by Eric Budish, Peter Cramton, and John Shim
Abstract: We propose frequent batch auctions – uniform-price double auctions conducted at frequent but discrete time intervals, e.g., every 1 second – as a market design response to the high-frequency trading arms race. Our argument has four parts. First, we use millisecond-level direct-feed data from exchanges to show that, under the continuous limit order book market design that is currently predominant, market correlations that function properly at human-scale time horizons completely break down at high frequency time horizons. Second, we show that this correlation breakdown creates purely technical arbitrage opportunities, which in turn creates an arms race to exploit such opportunities. Third, we develop a simple theory model motivated by these empirical facts. The model shows that the arms race is not only per se wasteful, but also leads to wider spreads and thinner markets for fundamental investors. Last, we use the model to show that batching eliminates the arms race, both because it reduces the value of tiny speed advantages and because it transforms competition on speed into competition on price. Consequently, frequent batch auctions lead to narrower spreads, deeper markets, and increased social welfare.
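To make the proposed mechanism concrete, here is a minimal sketch of clearing one batch as a uniform-price double auction. This is my own illustration, not the authors' code: the Order type, the clear_batch function, and the midpoint pricing rule are all simplifying assumptions. The key feature is that within a batch, arrival time is irrelevant; orders are prioritized by price alone and everything trades at one price.

```python
# Minimal sketch of one frequent-batch-auction round as a uniform-price
# double auction (illustrative only; names and the pricing rule are assumptions).
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    price: float   # limit price
    qty: int

def clear_batch(bids, asks):
    """Match highest bids against lowest asks; return the trades and a single
    uniform clearing price (None if no trade is possible)."""
    bids = sorted(bids, key=lambda o: -o.price)   # best (highest) bids first
    asks = sorted(asks, key=lambda o: o.price)    # best (lowest) asks first
    trades, i, j = [], 0, 0
    marginal_bid = marginal_ask = None
    while i < len(bids) and j < len(asks) and bids[i].price >= asks[j].price:
        qty = min(bids[i].qty, asks[j].qty)
        marginal_bid, marginal_ask = bids[i].price, asks[j].price
        trades.append((bids[i].trader, asks[j].trader, qty))
        bids[i].qty -= qty
        asks[j].qty -= qty
        if bids[i].qty == 0:
            i += 1
        if asks[j].qty == 0:
            j += 1
    if not trades:
        return [], None
    # One simple uniform-price rule: midpoint of the marginal bid and ask.
    return trades, (marginal_bid + marginal_ask) / 2

# Two buyers and one seller, submitted at any point during the one-second batch:
print(clear_batch([Order("A", 100.02, 100), Order("B", 100.01, 50)],
                  [Order("C", 100.00, 120)]))
```

The point of batching is that an order arriving one millisecond into the interval is treated exactly like one arriving 900 milliseconds in, so a tiny speed edge buys nothing; quoting a better price does.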
*************
Figure 1.1 of the paper beautifully illustrates why speed pays: it only takes milliseconds for a price movement on index futures in Chicago to be matched by a corresponding price change on the exchange-traded index fund in NY. Whoever sees that discrepancy first can earn the full arbitrage profits. (In a batch market every second, traders would have to compete for these...)
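To see why the race is winner-take-all under continuous trading, here is a toy simulation with made-up numbers (again my own illustration, not the paper's data): a futures move in Chicago leaves the NY quote briefly stale, and whichever order reaches the exchange before the liquidity provider's cancel captures the entire gap.

```python
# Toy sketch of the speed race behind Figure 1.1 (made-up latencies and prices).
import random

def race(num_racers=5, cancel_latency_ms=2.0):
    stale_ask = 100.00        # NY ETF ask, not yet updated
    new_fair_value = 100.05   # implied by the Chicago futures move
    # Assumed one-way latencies (ms) of the racing arbitrageurs' orders.
    latencies = {f"HFT{i}": random.uniform(1.0, 3.0) for i in range(num_racers)}
    winner, fastest = min(latencies.items(), key=lambda kv: kv[1])
    if fastest < cancel_latency_ms:
        # The fastest order beats the liquidity provider's cancel: that one
        # trader earns the full gap and the liquidity provider is "sniped".
        return winner, round(new_fair_value - stale_ask, 2)
    return None, 0.0          # the quote was pulled in time; nobody profits

print(race())
```

Being a fraction of a millisecond faster changes who wins but not the size of the prize, which is why the spending on speed is socially wasteful.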
Tim Harford has a nice summary of the paper here.
***************
The NY Times covered a different kind of early information (seconds, not milliseconds, and survey results rather than prices) in this pair of before-and-after stories:
Thomson Reuters to Suspend Early Peeks at Key Index,
Fair Play Measured in Slivers of a Second
From the second story:
"On Friday morning, Thomson Reuters released the latest University of Michigan Consumer Sentiment Index, as it does twice a month. But this time was different. As a result of a settlement Thomson Reuters reached this week with New York’s attorney general, Eric T. Schneiderman, a select group of its customers didn’t get the two-second advance release they’d been buying.
...
"The difference was arresting. On Friday, just 500 shares of a leading Standard & Poor’s 500 exchange-traded fund traded during the first 10 milliseconds of the two-second window before the release of the University of Michigan data to Thomson Reuters’ regular clients, according to the market research firm Nanex. A year ago, on July 13, 2012, 200,000 shares traded during that 10-millisecond period, Nanex said."
*******************
I shared a draft of this post with Eric Budish, who replied as follows:
"If you wanted to hook this paper into your own work, here are some potential connections (we chatted about these connections last time I was in Stanford):
- Serial vs. batch processing: we are criticizing continuous limit order books, which process messages one-at-a-time in serial and hence induce speed races, and proposing a batch auction in its place. This reminds me of your 1997 JPE paper on serial vs. batch processing …
- Congestion: the speed race creates congestion for the exchange's computers, which leads to a backlog in processing messages, which leads to traders being confused about the state of their orders, which creates uncertainty and occasionally bigger problems (backlog is most severe at times of especially high market activity, when reliance on low-latency information is also at its highest). We talk about this a bit in Section 8 of the paper
- Sniping: our empirical work and theory model highlight that an important cost of liquidity provision under the continuous limit order book is that liquidity providers are constantly getting "sniped" – when there is an arbitrage opportunity, such as the one you can see in Figure 1.1 of the paper, some poor liquidity provider is on the other side of that arbitrage opportunity and is losing money … he ultimately passes this cost on to fundamental investors via a wider bid-ask spread
- Thickness: continuous time is the ultimate thin market, in most dt's there is no activity whatsoever …
Not sure that any of this is worth mentioning, but it’s fun to see all of these themes from your work coming up in so different a context."
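To connect the sniping point in Eric's note to spreads, here is a stylized back-of-the-envelope calculation with invented parameters (my own simplification, not the model or calibration in the paper): a liquidity provider earns the half-spread on trades with fundamental investors and loses money when news jumps the price and a sniper beats her cancel, so her break-even half-spread widens as the number of snipers grows.

```python
# Stylized zero-profit condition assumed here (not the paper's exact model):
#   s * lambda_fundamental = (jump - s) * lambda_jump * prob_lose_race
# i.e. spread revenue from fundamental investors covers expected sniping losses.

def break_even_half_spread(jump, lam_fundamental, lam_jump, prob_lose_race):
    return jump * lam_jump * prob_lose_race / (lam_fundamental + lam_jump * prob_lose_race)

# More snipers racing the provider's cancel => she loses the race more often
# => a wider break-even spread, ultimately borne by fundamental investors.
for n_snipers in (1, 5, 20):
    p_lose = n_snipers / (n_snipers + 1.0)   # crude assumption about the race
    s = break_even_half_spread(jump=0.05, lam_fundamental=50, lam_jump=10,
                               prob_lose_race=p_lose)
    print(f"{n_snipers:2d} snipers -> break-even half-spread ~ {s:.4f}")
```

Batching works on exactly this term: if snipers can no longer reliably beat the provider's own price update within a batch, the probability of being sniped falls and the spread can narrow, which is the welfare claim in the abstract.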