Sunday, September 6, 2009
Matching children to classes
Slate has a story, Should parents meddle in their kids' classroom assignments?, which in turn prompted a post, the class matching problem, by Joshua Gans on his economics-and-parenting blog Game Theorist. (Gans' contribution to the "-onomics" library is called Parentonomics. He's also a prolific contributor to the econ- literature, and is likely to be a sabbatical visitor at Harvard in 2010. See my earlier post Market for ideas.)
In elementary school, kids have only one class. In high school, you have to assign each kid a bundle of classes. That makes the problem both harder—because bundles are hard, and there are complementarities—and easier, because while one class isn’t divisible, a bundle is…you can give a high school kid some good classes and some bad ones, and have it come out to an ok schedule. Eric Budish has made some progress on this, although without worrying about putting friends in the same class (see my earlier post Course allocation, by Eric Budish).
Friday, May 18, 2018
Eric Budish on (expensive) blockchain technology
The Economic Limits of the Blockchain
by Eric Budish
May 3, 2018
Abstract: The amount of computational power devoted to blockchains such as Bitcoin’s must simultaneously satisfy two conditions in equilibrium: (1) a zero-profit condition among miners, who engage in a rent-seeking competition for the prize associated with adding the next block to the chain; and (2) an incentive compatibility condition on the system’s vulnerability to a “majority attack”, namely that the computational costs of such an attack must exceed the benefits. Together, these two equations imply that (3) the recurring, “flow”, payments to miners for running the blockchain must be large relative to the one-off, “stock”, benefits of attacking it. The constraint is softer (i.e., stock versus stock) if both (i) the mining technology used to run the blockchain is both scarce and non-repurposable, and (ii) any majority attack is a “sabotage” in that it causes a collapse in the economic value of the blockchain; however, reliance on non-repurposable technology for security and vulnerability to sabotage each raise their own concerns, and point to specific collapse scenarios. Overall the results place potentially serious economic constraints on the applicability of the Nakamoto (2008) blockchain innovation. The anonymous, decentralized trust enabled by the blockchain, while ingenious, is expensive.
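The abstract's two conditions combine into a simple back-of-envelope calculation. Here's a sketch in Python; the variable names (p_block, c, a_blocks, v_attack) are my illustrative labels, not the paper's notation, and it assumes attack compute can be rented at the same flow cost as honest mining:

```python
# Back-of-envelope sketch of the two equilibrium conditions in Budish (2018).
# All names and numbers are illustrative, not taken from the paper.

def equilibrium_miners(p_block, c):
    """Zero-profit condition (1): total per-block mining cost N*c equals
    the per-block prize p_block, pinning down equilibrium compute N*."""
    return p_block / c

def min_block_reward(v_attack, a_blocks):
    """Incentive-compatibility condition (2): renting a majority of compute
    for a_blocks blocks costs about N* * c * a_blocks = p_block * a_blocks,
    which must exceed the one-off attack value v_attack.  Implication (3):
    p_block > v_attack / a_blocks, i.e., the recurring "flow" payment to
    miners scales with the "stock" value at risk."""
    return v_attack / a_blocks

# Illustrative numbers: a $1bn one-off attack prize, an attack needing
# majority compute for ~100 blocks.
p_min = min_block_reward(v_attack=1e9, a_blocks=100)
print(p_min)  # 10000000.0 -- $10m per block, paid forever, just to deter it
```

The point of the sketch is the scaling: deterrence requires flow payments proportional to the stock value an attacker could capture, which is why the paper calls the resulting trust expensive.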
Sunday, October 4, 2009
Course allocation by Budish, updated
The Combinatorial Assignment Problem: Approximate Competitive Equilibrium From Equal Incomes,
and Eric himself is now at Chicago's Booth School of Business.
Here is my previous post on the first version of that paper (which was Eric's job market paper).
Sunday, October 31, 2021
Market Design by Nikhil Agarwal & Eric Budish (forthcoming in the Handbook of Industrial Organization)
Here's an NBER working paper that will appear in the Handbook of Industrial Organization:
Market Design by Nikhil Agarwal & Eric Budish
NBER WORKING PAPER 29367, DOI 10.3386/w29367, October 2021
Abstract: "This Handbook chapter seeks to introduce students and researchers of industrial organization (IO) to the field of market design. We emphasize two important points of connection between the IO and market design fields: a focus on market failures—both understanding sources of market failure and analyzing how to fix them—and an appreciation of institutional detail.
"Section II reviews theory, focusing on introducing the theory of matching and assignment mechanisms to a broad audience. It introduces a novel “taxonomy” of market design problems, covers the key mechanisms and their properties, and emphasizes several points of connection to traditional economic theory involving prices and competitive equilibrium.
"Section III reviews structural empirical methods that build on this theory. We describe how to estimate a workhorse random utility model under various data environments, ranging from data on reported preference data such as rank-order lists to data only on observed matches. These methods enable a quantification of trade-offs in designing markets and the effects of new market designs.
"Section IV discusses a wide variety of applications. We organize this discussion into three broad aims of market design research: (i) diagnosing market failures; (ii) evaluating and comparing various market designs; (iii) proposing new, improved designs. A point of emphasis is that theoretical and empirical analysis have been highly complementary in this research"
Here's the first paragraph:
"Textbook models envision markets as abstract institutions that clear supply and demand. Real markets have specific designs and market clearing rules. These features affect market participants and their allocations in various ways – they determine the actions an agent can take, the incentives for taking those actions, the information environment, the interactions between agents’ actions, and, ultimately, the final allocation. Well-designed markets have rules that coordinate and incentivize behavior in ways that lead to desirable outcomes. But it is not a given that all markets have good design. The Market Design field studies these rules in order to understand their implications, to identify potential market failures, and to remedy them by designing better institutions."
Sunday, December 17, 2017
High frequency ticket buying, by scalper bots
They talk to a bunch of good economists. Here's Eric Budish on the subject of why it's odd that tickets are "underpriced," in the sense that it's profitable for software bots to buy them up before the humans can get in on the act, and then resell them...
"BUDISH: That’s competition on — to an economist — a strange dimension: competition on speed rather than price; that’s the connection to my stuff on high-frequency trading. It’s competition, but it’s not a productive form of competition."
Read (or listen) to the whole thing at the link.
Saturday, April 14, 2018
Are financial markets too fast? A discussion of high speed trading (with Eric Budish)
"On this episode of The Big Question, Chicago Booth Review's Hal Weitzman talks with Chicago Booth professor of economics Eric Budish, Chicago Trading Company's Steve Crutchfield, and former Commodity Futures Trading Commission commissioner Sharon Bowen about how speed affects financial markets and what, if anything, we should do about it."
Eric points out that competition among exchanges has worked well in driving down trading fees, and poorly in selling access--"co-location"--since each exchange has a monopoly on selling speedy access to its data.
Wednesday, December 9, 2009
Course allocation at HBS
The latest example analyzes the way second year MBA courses at the Harvard Business School are assigned.
The Multi-unit Assignment Problem: Theory and Evidence from Course Allocation at Harvard by Eric Budish and Estelle Cantillon
Abstract: "This paper uses data consisting of agents' strategically reported preferences and their underlying true preferences to study strategic behavior in the course allocation mechanism used at Harvard Business School. We show that the mechanism is manipulable in theory, manipulated by students in practice, and that these manipulations cause meaningful welfare losses. However, we also find that ex-ante welfare is higher than under the Random Serial Dictatorship (RSD), which is the only known mechanism that is anonymous, strategyproof and ex-post efficient. We trace the poor ex-ante performance of RSD to a phenomenon, "callousness", specific to multi-unit assignment and unrelated to risk attitudes. We draw lessons for the design of multi-unit assignment mechanisms and for market design more broadly."
A related paper is Budish's proposal for an alternative mechanism: The Combinatorial Assignment Problem: Approximate Competitive Equilibrium from Equal Incomes.
(And here's my earlier post on an early version of that paper.)
Thursday, April 23, 2009
Course allocation, by Eric Budish
Eric Budish, who defended his Ph.D. dissertation at Harvard this week, has made a substantial, practical dent in the problem. His motivation comes from a detailed study, with Estelle Cantillon, of how classes are assigned to second year MBA students at the Harvard Business School, and how students approach this assignment problem strategically: Strategic Behavior in Multi-Unit Assignment Problems: Theory and Evidence from Course Allocations.
Largely motivated by what they learn about the good and not so good properties of the HBS mechanism, Eric then proposes a new mechanism: The Combinatorial Assignment Problem: Approximate Competitive Equilibrium from Equal Incomes. Eric's work, like market design in general, is eclectic. Among other things, he formulates new notions of what constitutes "fair" outcomes in cases hedged in by the impossibility results that abound when allocating indivisible goods.
Although allocating multiple indivisible items to each student makes the standard economic goals involving efficiency and incentives more difficult to achieve, it gives the designer somewhat more leeway to think about fairness, since although class places are indivisible, the package of classes that each student gets is not. And Eric’s investigations of existing course allocation institutions have convinced him that concerns about avoiding excessive ex-post unfairness are an important constraint on what kinds of mechanisms can be implemented in practice.
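The core idea of Eric's mechanism—give students approximately equal budgets of artificial money, then search for prices at which demand roughly clears—can be caricatured in a few lines. This is a toy sketch only: it assumes additive (not combinatorial) preferences and uses naive price adjustment, nothing like the paper's actual existence proof or computational procedure.

```python
# Toy caricature of the price-adjustment idea behind approximate CEEI.
# Assumptions (mine, not the paper's): additive student preferences,
# greedy demand, and simple tatonnement on prices.
import random

def demand(values, prices, budget, schedule_size):
    """One student's demand: greedily buy the best affordable courses."""
    chosen = []
    for c in sorted(values, key=values.get, reverse=True):
        if prices[c] <= budget and len(chosen) < schedule_size:
            chosen.append(c)
            budget -= prices[c]
    return chosen

def acei(students, capacity, schedule_size, steps=500, lr=0.05):
    """Search for prices at which demand approximately clears capacity."""
    courses = list(capacity)
    prices = {c: 1.0 for c in courses}
    rng = random.Random(0)
    # "Approximately equal" budgets: small random perturbations break ties.
    budgets = {s: 1.0 + 0.01 * rng.random() for s in students}
    for _ in range(steps):
        counts = {c: 0 for c in courses}
        for s, values in students.items():
            for c in demand(values, prices, budgets[s], schedule_size):
                counts[c] += 1
        # Raise prices of over-demanded courses, cut the rest.
        for c in courses:
            prices[c] = max(0.0, prices[c] + lr * (counts[c] - capacity[c]))
    return prices
```

The design intuition this is meant to illustrate: because budgets (unlike seats) are divisible, near-equal budgets deliver a strong form of ex-post fairness even though the goods themselves are indivisible.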
Eric's mechanism looks like it has legs, and may be ready for practical implementation in the not so distant future. Perhaps he'll get a chance to have more than the usual impact next year when he brings market design to U. Chicago's Booth School of Business (until recently Chicago GSB).
Welcome to the club, Eric.
Friday, April 1, 2011
Matching and Price Theory
Friday, May 6:
9:00 - 9:50 am The College Admissions Problem with a Continuum of Students
Eduardo Azevedo (Harvard University) and Jacob Leshno (Harvard University)
9:50 - 10:40 am Stability and Competitive Equilibrium in Trading Networks
John Hatfield (Stanford University), Scott Kominers (Harvard University), Michael Ostrovsky (Stanford University), Alexandru Nichifor (University of Maastricht) and Alexander Westkamp
(University of Bonn)
10:55 - 11:45 am Sorting and Factor Intensity: Production and Unemployment across
Skills, Jan Eeckhout (University of Pennsylvania) and Philipp Kircher (London School of Economics)
11:45 - 12:20 pm Discussion led by Eric Budish (University of Chicago)
12:20 - 1:50 pm Lunch
1:50 – 2:40 pm Hedonic Price Equilibria, Stable Matching, and Optimal Transport: Equivalence, Topology, and Uniqueness, Pierre-André Chiappori (New York University), Robert McCann (University of Toronto), and Lars Nesheim (University College London)
2:40 – 3:30 pm Distinguishing the Payoffs of Upstream and Downstream Firms in Matching Games, Jeremy Fox (University of Chicago)
3:30 – 4:05 pm Discussion led by Ariel Pakes (Harvard University)
4:20 – 5:10 pm Strategyproofness and Manipulability in Large Economies, Eduardo Azevedo (Harvard University) and Eric Budish (University of Chicago)
5:10 – 5:45 pm Discussion led by John Hatfield (Stanford University)
Saturday, May 7:
9:00 - 9:50 am Decentralized Matching with Aligned Preferences, Muriel Niederle (Stanford University) and Leeat Yariv (California Institute of Technology)
9:50 - 10:40 am TBA, Ilya Segal (Stanford University)
10:40 – 11:15 am Discussion led by Alvin Roth (Harvard University)
11:30 - 12:30 pm Panel and Audience Discussion on Open Questions, Gary Becker (University of Chicago), James Heckman (University of Chicago), Paul Milgrom (Stanford University) and Alvin Roth (Harvard University)
Register online here
Saturday, July 17, 2021
The race to transact in high frequency trading by Aquilina, Budish, and O'Neill
High frequency traders are constantly involved in races to trade on existing bids and asks, or to cancel those bids and asks as they become stale. Here's an NBER working paper that lets us look in on the action.
Quantifying the High-Frequency Trading "Arms Race" by Matteo Aquilina, Eric Budish & Peter O'Neill NBER WORKING PAPER 29011 DOI 10.3386/w29011 July 2021
Abstract: "We use stock exchange message data to quantify the negative aspect of high-frequency trading, known as “latency arbitrage.” The key difference between message data and widely-familiar limit order book data is that message data contain attempts to trade or cancel that fail. This allows the researcher to observe both winners and losers in a race, whereas in limit order book data you cannot see the losers, so you cannot directly see the races. We find that latency-arbitrage races are very frequent (about one per minute per symbol for FTSE 100 stocks), extremely fast (the modal race lasts 5-10 millionths of a second), and account for a remarkably large portion of overall trading volume (about 20%). Race participation is concentrated, with the top 6 firms accounting for over 80% of all race wins and losses. The average race is worth just a small amount (about half a price tick), but because of the large volumes the stakes add up. Our main estimates suggest that races constitute roughly one-third of price impact and the effective spread (key microstructure measures of the cost of liquidity), that latency arbitrage imposes a roughly 0.5 basis point tax on trading, that market designs that eliminate latency arbitrage would reduce the market's cost of liquidity by 17%, and that the total sums at stake are on the order of $5 billion per year in global equity markets alone."
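The abstract's headline numbers fit together arithmetically. Here's a sketch, assuming for illustration roughly $100 trillion of annual global equity volume (my round number, chosen to match the paper's order of magnitude, not a figure from the paper):

```python
# Back-of-envelope arithmetic with the headline estimate quoted in the
# abstract (a ~0.5 basis point "tax" on trading).  Illustrative only.
BP = 1e-4  # one basis point = 0.01%

def latency_arb_tax(volume_usd, tax_bp=0.5):
    """Implied annual transfer from liquidity takers to race winners."""
    return volume_usd * tax_bp * BP

# On an assumed $100tn of annual volume, a 0.5bp tax comes to ~$5bn,
# consistent with the paper's global equity-market estimate.
print(latency_arb_tax(100e12))
```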
From the introduction:
"At the center of the controversy over speed is a phenomenon called “latency arbitrage”, also known as “sniping” or “picking off” stale quotes. In plain English, a latency arbitrage is an arbitrage opportunity that is sufficiently mechanical and obvious that capturing it is primarily a contest in speed. For example, if the price of the S&P 500 futures contract changes by a large-enough amount in Chicago, there is a race around the world to pick off stale quotes in every asset highly correlated to the S&P 500 index: S&P 500 exchange traded funds, other US equity index futures and ETFs, global equity index futures and ETFs, etc. Many other examples arise from other sets of highly correlated assets: treasury bonds of slightly different durations, or in the cash market versus the futures market; options and the underlying stock; ETFs and their largest component stocks; currency triangles; commodities at different delivery dates; etc. Perhaps the simplest example is if the exact same asset trades in many different venues. For example, in the US stock market, there are 16 different exchanges and 50+ alternative trading venues, all trading the same stocks—so if the price of a stock changes by enough on one venue, there is a race to pick off stale quotes on all the others. These races around the world involve microwave links between market centers, trans-oceanic fiber-optic cables, putting trading algorithms onto hardware as opposed to software, co-location rights and proprietary data feeds from exchanges, real estate adjacent to and even on the rooftops of exchanges, and, perhaps most importantly, high-quality human capital. Just a decade ago, the speed race was commonly measured in milliseconds (thousandths of a second); it is now measured in microseconds (millionths) and even nanoseconds (billionths)."
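The measurement idea—failed take/cancel messages reveal the race losers that limit order book data hide—can be caricatured as a grouping pass over message data. The paper's actual race definition involves more filters (price levels, participant counts, window sensitivity checks); the sketch below, with my own field names, is illustrative only.

```python
# Cartoon of race detection in exchange message data: a "race" is a burst
# of take/cancel messages on the same symbol, microseconds apart, with at
# least one success and at least one failure.  Simplified relative to the
# paper's definition.
from dataclasses import dataclass

@dataclass
class Message:
    t_us: float   # exchange timestamp, microseconds
    symbol: str
    ok: bool      # did the take/cancel attempt succeed?

def find_races(messages, window_us=500):
    """Group each symbol's messages into bursts no more than window_us
    apart; keep bursts containing both a winner and a loser."""
    races = []
    by_symbol = {}
    for m in sorted(messages, key=lambda m: (m.symbol, m.t_us)):
        burst = by_symbol.setdefault(m.symbol, [])
        if burst and m.t_us - burst[-1].t_us > window_us:
            if any(x.ok for x in burst) and any(not x.ok for x in burst):
                races.append(list(burst))
            burst.clear()
        burst.append(m)
    for burst in by_symbol.values():  # flush the final burst per symbol
        if any(x.ok for x in burst) and any(not x.ok for x in burst):
            races.append(burst)
    return races
```

The key design point the sketch preserves: without the failed attempts, the second condition can never fire, which is exactly why limit order book data alone cannot see the races.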