The recent NBER conference on market design had a number of remarkable papers. One of them, by Eduardo Azevedo and Eric Budish, seems to me to offer a tantalizing glimpse of what market design might look like at some point in the not-too-distant future, when there is a high level of trust in computerized "smart" markets in which a proxy agent reliably acts on your behalf.
At least that's one way to interpret their paper "Strategyproofness in the Large as a Desideratum for Market Design." Among other things, it considers a kind of extension of the revelation principle that would allow a non-strategy-proof mechanism with an attractive Bayesian Nash equilibrium to be converted into a direct mechanism (i.e. one in which agents are simply asked to reveal their preferences) that is "strategy-proof in the large" (SP-L), i.e. approximately strategy-proof in large markets, and strategy-proof in the limit. (Another big contribution of their paper is making precise the idea of strategy-proofness in the large, which, they argue, may be a desirable criterion when no strategy-proof mechanism exists, or when markets are large... The idea is that mechanisms are SP-L but not strategy-proof when they allow players' reports to influence prices only in ways that vanish in the limit, whereas mechanisms that aren't even SP-L allow more fundamental manipulations, e.g. they don't give you what you want even when you're a price taker.)
About this revelation-principle-type construction, they say:
"The construction works as follows. Agents report their types to our mechanism. Our mechanism then calculates the empirical distribution of these types, and then “activates” the Bayes-Nash equilibrium strategy of the original mechanism associated with this empirical. If agents all report their preferences truthfully, this construction will yield the same outcome as the original mechanism in the large-market limit, because the empirical distribution of reported types converges to the underlying true distribution. The subtle part of our construction is what happens if some agents systematically misreport their preferences, e.g., they make mistakes. Suppose the true prior is u , but for some reason the agents other than agent i systematically misreport their preferences, according to distribution m. In a finite market, with sampling error, the empirical distribution of the other agents’ reports is say m^ . As the market grows large, m^ is converging to m, and also i’s influence on the empirical distribution is vanishing. Thus in the limit, our construction will activate the Bayes-Nash equilibrium strategy associated with m. This is the “wrong” prior – but agent i does not care. From his perspective, the other agents are reporting according to m, and then playing the Bayes-Nash equilibrium strategy associated with m, so i too wishes to play the Bayes-Nash equilibrium strategy associated with m. This is exactly what our constructed mechanism does on i’s behalf in the limit. Hence, no matter how the other agents play, i wishes to report his own type truthfully in the limit, i.e., the constructed mechanism is SP-L."
The attraction of such a mechanism, of course, is that it doesn't depend on the agents themselves reaching a Bayes-Nash equilibrium, which is the problem with mechanisms whose desirability rests on the attractiveness of their equilibrium behavior. Equilibrium may be hard to reach, and such mechanisms may perform badly in practice as a result. But coordination on an equilibrium is much easier when truth-telling is a dominant strategy.
The reason this seems like a future mechanism rather than one that is promising for practical application right now is that it is pretty opaque, the opposite of transparent. I can't yet imagine going to, say, a school district and proposing such a mechanism, which you'd have to sell to parents by saying "tell us your true preferences, and we'll act on your behalf to get you your highest-ranked school choice by playing the equilibrium that will arise when we see the choices of all families." The problem is not just that the equilibrium might be hard to describe in the abstract, but that this difficulty is compounded by the fact that assignments will depend, in this hard-to-describe way, on an unknown distribution of preferences.
But what might be a tough sell today will be a much easier sell when everyone is accustomed to having their data automatically backed up in the cloud by software that optimizes performance based on things only it observes, and to having their electricity consumption mediated by smart meters that run the air-conditioner in a way that reduces costs based on spot prices, etc.
So...engineering is like that. Just as bridges have gotten longer and stronger over time, there's no reason to think that the market designs of today will be the ones we build in the future. The prospect of confidently putting yourself in the hands of a non-transparent automated expert that you may not understand, a "Martian system" so to speak, may be agreeable to the general public of the future.
(The phrase "Martian system" is one I recall from the early days of expert systems and decision aids. The idea was that you were likely to trust an automated adviser more if you could understand its reasoning, and so judge when its advice was likely to be correct. If you got a non-intuitive answer from an opaque oracle, a "martian system" instead of an expert system, you might worry that the answer was wrong because of wrong inputs or bad construction, and so ignore it. But a transparent system might convince you that a non-intuitive answer was correct, if you were more confident that when it wasn't correct you could tell. But if the martian adviser became so reliable that you could be sure he would not produce an incorrect answer, his opacity might become less of a drawback, since you could rely on him anyway.)
By the way, did I mention that Eduardo is on the job market this year? He's a talented theorist with broad interests who has already made important contributions to matching theory, among other things. Here are his papers. You could hire him.
Tuesday, November 8, 2011