
Monday, May 28, 2018

Protecting and Preserving Competition in Matching Markets--Antitrust and the Medical Match (video)

Here's a video of the talk I gave in Chicago in April, at an antitrust conference at the Stigler Center. I used as my main example the antitrust lawsuit that was brought in 2002 against the National Resident Matching Program (NRMP) (and all the hospitals that participated in it). I end with some more general discussion of computerized marketplaces. (My talk runs a bit less than an hour, and the video continues for another ten or fifteen minutes of Q&A.)




I intended to begin the talk with a video satire of the medical match, but wasn't able to show it due to technical problems, so I just spoke about it. But here it is for those of you who missed the Harry Potter version of the medical match:

Harry Potter and the Resident Match | ZDoggMD.com

Tuesday, November 8, 2011

Market design in a future of trusted smart markets: paper by Eduardo Azevedo and Eric Budish

The recent NBER conference on market design had a number of remarkable papers. One of them, by Eduardo Azevedo and Eric Budish, seems to me to offer a tantalizing glimpse of what market design might look like at some time in the not-too-distant future, when there is a high level of trust in computerized "smart" markets in which a proxy agent reliably acts on your behalf.

At least that's one way to interpret their paper "Strategyproofness in the Large as a Desideratum for Market Design." Among other things, it considers a kind of extension of the revelation principle that would allow a non-strategy-proof mechanism with an attractive Bayesian Nash equilibrium to be converted into a direct mechanism (i.e. one in which agents are asked to reveal their preferences) that is "strategy-proof in the large" (SP-L): approximately strategy-proof in large markets, and strategy-proof in the limit. (Another big contribution of their paper is making precise the idea of strategy-proofness in the large, which, they argue, may be a desirable criterion when no strategy-proof mechanism exists, or when markets are large. The idea is that mechanisms are SP-L but not strategy-proof when they allow players' reports to influence prices in ways that vanish in the limit, while mechanisms that aren't even SP-L allow more fundamental manipulations, e.g. they don't give you what you want even when you're a price taker.)

About this revelation-principle-type construction, they say:
"The construction works as follows. Agents report their types to our mechanism. Our mechanism then calculates the empirical distribution of these types, and then “activates” the Bayes-Nash equilibrium strategy of the original mechanism associated with this empirical. If agents all report their preferences truthfully, this construction will yield the same outcome as the original mechanism in the large-market limit, because the empirical distribution of reported types converges to the underlying true distribution. The subtle part of our construction is what happens if some agents systematically misreport their preferences, e.g., they make mistakes. Suppose the true prior is u , but for some reason the agents other than agent i systematically misreport their preferences, according to distribution m. In a finite market, with sampling error, the empirical distribution of the other agents’ reports is say m^ . As the market grows large, m^ is converging to m, and also i’s influence on the empirical distribution is vanishing. Thus in the limit, our construction will activate the Bayes-Nash equilibrium strategy associated with m. This is the “wrong” prior – but agent i does not care. From his perspective, the other agents are reporting according to m, and then playing the Bayes-Nash equilibrium strategy associated with m, so i too wishes to play the Bayes-Nash equilibrium strategy associated with m. This is exactly what our constructed mechanism does on i’s behalf in the limit. Hence, no matter how the other agents play, i wishes to report his own type truthfully in the limit, i.e., the constructed mechanism is SP-L."

The attraction of such a mechanism of course is that it doesn't depend on the agents reaching a Bayes-Nash equilibrium, which is the problem with mechanisms whose desirability is based on the attractiveness of their equilibrium behavior. Equilibrium may be hard to reach, and such mechanisms may perform badly in practice as a result. But coordination on an equilibrium is much easier when truth telling is a dominant strategy.

The reason this seems like a future mechanism rather than one that is promising for practical application right now is that it is pretty opaque, the opposite of transparent. I can't yet imagine going to e.g. a school district and proposing such a mechanism, which you'd have to sell to parents by saying "tell us your true preferences, and we'll act on your behalf to get you your highest-ranked school choice by playing the equilibrium that will arise when we see the choices of all families." The problem is not just that the equilibrium might be hard to describe in the abstract, but that this difficulty is compounded by the fact that assignments will depend in this hard-to-describe way on an unknown distribution of preferences.

But what might be a tough sell today will be a much easier sell when everyone is accustomed to having their data automatically backed up in the cloud by software that optimizes performance based on things only it observes, and to having their electricity consumption mediated by smart meters that run the air-conditioner in a way that reduces costs based on spot prices, etc.

So...engineering is like that. Just as bridges have gotten longer and stronger over time, there's no reason to think that the market designs of today will be the ones we build in the future. The prospect of confidently putting yourself in the hands of a non-transparent automated expert that you may not understand, a "Martian system" so to speak, may be agreeable to the general public of the future.

(The phrase "Martian system" is one I recall from the early days of expert systems and decision aids. The idea was that you were likely to trust an automated adviser more if you could understand its reasoning, and so judge when its advice was likely to be correct. If you got a non-intuitive answer from an opaque oracle, a "Martian system" instead of an expert system, you might worry that the answer was wrong because of bad inputs or bad construction, and so ignore it. A transparent system, by contrast, might convince you that a non-intuitive answer was correct, because you'd be more confident of being able to tell when it wasn't. But if the Martian adviser became so reliable that you could be sure it would not produce an incorrect answer, its opacity might become less of a drawback, since you could rely on it anyway.)

By the way, did I mention that Eduardo is on the job market this year? He's a talented theorist with broad interests who has already made important contributions to matching theory, among other things. Here are his papers. You could hire him.

Saturday, April 30, 2011

Designing "hidden markets"--Sven Seuken

Yesterday Sven Seuken defended his dissertation, which is at the interface of CS and economics. In particular, he is interested in designing both computerized marketplaces and the user interfaces through which participants will interact with them.



The essay that was his job market paper concerns a practical business idea: a centrally administered marketplace for peer-to-peer computer backup services. These services have to be consumed in bundles (e.g. bandwidth and memory are complements), but may be offered in different proportions by different users, at market prices posted through a user interface that makes it easy for a consumer to see what backup he requires and what combinations of resources he can offer to the system to pay for his own services. A customer for the backup service must offer backup services to other customers. The centralized server keeps track of what resources are being used, and sets relative prices for different resources that are “hidden” in the sense that they are revealed not as numerical prices, but as tradeoffs between the backup capacity a consumer demands and the various ways he can supply the system with resources from his own computer (upload and download bandwidth, memory, and hours a day connected to the web).

That is, this is a market with complements, in which both bids and asks must be for packages of services, but in which customers can participate using a simple interface.
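As a rough illustration of how numerical prices can stay hidden behind a tradeoff interface, here is a toy sketch. The resource names, the point prices, and the linear pricing rule are all my assumptions for illustration; Seuken's actual design is not specified at this level of detail here.

```python
# Hypothetical internal prices per unit of each resource, in abstract
# "points" -- illustrative numbers only, not from Seuken's design.
PRICES = {
    "upload_mbps": 4.0,
    "download_mbps": 2.0,
    "memory_gb": 1.0,
    "hours_online": 0.5,
}
BACKUP_PRICE_PER_GB = 4.0  # assumed cost, in points, of 1 GB of backup

def backup_capacity(offer: dict) -> float:
    """Translate a user's offered resource bundle into the backup capacity
    (in GB) it pays for. The user sees only this tradeoff -- bundle in,
    capacity out -- never the per-resource prices themselves."""
    earned = sum(PRICES[r] * qty for r, qty in offer.items())
    return earned / BACKUP_PRICE_PER_GB

# Example: what does offering 2 Mbps upload, 50 GB of memory, and 12
# hours a day online buy me?
print(backup_capacity({"upload_mbps": 2.0, "memory_gb": 50.0,
                       "hours_online": 12.0}))  # -> 16.0 GB
```

The design point is that the server can adjust the internal prices to balance supply and demand of each resource, while the user only ever sees how much backup a given bundle buys.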

Market design itself is becoming a market with complementarities between economists and computer scientists. Sven may join his main advisor, David Parkes, in internalizing many of these complementarities himself. (The other members of his committee were Eric Horvitz, Yiling Chen, and me.) Since he is going to Zurich, he may also have the opportunity to join forces with Jacob Goeree and solidify a real center of market design there.


Welcome to the club, Sven.

Friday, September 17, 2010

Cleaning up the scramble for medical residents with SOAP

Changes are coming in the scramble that follows the medical resident match run by the National Resident Matching Program (NRMP). A new, semi-centralized, partially computerized Supplemental Offer and Acceptance Program (SOAP) has been announced, for implementation in March 2012.

If I understand the proposal correctly, residency programs with unmatched positions will be able to submit preference lists of match participants who ended up unmatched, and these preferences will be used to make exploding offers, after which the preference lists and positions will be updated to take account of acceptances (e.g. candidates who accept a position will be removed from other programs' preference lists), and new exploding offers will be issued. Unlike in the main match (which these days uses the Roth-Peranson algorithm, but which you can think of as a student-proposing deferred acceptance algorithm), applicants will not submit preference lists, but will accept or reject offers as they come in (i.e. they cannot defer acceptances by holding their best offer until they see if any better offers arrive later).
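If that reading is right, the offer rounds might look schematically like the toy sketch below. This is my interpretation of the announcement, not NRMP's actual code; the `accepts` function is a hypothetical stand-in for each applicant's immediate response to an exploding offer.

```python
def soap_rounds(programs, accepts):
    """Toy sketch of SOAP-style offer rounds (illustrative only).

    programs: dict mapping program name -> [open_slots, preference list
        of applicants, best first].
    accepts: function (applicant, program) -> bool, standing in for the
        applicant's immediate accept-or-reject decision on an exploding
        offer (applicants submit no preference lists of their own).
    """
    matched = {}                 # applicant -> accepted program
    made_offer = True
    while made_offer:            # repeat rounds until no offers are made
        made_offer = False
        for prog, state in programs.items():
            slots, prefs = state
            # Offer each still-open slot to the highest-ranked remaining
            # candidate who has not already accepted elsewhere.
            while slots > 0 and prefs:
                applicant = prefs.pop(0)
                if applicant in matched:
                    continue     # already placed elsewhere; skip
                made_offer = True
                if accepts(applicant, prog):
                    matched[applicant] = prog
                    slots -= 1
                # If the offer is rejected or expires, this program never
                # re-offers to the same applicant (already popped).
            state[0] = slots     # record remaining open positions
    return matched
```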

Part of the proposal is to integrate the scramble with the Electronic Residency Application Service (ERAS), which will make some kinds of automatic processing and regulation possible, while some of the proposed regulations may be more challenging to enforce. (Markets that use exploding offers at fixed times have often been subject to cheating of various sorts: here's a paper on the experience of the law clerk market. There are lots of differences in the market culture of doctors and lawyers that may result in different outcomes.)

Here are some of its proposed rules:

• Unmatched applicant and unfilled program information will be released simultaneously.

• There will be a “time out” period during which unmatched applicants can send applications but programs cannot make offers.

• Applicants and programs will be required to send and receive applications only through ERAS.

• NRMP-participating programs that fill positions during Match Week must do so only through the SOAP.

• New functionality will be added to the R3 System to allow programs to offer unfilled positions on the basis of preference lists submitted by the programs.

• Applicants must accept or reject their offer(s) within a specific timeframe; offers not accepted or rejected will expire.

• The R3 System will establish an electronic “handshake” when an applicant accepts a position.

• Positions will be deleted from the dynamic List of Unfilled Programs once an offer has been accepted.

• A program’s unfilled positions will be offered to applicants in order of preference until all positions are filled or the preference list has been exhausted; programs will be able to add applicants to the bottom of their preference lists throughout Match Week.

• The NRMP Match Participation Agreement will be expanded to include Match Week and SOAP, and sanctions will be imposed for improper behavior.
...
Eligible NRMP applicants:


• Must be able to enter GME on July 1 in the year of the Match

• Will be able to apply only to unfilled Match-participating programs during Match Week

◦ Access to the List of Unfilled Programs will be restricted by match status (preliminary or advanced)

◦ Must use ERAS and will be able to select only unfilled Match-participating programs

◦ Cannot use phone, fax, email, or other methods

◦ Cannot have another individual/entity contact programs on applicant’s behalf

◦ Will be able to accept positions only through SOAP during Match Week

• Can apply to non-Match-participating programs after Match Week

Ineligible NRMP applicants:

• Cannot participate in SOAP

◦ Cannot apply to Match-participating programs using ERAS, phone, fax, email, or other methods

◦ Cannot have another individual/entity contact Match-participating programs on applicant’s behalf

• Can apply to non-Match-participating programs during Match Week

◦ Can use ERAS to select non-Match-participating programs

◦ Can use phone, fax, email, or other methods

• Can apply to Match-participating programs after Match Week

Unfilled Programs:

• Must accept applications only through ERAS during Match Week

◦ Cannot use phone, fax, email, or personal contacts

• Must fill positions using SOAP during Match Week

◦ Cannot offer positions to ineligible applicants during Match Week

◦ Cannot make offers outside SOAP during Match Week

◦ Are not required to fill positions during Match Week

• Can add applicants to bottom of preference list

If an applicant rejects an offer or allows an offer to expire, no further offers will be made to that applicant by the same program.

Once an applicant accepts an offer, the applicant will not be able to send additional applications via ERAS.

Once a program has filled all of its positions through SOAP, applicants will be unable to send applications to that program via ERAS.

Offers extended by programs and accepted by applicants during the Match Week Supplemental Offer and Acceptance Program (SOAP) will create a binding commitment. Failure to honor that commitment or failure to adhere to SOAP policies will be a violation of the Match Participation Agreement.

Monday, September 13, 2010

NSF "Grand Challenge" white paper on market design

The National Science Foundation has asked for two-thousand-word 'white papers' on challenges worth exploring over the next decade.

Here's mine, on Market Design.

Abstract

In the past fifteen years, the emerging field of Market Design has solved important practical problems, and clarified both what we know and what we don’t yet know about how markets work. The challenge is to understand complex markets well enough to fix them when they’re broken, and implement new markets and market-like mechanisms when needed.

Among markets that economists have helped design are multi-unit auctions for complementary goods such as spectrum licenses; computerized clearinghouses such as the National Resident Matching Program, through which most American doctors get their first jobs; decentralized labor markets such as those for more advanced medical positions and for academic positions; school choice systems; and kidney exchange, which allows patients with incompatible living donors to exchange donor kidneys with other incompatible patient-donor pairs.

These markets differ from markets for simple commodities, in which, once prices have been established, everyone can choose whatever they can afford. Most of these markets are matching markets, in which you can’t just choose what you want, you also have to be chosen. One of the scientific challenges is to learn more about the workings of complex matching markets, such as labor markets for professionals, college admissions, and marriage.

Friday, May 21, 2010

Market orders, programmed trading and loss of thickness

I haven't yet read a convincing account of the one-day stock market crash and rebound on May 6, but here's an early (May 9, NY Times) story that makes a case that a lot of conventional market tools could have interacted to produce a bad outcome: Thursday’s Stock Free Fall May Prompt New Rules.

"The S.E.C., which oversees the nation’s equity markets, requires a suspension in trading only in the event of a broad market collapse, defined as a drop of at least 10 percent in the Dow Jones industrial average, which is based on the share prices of 30 large American companies.
Other countries, like Germany, impose similar circuit breakers on trading in shares of any individual company that has a similar drop, but the S.E.C. has never done so. A former S.E.C. official said the possibility had been discussed in recent years, but “I don’t think there was quite the urgency to deal with it.”
The S.E.C. and the Commodity Futures Trading Commission said in a joint statement on Friday that the issue now had their attention.
“We are scrutinizing the extent to which disparate trading conventions and rules across various markets may have contributed to the spike in volatility,” the statement said. “This is inconsistent with the effective functioning of our capital markets and we will make whatever structural or other changes are needed.”
Early this year, the S.E.C. also began a broad review of equity markets, including whether computerized trading is properly regulated.
The heads of several of the largest electronic exchanges said Friday that they would support industrywide rules for breaking free falls.
But there are other ideas to keeping computerized markets in check. Lawrence E. Harris, a finance professor at the University of Southern California, said regulators should simply require all sellers to specify a minimum price below which they do not want to complete the sale of their shares. Market orders, placed at the best available price, can be too risky in the fast-moving age of electronic trading.
On Thursday, some sellers placed orders that were not fulfilled until prices had plunged as low as a penny a share. If sellers had placed “limit orders” instead, those transactions would not have happened, Professor Harris said.
“Electronic exchanges in most other countries only accept limit orders,” said Professor Harris, a former S.E.C. chief economist. “Without any mechanisms to stop the market, we just had stocks falling through the ice.”
But Rafi Reguer, a spokesman for the electronic exchange Direct Edge, said retail investors liked market orders because limit orders could be rejected, forcing the seller to try again, in some cases at a lower price.
“Sometimes what people value is the certainty of execution,” Mr. Reguer said.
Experts also note that the value of limit orders can be subverted if investors routinely set unrealistically low limits, to avoid the inconvenience of having their orders rejected.
The BATS Exchange, a large electronic exchange based near Kansas City, rejects orders if the price would be more than 5 percent or 50 cents away from the last completed transaction.
During the market panic on Thursday, between 2:40 and 3 p.m., BATS prevented more than 47.6 million orders from executing — more than 95 percent of all orders during that period, according to Randy Williams, a spokesman for the company."
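For concreteness, a price-band filter of the kind attributed to BATS might look like the sketch below. The article leaves open exactly how the two thresholds combine, so treating the band as the wider of the two is my assumption.

```python
def accept_order(order_price: float, last_trade: float) -> bool:
    """Sketch of a BATS-style price-band check: reject an order priced
    more than 5 percent or 50 cents away from the last completed
    transaction. Assumption: the band is the *wider* of the two limits;
    the article does not say how the thresholds interact."""
    band = max(0.05 * last_trade, 0.50)
    return abs(order_price - last_trade) <= band

# A market sell against a vanished order book could otherwise print
# absurdly low, as some did on May 6:
print(accept_order(0.01, 40.00))   # -> False (far outside the $2.00 band)
print(accept_order(39.00, 40.00))  # -> True  (within the band)
```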

And here's a May 19 NY Times story on the SEC's new rules: New Rules Would Limit Trades in Volatile Market
"The Securities and Exchange Commission said Tuesday that it would temporarily institute circuit breakers on all the stocks in the Standard & Poor’s 500-stock index after the huge market gyrations on May 6.
The circuit breakers will pause trading in those stocks for five minutes if the price moves by 10 percent or more in a five-minute period. The trial run will begin after a 10-day comment period and will last through Dec. 10, the commission said. The circuit breakers will apply both to rising and falling stock prices.
But in a separate report, the S.E.C. and the Commodity Futures Trading Commission said that they had not been able to pinpoint the cause of the sharp market decline that shook investors and markets two weeks ago.
Generally, the agencies said, the drop was caused by traders stepping back from the market and refusing to buy or sell, in both the stock and futures markets. The government found that there was also a heavy reliance by investors on automated orders to sell at the market price once stock prices had declined by a certain amount. Further, there were different rules on different exchanges about when trading is automatically slowed or stopped."
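The single-stock circuit breaker described above is simple enough to sketch. Details such as the reference price are not specified in the article, so using the oldest trade in the rolling window is my assumption.

```python
from collections import deque

class CircuitBreaker:
    """Toy model of the rule described above: pause trading in a stock
    for five minutes if its price moves 10% or more within a five-minute
    window. Illustrative only; the reference price (here, the oldest
    trade still inside the window) is an assumption."""

    def __init__(self, pct=0.10, window_s=300.0, pause_s=300.0):
        self.pct, self.window_s, self.pause_s = pct, window_s, pause_s
        self.trades = deque()              # (timestamp, price) pairs
        self.halted_until = float("-inf")

    def on_trade(self, t: float, price: float) -> bool:
        """Record a trade; return True if trading is now halted."""
        if t < self.halted_until:
            return True                    # still inside the pause
        self.trades.append((t, price))
        while self.trades[0][0] < t - self.window_s:
            self.trades.popleft()          # drop trades outside the window
        ref = self.trades[0][1]            # assumed reference price
        if abs(price - ref) / ref >= self.pct:
            self.halted_until = t + self.pause_s
            self.trades.clear()            # start fresh after the pause
            return True
        return False
```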

Friday, January 15, 2010

Ken Rogoff on grandmasters and growth

My colleague Ken Rogoff writes about two subjects he knows as well as anyone: Grandmasters and Global Growth.

Drawing analogies from the great progress in chess playing computer programs, he conjectures that artificial intelligence will power a lot of economic growth in the coming decade. I'm a bit skeptical, if only because AI has been the coming thing for at least a few decades now (remember expert systems?).

Of course, this may be a quibble about the "I" in AI. Herb Simon used to complain that the goal posts were constantly being moved; whenever computers became good at something that used to be thought to require intelligence, then "intelligence" would just be redefined. In this regard, I've been impressed at the big strides that have recently been taken in cheap, fast computerized translation: it's still a long way from passing the Turing test, but you can now tell at least what a web page is about in a lot of languages.

We're already seeing a lot of growth in computer-assisted markets of all sorts, including many clearinghouses of the kind I often write about under the heading of market design, and some of the developments of this past year. So maybe Ken is right, and after we enjoy the economic growth, we can quibble about whether these computerized markets and products are really smart...

Update: the February 11, 2010 New York Review of Books has an article by Garry Kasparov, The Chess Master and the Computer, that includes a discussion of human-computer teams, i.e. of computer-assisted chess.