Showing posts with label experiments. Show all posts

Sunday, August 8, 2021

Stanford SITE Seminar: Psychology and Economics, Aug 9-10

 


ORGANIZED BY
  • B. Douglas Bernheim, Stanford University
  • John Beshears, Harvard Business School
  • Vincent Crawford, University of Oxford and University of California, San Diego
  • David Laibson, Harvard University
  • Ulrike Malmendier, University of California, Berkeley

As in many previous years, this workshop brings together researchers working on issues at the intersection of psychology and economics. The segment will focus on evidence of and explanations for non-standard choice patterns, as well as the positive and normative implications of those patterns in a wide range of economic decision-making contexts, such as lifecycle consumption and savings, workplace productivity, health, and prosocial behavior. The presentations will frequently build upon insights from other disciplines, including psychology and sociology. Theoretical, empirical, and experimental studies will be included.

In This Session

Monday, August 9, 2021

AUG 9
9:00 AM - 9:30 AM

The Gender Gap in Self-Promotion

Presented by: Christine Exley (Harvard Business School)
Co-author(s): Judd B. Kessler (The Wharton School, University of Pennsylvania)

In applications, interviews, performance reviews, and many other environments, individuals subjectively describe their ability and performance to others. We run a series of experiments, involving over 4,000 participants from online labor markets and over 10,000 school-aged youth. We find a large gender gap in self-promotion: Women subjectively describe their ability and performance to potential employers less favorably than equally performing men. Even when all incentives to promote are removed, however, the gender gap remains. The gender gap in self-promotion is reflective of an underlying gender gap in how individuals subjectively evaluate their own performance. This underlying gender gap proves persistent and arises as early as the sixth grade.

AUG 9
9:30 AM - 10:00 AM

Partial Equilibrium Thinking in General Equilibrium

Presented by: Francesca Bastianello (Harvard University)
Co-author(s): Paul Fontanier (Harvard University)

We develop a theory of “Partial Equilibrium Thinking” (PET), whereby agents fail to understand the general equilibrium consequences of their actions when inferring information from endogenous outcomes. PET generates a two-way feedback effect between outcomes and beliefs, which can lead to arbitrarily large deviations from fundamentals. In financial markets, PET equilibrium outcomes exhibit over-reaction, excess volatility, high trading volume, and return predictability. We extend our model to allow for rationality of higher-order beliefs, general forms of model misspecification, and heterogeneous agents. We show that more sophisticated agents may contribute to greater departures from rationality. We also draw a distinction between models of misinference and models with biases in Bayesian updating, and study how these two departures from rationality interact. Misinference from mistakenly assuming the world is rational amplifies biases in Bayesian updating.

AUG 9
10:00 AM - 10:15 AM

Break

AUG 9
10:15 AM - 10:45 AM

Belief-Updating: Inference versus Extrapolation

Presented by: Tony Q. Fan (Stanford University)
Co-author(s): Yucheng Liang (Carnegie Mellon University) and Cameron Peng (London School of Economics and Political Science)

Survey forecasts of macroeconomic and financial variables show widespread overreaction to news, but laboratory experiments on belief updating typically find underinference from signals. We provide new experimental evidence to connect these two seemingly inconsistent phenomena. Building on a classic experimental paradigm, we study how people make inferences and revise forecasts in the same fully-specified information environment. Subjects underreact to signals when inferring about fundamental states (“underinference”), but overreact to signals when revising forecasts about future outcomes (“overextrapolation”). In the latter task, subjects appear to be using a mix of simplifying heuristics, such as focusing on the representative state (the state most consistent with the signal) and anchoring on the signal. Additional treatments link our results to the difficulty of recognizing the conceptual connection between inference and forecast revision problems.

AUG 9
10:45 AM - 11:15 AM

Learning in the Household

Presented by: Gautam Rao (Harvard University)
Co-author(s): John J. Conlon (Harvard University), Malavika Mani (Columbia University), Matthew Ridley (MIT), and Frank Schilbach (MIT)

This paper studies social learning and information pooling within the household using a lab experiment with 400 married couples in Chennai, India. Participants are asked to guess the fraction of red balls in an urn after each spouse privately receives draws from the urn and then has a chance to learn their spouse’s draws through a face-to-face discussion. Guesses are paid for accuracy and the payoff is split equally between the spouses, aligning their incentives. We find that husbands’ beliefs respond less than half as much to information that was collected by their wives, relative to ‘own’ information. This failure of learning is not due to communication frictions: when we directly share their wife’s information with husbands, they continue to under-weight it relative to their own draws. Wives do not display this behavior, and instead equally weight their own and their spouse’s information. In a follow-up experiment with pairs of strangers, individuals of both genders put more weight on their own information than on their partner’s. We conclude that people have a general tendency to under-weight others’ information relative to their own, and speculate that a norm of wives deferring to their husbands may play a countervailing role in our context.
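A Bayesian benchmark makes the "equal weighting" point concrete: once the draws are pooled, the posterior does not depend on who collected them. Here is a minimal sketch of that benchmark (the candidate urn compositions and the draws below are invented for illustration, not taken from the experiment):

```python
from fractions import Fraction

def posterior_red(draws, states=(Fraction(1, 3), Fraction(2, 3))):
    """Posterior mean of the red fraction, given i.i.d. draws ('R' or 'B')
    and a uniform prior over the candidate urn compositions."""
    likelihoods = []
    for p in states:
        like = Fraction(1)
        for d in draws:
            like *= p if d == "R" else 1 - p
        likelihoods.append(like)
    total = sum(likelihoods)
    return sum(p * l / total for p, l in zip(states, likelihoods))

own_draws = ["R", "R", "B"]
spouse_draws = ["R", "B", "B"]

# A Bayesian treats the pooled sample identically regardless of its source:
print(posterior_red(own_draws + spouse_draws))    # 1/2
print(posterior_red(spouse_draws + own_draws))    # 1/2 (order and source are irrelevant)
```

One common way to formalize under-weighting another person's information (not necessarily the authors' specification) is to discount the likelihood contribution of their draws, which pulls the posterior back toward the prior.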

AUG 9
11:15 AM - 11:30 AM

Break

AUG 9
11:30 AM - 12:00 PM

Does Saving Cause Borrowing?

Presented by: Michaela Pagel (Columbia GSB)
Co-author(s): Paolina Medina (Mays Business School of Texas A&M University)

We study whether or not nudging individuals to save more has the unintended consequence of additional borrowing in high-interest unsecured consumer credit. We analyze the effects of a large-scale experiment in which 3.1 million bank customers were nudged to save more via (bi-)weekly SMS and ATM messages. Using Machine Learning methods for causal inference, we build a score to sort individuals according to their predicted treatment effect. We then focus on the individuals in the top quartile of the distribution of predicted treatment effects who have a credit card and were paying interest at baseline. Relative to their control, this group increased their savings by 5.7% on average or 61.84 USD per month. At the same time, we can rule out increases in credit card interest larger than 1.25 USD with 95% statistical confidence. We thus estimate that for every additional dollar of savings, individuals incur less than 2 cents in additional borrowing cost. This is a direct test of the predictions of rational co-holding models, and is an important result to evaluate policy proposals to increase savings via nudges or more forceful measures.
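The closing figure is simple arithmetic on the two estimates quoted in the abstract; a quick check (nothing here beyond the division itself):

```python
max_extra_interest = 1.25   # 95% upper bound on extra monthly card interest, USD
extra_savings = 61.84       # average extra monthly savings in the targeted group, USD

# Additional borrowing cost per additional dollar saved
cost_per_dollar = max_extra_interest / extra_savings
print(round(cost_per_dollar, 4))  # 0.0202, i.e. roughly 2 cents per dollar
```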

AUG 9
12:00 PM - 12:30 PM

Dynamic Preference "Reversals" and Time Inconsistency

Presented by: Dmitry Taubinsky (UC Berkeley)
Co-author(s): Philipp Strack (Yale University)

We study identification of time preferences from data sets where an agent at time 0 makes an advance commitment, and later at time 1 can revise their choice. A common intuition, motivating many empirical studies, is that systematic reversals toward certain alternatives imply time inconsistency. We show that this intuition is generally incorrect in environments with random taste shocks. Roughly speaking, the only data that rejects time-consistent expected utility maximization is when a time-0 choice is revealed to be strictly dominated at time 1 with probability 1. This result applies to rich choice sets; to cases where the analyst observes the complete ranking of alternatives in every period and state of the world; to environments where it is natural to impose additional assumptions like concavity; and to cases where the analyst has access to supplementary cardinal information. However, we prove that there is a class of empirical designs that does produce robust point identification of the degree of time inconsistency: designs that estimate agents’ willingness to pay for different alternatives at both time 0 and time 1, and where the marginal utility of money can be assumed to not vary with agents’ time-1 preferences for the different alternatives.

Tuesday, August 10, 2021

AUG 10
9:00 AM - 9:30 AM

Safe Spaces: Shelters or Tribes?

Presented by: Jean Tirole (Toulouse School of Economics)

AUG 10
9:30 AM - 10:00 AM

A Model of Justification

Presented by: Sarah Ridout (Harvard University)

I model decision-making constrained by morality, rationality, or other virtues. In addition to a primary preference over outcomes, the decision maker (DM) is characterized by a set of preferences that he considers justifiable. In each choice setting, he maximizes his primary preference over the subset of alternatives that maximize at least one of the justifiable preferences. The justification model unites a broad class of empirical work on distributional preferences, charitable donations, prejudice/discrimination, and corruption/bribery. I provide full behavioral characterizations of several variants of the justification model as well as practical tools for identifying primary preferences and justifications from choice behavior. I show that identification is partial in general, but full identification can be achieved by including lotteries in the domain and allowing for heterogeneity in both primary preferences and justifications. Since the heterogeneous model uses between-subject data, it is robust to consistency motives that may arise in within-subject experiments. I extend the heterogeneous model to information choice and show that it accounts for observed patterns of information demand and avoidance on ethical domains.

AUG 10
10:00 AM - 10:15 AM

Break

AUG 10
10:15 AM - 10:45 AM

How Flexible is that Functional Form? Measuring the Restrictiveness of Theories

Presented by: Annie Liang (Northwestern University)
Co-author(s): Drew Fudenberg (MIT) and Wayne Gao (University of Pennsylvania)

We propose a new way to quantify the restrictiveness of an economic model, based on how well the model fits simulated, hypothetical data sets. The data sets are drawn at random from a distribution that satisfies some application-dependent content restrictions (such as that people prefer more money to less). Models that can fit almost all hypothetical data well are not restrictive. To illustrate our approach, we evaluate the restrictiveness of popular behavioral models in two experimental settings—certainty equivalents and initial play—and explain how restrictiveness reveals new insights about each of the models.

AUG 10
10:45 AM - 11:15 AM

Choice and Complexity

Presented by: Jörg L. Spenkuch (Northwestern University)
Co-author(s): Yuval Salant (Northwestern University)

We study two dimensions of complexity that may affect individual decision-making. The first one is object complexity, which corresponds to the difficulty of evaluating any given object in the choice set. The second dimension is composition complexity, which refers to the difficulty of finding the best among similar alternatives. We develop a satisficing-with-evaluation-errors model that incorporates both dimensions and delivers sharp empirical predictions about their effect on choice behavior. We test these predictions in a novel data set with information on hundreds of millions of decisions in chess endgames. Chess endgames admit an objective measure of choice quality and, most importantly, have ample variation in object and composition complexity. Consistent with the theory, we document that even highly experienced decision makers are significantly more likely to make suboptimal choices as complexity increases along either dimension. Our analysis, therefore, helps to shed some of the first light on the role of complexity in decision-making outside of the laboratory.

AUG 10
11:15 AM - 11:30 AM

Break

AUG 10
11:30 AM - 12:00 PM

Incentive Complexity, Bounded Rationality, and Effort Provision

Presented by: David Huffman (University of Pittsburgh)
Co-author(s): Johannes Abeler (University of Oxford) and Collin Raymond (Purdue University)

This paper shows that dynamic incentives embedded in an organization’s workplace incentive scheme can be a shrouded attribute, due to contract complexity and worker bounded rationality. This is true in field experiments within the firm, and in complementary online experiments with real effort tasks. Structural estimates indicate that rational agents who fully understand the incentive scheme would behave significantly differently from what we observe. A response to dynamic incentives does emerge when we reduce complexity or look at workers with higher cognitive ability. The results illustrate the potential value of complexity to organizations, they demonstrate that complex incentive contracts may allow firms to achieve better than second-best, they identify specific features of contracts that can influence the effectiveness of incentives through the channel of complexity, and they imply heterogeneous effects of incentives depending on worker cognitive ability.

AUG 10
12:00 PM - 12:30 PM

The Negative Consequences of Loss-Framed Performance Incentives

Presented by: Alex Rees-Jones (The Wharton School, University of Pennsylvania)
Co-author(s): Lamar Pierce (Olin Business School, Washington University in St Louis) and Charlotte Blank (Maritz)

Behavioral economists have proposed that incentive contracts result in higher productivity when bonuses are "loss framed": prepaid, then clawed back if targets are unmet. We test this claim in a large-scale field experiment. Holding financial incentives fixed, we randomized the pre- or postpayment of sales bonuses at 294 car dealerships. Prepayment was estimated to reduce sales by 5%, generating a revenue loss of $45 million over 4 months. We document, both empirically and theoretically, that negative effects of loss framing can arise due to an increase in incentives for "gaming" behaviors. Based on these claims, we reassess the common wisdom regarding the desirability of loss framing.

Friday, July 30, 2021

The Art of Experimental Economics: Twenty Top Papers Reviewed, edited by Gary Charness and Mark Pingle

 Here's a forthcoming book that provides a unique and entertaining review of twenty influential papers in experimental economics. Katie Coffman and I wrote the review of the classic 2007 paper on gender and competition by Niederle and Vesterlund.

The Art of Experimental Economics: Twenty Top Papers Reviewed  Edited by Gary Charness and Mark Pingle  Forthcoming from Routledge

Item will ship after August 27, 2021  ISBN 9780367894306

Chapter 1: Introducing 20 Top Papers and their Reviewers  Gary Charness and Mark Pingle

Chapter 2: An Experimental Study of Competitive Market Behavior (by Vernon L. Smith)  Reviewed by Charles A. Holt

Chapter 3: The Strategy Method as an Instrument for the Exploration of Limited Rationality in Oligopoly Game Behavior (by Reinhard Selten)  Reviewed by Claudia Keser and Hartmut Kliemt

Chapter 4: An Experimental Analysis of Ultimatum Bargaining (by Werner Güth, Rolf Schmittberger and Bernd Schwarze)  Reviewed by Brit Grosskopf and Rosemarie Nagel

Chapter 5: The Winner’s Curse and Public Information in Common Value Auctions (by John H. Kagel and Dan Levin)  Reviewed by Gary Charness

Chapter 6: Group Size Effects in Public Goods Provision: The Voluntary Contributions Mechanism (by R. Mark Isaac and James M. Walker)  Reviewed by James Andreoni

Chapter 7:  Rational Expectations and the Aggregation of Diverse Information in Laboratory Security Markets (by Charles R. Plott and Shyam Sunder)    Reviewed by R. Mark Isaac

Chapter 8: Experimental Tests of the Endowment Effect and the Coase Theorem (by Daniel Kahneman, Jack L. Knetsch, Richard H. Thaler)   Reviewed by John A. List

Chapter 9: Bargaining and Market Behavior in Jerusalem, Ljubljana, Pittsburgh and Tokyo: An Experimental Study (by Alvin E. Roth, Vesna Prasnikar, Masahiro Okuno-Fujiwara and Shmuel Zamir) Reviewed by Armin Falk

Chapter 10: Unraveling in Guessing Games: An Experimental Study (by Rosemarie Nagel)  Reviewed by John H. Kagel and Antonio Penta

Chapter 11: Trust, Reciprocity, and Social History (by Joyce Berg, John Dickhaut, and Kevin McCabe) Reviewed by Vernon L. Smith

Chapter 12: Cooperation and Punishment in Public Goods Experiments (by Ernst Fehr and Simon Gachter)  Reviewed by Yan Chen

Chapter 13: A Fine is a Price (by Uri Gneezy and Aldo Rustichini)  Reviewed by Alex Imas

Chapter 14: Giving according to GARP: An Experimental Test of the Consistency of Preferences for Altruism (by James Andreoni and John Miller)  Reviewed by Catherine Eckel

Chapter 15: Risk Aversion and Incentive Effects (by Charles Holt and Susan Laury)  Reviewed by Kevin McCabe

Chapter 16: Does market experience eliminate market anomalies? (by John A. List) Reviewed by Matthias Sutter

Chapter 17: Promises and Partnership (by Gary Charness and Martin Dufwenberg)  Reviewed by Urs Fischbacher and Franziska Föllmi-Heusi

Chapter 18: The Hidden Costs of Control (by Armin Falk and Michael Kosfeld)  Reviewed by Laura Razzolini and Rachel Croson

Chapter 19: Do Women Shy Away from Competition? Do Men Compete Too Much? (by Muriel Niederle and Lise Vesterlund)  Reviewed by Katherine B. Coffman and Alvin E. Roth 

Chapter 20: Group Identity and Social Preferences (by Yan Chen and Sherry X. Li)  Reviewed by Marie Claire Villeval 

Chapter 21: Lies in Disguise—An Experimental Study on Cheating (by Urs Fischbacher and Franziska Föllmi-Heusi)  Reviewed by Uri Gneezy and Marta Serra-Garcia

***************

Here's my Foreword to the book:

 Twenty carefully chosen papers in experimental economics, reviewed and put in context by veteran experimenters, provide an excellent, close-up introduction to the richness and diversity of the field, and where it is coming from. These twenty papers appeared over a period of half a century, from 1962 to 2013, during which economic experiments went from being quite rare to taking their place among the standard tools of economics.

When John Kagel and I edited the Handbook of Experimental Economics, volumes 1 and 2 (1995 and 2016), we encouraged the chapter authors not to try to tell readers how to do good experiments, but to show them. And so it is with these papers: there are lots of ways to do good experiments, and here is a collection of twenty that have been influential. The reviews make clear that a successful, influential experiment is part of a scientific conversation that began well before the experiment was designed and conducted, and continued well after it was published and replicated. These conversations aren’t only among experimenters, nor are they only among economists: experiments add to scientific conversations of all sorts, answering some questions and raising others, often questions that couldn’t even be posed with equal precision in naturally occurring environments.   

Reader, beware. After reading this volume, you will want to read more, and, your curiosity aroused, may find yourself on the slippery slope of designing and conducting your own experiments.

Alvin E. Roth, Stanford University, December 2020.

Bibliography:

Kagel, J.H. and A.E. Roth (editors) Handbook of Experimental Economics, Princeton University Press, 1995.

Kagel, J.H. and A.E. Roth (editors) Handbook of Experimental Economics, Volume 2, Princeton University Press, 2016.





Sunday, July 18, 2021

Experiments touching on market design in the July AER

 The July AER has a number of experiments that speak to market design:

 Online,

How to Avoid Black Markets for Appointments with Online Booking Systems  By Rustamdjan Hakimov, C.-Philipp Heller, Dorothea Kübler, and Morimitsu Kurino

Abstract: Allocating appointment slots is presented as a new application for market design. Online booking systems are commonly used by public authorities to allocate appointments for visa interviews, driver’s licenses, passport renewals, etc. We document that black markets for appointments have developed in many parts of the world. Scalpers book the appointments that are offered for free and sell the slots to appointment seekers. We model the existing first-come-first-served booking system and propose an alternative batch system. The batch system collects applications for slots over a certain time period and then randomly allocates slots to applicants. The theory predicts and lab experiments confirm that scalpers profitably book and sell slots under the current system with sufficiently high demand, but that they are not active in the proposed batch system. We discuss practical issues for the implementation of the batch system and its applicability to other markets with scalping.
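The contrast between the two booking systems can be caricatured in a few lines of code. This is an illustrative sketch under invented parameters (two bot-equipped scalpers against eight ordinary seekers), not the authors' model:

```python
import random

def fcfs_allocate(applicants, n_slots):
    """First-come-first-served: the fastest to react get the slots."""
    ordered = sorted(applicants, key=lambda a: a["reaction_time"])
    return [a["name"] for a in ordered[:n_slots]]

def batch_allocate(applicants, n_slots, rng):
    """Batch system: pool applications over a window, then draw winners at random."""
    winners = rng.sample(applicants, min(n_slots, len(applicants)))
    return [a["name"] for a in winners]

rng = random.Random(0)
# Scalpers react almost instantly; ordinary seekers take a second or more.
applicants = [{"name": f"scalper{i}", "reaction_time": 0.01 * i} for i in range(2)]
applicants += [{"name": f"seeker{i}", "reaction_time": 1 + rng.random()} for i in range(8)]

print(fcfs_allocate(applicants, 2))        # ['scalper0', 'scalper1']
print(batch_allocate(applicants, 2, rng))  # a random pair; speed gives no edge
```

Under the batch rule, reacting first buys nothing, which is the force behind the paper's prediction that scalpers stay out of the market.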

***********

In rural Malawi,

Pay Me Later: Savings Constraints and the Demand for Deferred Payments  By Lasse Brune, Eric Chyn, and Jason Kerwin

Abstract: We study a simple savings scheme that allows workers to defer receipt of part of their wages for three months at zero interest. The scheme significantly increases savings during the deferral period, leading to higher postdisbursement spending on lumpy goods. Two years later, after two additional rounds of the savings scheme, we find that treated workers have made permanent improvements to their homes. The popularity of the scheme implies a lack of good alternative savings options. The results of a follow-up experiment suggest that demand for the scheme is partly due to its ability to address self-control issues.

************

In Rwanda,

Recruitment, Effort, and Retention Effects of Performance Contracts for Civil Servants: Experimental Evidence from Rwandan Primary Schools  by Clare Leaver, Owen Ozier, Pieter Serneels and Andrew Zeitlin

Abstract: This paper reports on a two-tiered experiment designed to separately identify the selection and effort margins of pay for performance (P4P). At the recruitment stage, teacher labor markets were randomly assigned to a "pay-for-percentile" or fixed-wage contract. Once recruits were placed, an unexpected, incentive-compatible, school-level re-randomization was performed so that some teachers who applied for a fixed-wage contract ended up being paid by P4P, and vice versa. By the second year of the study, the within-year effort effect of P4P was 0.16 standard deviations of pupil learning, with the total effect rising to 0.20 standard deviations after allowing for selection. 

*************

and in India,

On Her Own Account: How Strengthening Women's Financial Control Impacts Labor Supply and Gender Norms  By Erica Field, Rohini Pande, Natalia Rigol, Simone Schaner and Charity Troyer Moore

Abstract: Can increasing control over earnings incentivize a woman to work, and thereby influence norms around gender roles? We randomly varied whether rural Indian women received bank accounts, training in account use, and direct deposit of public sector wages into their own (versus husbands') accounts. Relative to the accounts only group, women who also received direct deposit and training worked more in public and private sector jobs. The private sector result suggests gender norms initially constrained female employment. Three years later, direct deposit and training broadly liberalized women's own work-related norms, and shifted perceptions of community norms. 

Tuesday, July 13, 2021

Workshop on Behavioral Game Theory, University of East Anglia, July 15-16

 Here's the announcement:

"Our annual Workshop on Behavioral Game Theory offers leading researchers the opportunity to present current experimental research related to the topic of game theory: the study of strategic interaction using methods of game theory, experimental economics and psychology.  

"The workshop will last for two days on 15 and 16 July. There will be twenty invited speakers along with three plenary talks (see further information below).  

...

"Speakers:

Daniel Friedman, UC Santa Cruz; Yaroslav Rosokha, Purdue University; Heinrich Nax, University of Zurich; Frank Heinemann, TU Berlin; Friederike Mengel, University of Essex; Andis Sofianos, University of Heidelberg; David Gill, Purdue University; Tridib Sharma, ITAM; Alexander Brown, Texas A&M University; Tatiana Kornienko, University of Edinburgh; Xiaomin Li, Caltech; Emanuel Vespa, University of California, San Diego; Evan Calford, Australian National University; Chiara Aina, University of Zurich; Yuval Heller, Bar-Ilan University; Alistair Wilson, University of Pittsburgh; Andy Brownback, University of Arkansas; Manuel Munoz-Herrera, NYU Abu Dhabi; David J Cooper, Florida State University & University of East Anglia; Filippo Massari, University of East Anglia; 

Plenary Speakers: Cristina Bicchieri, University of Pennsylvania; Yan Chen, University of Michigan; Jörgen Weibull, Stockholm School of Economics

Friday, June 18, 2021

An Organ Donation Clarification Act is introduced again

 Here's a new bill introduced into Congress by Matt Cartwright:

To amend the National Organ Transplant Act to clarify the definition of valuable consideration, to clarify that pilot programs that honor and promote organ donation do not violate that Act, and for other purposes. 

The background material it refers to (cost savings, Israeli law, etc.) will be familiar to readers of this blog. (See my past posts about Congressman Cartwright's previous efforts.)  

Here's a press release from co-sponsor Joe Wilson:

CONGRESSMEN WILSON, CARTWRIGHT INTRODUCE BIPARTISAN LEGISLATION TO BOOST ORGAN DONATION  Jun 8, 2021 Press Release

"On May 28, Congressmen Joe Wilson (R-SC-02) and Matt Cartwright (D-PA-08) re-introduced the Organ Donation Clarification Act, a bill with Democratic and Republican support to reduce barriers to organ donation and allow for a pilot program to test the effectiveness of non-cash incentives to increase the supply of organs for transplantation.

...

"The Organ Donation Clarification Act would:

Clarify that certain reimbursements are not valuable consideration but are reimbursements for expenses a donor incurs; and

Allow government-run pilot programs to test the effect of providing non-cash benefits to promote organ donation.

This bipartisan legislation is endorsed by the American Medical Association, Americans for Tax Reform, American Transplant Foundation, Foundation for Kidney Transplant Research, National Kidney Donation Organization and Wait List Zero.

Additional co-sponsors in the U.S. House are Reps. Steve Cohen (D-TN-09), Brian Fitzpatrick (R-PA-01), Mike Kelly (R-PA-16), David McKinley (R-WV-01), Del. Michael San Nicolas (D-Guam-At Large), Jan Schakowsky (D-IL-09), Adam Schiff (D-CA-28), Jackie Speier (D-CA-14), William Timmons (R-SC-04) and Don Young (R-AK-At Large)"


Thursday, April 15, 2021

Limiting job applications in an online labor market: by John Horton and Shoshana Vasserman

 Here's an experiment that involved limiting the number of applications to particular jobs in an online labor market, in which many applicants are likely close substitutes.

Job-Seekers Send Too Many Applications: Experimental Evidence and a Partial Solution by John J. Horton and Shoshana Vasserman

Abstract: As job-seekers internalize neither the full benefits nor the full costs of their application decisions, job openings do not necessarily obtain the socially efficient number of applications. Using a field experiment conducted in an online labor market, we find that some job openings receive far too many applications, but that a simple intervention can improve the situation. A treated group of job openings faced a soft cap on applicant counts. However, employers could easily opt out by literally clicking a single button. This tiny imposed cost on the demand side had large effects on the supply side, reducing the number of applicants to treated jobs by 11%—with even larger reductions in jobs where additional applicants were likely to be inframarginal. This reduction in applicant counts had no discernible effect on the probability a hire was made, or on the quality of the subsequent match. This kind of intervention is easy to implement by any online marketplace or job board and has attractive properties, saving job-seekers effort while still allowing employers with high marginal returns to more applicants to get them.

"In this paper, we describe an experiment conducted in an online labor market that influenced the size of applicant pools faced by employers. This was done by imposing a soft cap on the number of applicants that a job opening could receive, as well as limiting the duration of the window of time during which applications could be received: when a job opening received 50 applicants—or when 120 hours (5 days) had passed—no more applicants could apply unless the employer explicitly asked for more applicants. The intent of the intervention was to prevent job-seekers from applying to jobs where their application was likely to either be ignored or simply displace some other applicant, without preventing employers with high marginal returns to more applicants from obtaining them.

...

There is no evidence that better or worse matches were made in the treatment group, as measured by the feedback given by the employer at the end of the contract or in hours-worked. If anything, employer satisfaction rose slightly in the treatment.

The lack of effects on hiring or match quality is seemingly surprising, but likely reflects the fact that price competition among workers “prices in” vertical differences among workers, leaving firms close to indifferent over applicants, as in Romer (1992). Because of this indifference, substitution among applicants is not very costly to employers.

...

"only about 7% of employers requested more applicants by pushing the button.

"The treatment intervention likely saved job-seekers substantial time—more so than the percentage changes in job post applicant counts would seemingly imply. To see why the treatment has out-sized effects on job seekers, note that although relatively few job openings were affected by the 50 applicant cap (about 10%), these job openings are disproportionately important to job-seekers, as they attracted 43% of applications. This difference simply reflects the fact that a randomly selected application is more likely to be sent to a job with a high applicant count.
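That last point is size-biased sampling. A hypothetical illustration (the counts are invented; only the 10%-of-posts pattern mirrors the paper):

```python
# Hypothetical market: 90 posts with 5 applications each, plus
# 10 popular posts with 50 applications each (the ones hitting the cap).
small_posts, small_apps = 90, 5
capped_posts, capped_apps = 10, 50

total_apps = small_posts * small_apps + capped_posts * capped_apps  # 950
share_of_posts = capped_posts / (small_posts + capped_posts)        # 0.10
# Probability a randomly chosen *application* went to a capped post:
share_of_apps = capped_posts * capped_apps / total_apps

print(share_of_posts, round(share_of_apps, 2))  # 0.1 0.53
```

So a cap that binds on only 10% of posts can still touch over half of all applications, which is why the time savings to job-seekers are out-sized relative to the share of affected posts.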

Wednesday, March 24, 2021

Negotiating climate change, by Schmidt and Ockenfels in PNAS

 Now that there's light at the end of the Covid tunnel, we can turn our attention back to big problems:

Focusing climate negotiations on a uniform common commitment can promote cooperation  by Klaus M. Schmidt and Axel Ockenfels, PNAS March 16, 2021 118 (11) e2013070118; https://doi.org/10.1073/pnas.2013070118

Edited by Lise Vesterlund, University of Pittsburgh, and accepted by Editorial Board Member Paul R. Milgrom

Abstract: International cooperation on the reduction of greenhouse gas emissions, disarmament, or free trade needs to be negotiated. The success of such negotiations depends on how they are designed. In the context of international climate change policy, it has been proposed [e.g., M. L. Weitzman J. Assoc. Environ. Resour. Econ. 1, 29–49 (2014)] that shifting the negotiation focus to a uniform common commitment (such as a uniform minimum carbon price) would lead to more ambitious cooperation. Yet, a proof-of-concept for this important claim is lacking. Based on game theoretical analyses, we present experimental evidence that strongly supports this conjecture. In our study, human subjects negotiate contributions to a public good. Subjects differ in their benefits and costs of cooperation. Participation in the negotiations and all commitments are voluntary. We consider treatments in which agreements are enforceable, and treatments in which they have to be self-enforcing. In both situations, negotiating a uniform common commitment is more successful in promoting cooperation than negotiating individual commitments (as in the Paris Agreement) and complex common commitments that tailor the commitment to the specific situation of each party (as attempted with the Kyoto Protocol). Furthermore, as suggested by our model, a uniform common commitment benefits most from being enforced.

Sunday, February 28, 2021

What Motivates Paternalism? By Ambuehl, Bernheim, and Ockenfels in the AER

 I have long been interested in repugnant transactions: transactions that some people would like to engage in, but that others not themselves involved in the transaction ('third parties') think should be forbidden. That's a big class of phenomena (even when we exclude transactions that third parties object to because they might suffer negative externalities). In some cases (e.g. opposition to same-sex marriage) there seems to be a lack of empathy with those who want or need to transact in ways that third parties object to. In other cases (e.g. opposition to surrogacy) there often seems to be a desire to protect vulnerable parties (e.g. potential surrogate mothers) from entering into a transaction that the objecting third parties believe would harm them. This latter kind of objection often falls under the label "paternalism."

Here's a paper in the latest AER that explores and finds paternalism in the lab.

What Motivates Paternalism? An Experimental Study By Sandro Ambuehl, B. Douglas Bernheim, and Axel Ockenfels, American Economic Review, March 2021, 111(3): 787–830, https://doi.org/10.1257/aer.20191039

Abstract: "We study experimentally when, why, and how people intervene in others' choices. Choice Architects (CAs) construct opportunity sets containing bundles of time-indexed payments for Choosers. CAs frequently prevent impatient choices despite opportunities to provide advice, believing Choosers benefit. They violate common behavioral welfare criteria by removing impatient options even when all pay-offs are delayed. CAs intervene not by removing options they wish they could resist when choosing for themselves (mistakes-projective paternalism), but rather as if they seek to align others' choices with their own aspirations (ideals-projective paternalism). Laboratory choices predict subjects' support for actual paternalistic policies. "

Monday, January 4, 2021

Randomized control trials plus preferences: a market design for experiments by Yusuke Narita in PNAS

 Random assignment of patients to experimental treatments is intended to allow statisticians to cleanly measure the effect of the treatments. But if there is evidence that some patients might profit more from some treatment than others, fully random assignment may not maximize health outcomes. And if patients have preferences (e.g. for the risk of receiving a problematic kidney for transplant versus the risk of waiting for a better one), then fully random assignment may not maximize welfare. Yusuke Narita thinks about how to design RCTs that elicit patient preferences and take account of priors about outcomes, while still allowing the necessary statistical tests to determine treatment effects.

Incorporating ethics and welfare into randomized experiments  by Yusuke Narita

PNAS January 5, 2021 118 (1) e2008740118; https://doi.org/10.1073/pnas.2008740118

Edited by Parag Pathak, Massachusetts Institute of Technology, Cambridge, MA, and accepted by Editorial Board Member Paul R. Milgrom September 30, 2020 

"Abstract: Randomized controlled trials (RCTs) enroll hundreds of millions of subjects and involve many human lives. To improve subjects’ welfare, I propose a design of RCTs that I call Experiment-as-Market (EXAM). EXAM produces a welfare-maximizing allocation of treatment-assignment probabilities, is almost incentive-compatible for preference elicitation, and unbiasedly estimates any causal effect estimable with standard RCTs. I quantify these properties by applying EXAM to a water-cleaning experiment in Kenya. In this empirical setting, compared to standard RCTs, EXAM improves subjects’ predicted well-being while reaching similar treatment-effect estimates with similar precision.

...

"RCTs involve large numbers of participants. Between 2007 and 2017, over 360 million patients and 22 million individuals participated in registered clinical trials and social RCTs, respectively. Moreover, these experiments often randomize high-stakes treatments. For instance, in a glioblastoma therapy trial (1), the 5-y death rate of glioblastoma patients was 97% in the control group, but only 88% in the treatment group. In expectation, therefore, the lives of up to 9% of the study’s 573 participants depended on who received treatments. Social RCTs also often randomize critical treatments such as basic income, high-wage job offers, and HIV testing.

"RCTs, thus, influence the fate of many people around the world, raising a widely recognized ethical concern with the randomness of RCT treatment assignment: “How can a physician committed to doing what he thinks is best for each patient tell a woman with breast cancer that he is choosing her treatment by something like a coin toss? How can he give up the option to make changes in treatment according to the patient’s responses?

...

"I propose an experimental design that I call Experiment-as-Market (EXAM). I choose this name because EXAM is an experiment based on an imaginary centralized market and its competitive equilibrium (12, 13). EXAM first endows each subject with a common artificial budget and lets her use the budget to purchase the most preferred (highest WTP) bundle of treatment-assignment probabilities given their prices. The prices are personalized so that each treatment is cheaper for subjects with better predicted effects of the treatment. EXAM computes its treatment-assignment probabilities as what subjects demand at market-clearing prices, where subjects’ aggregate demand for each treatment is balanced with its supply or capacity (assumed to be exogenously given). EXAM, finally, requires every subject to be assigned to every treatment with a positive probability.

"This virtual-market construction gives EXAM nice welfare and incentive properties. EXAM is Pareto optimal, in that no other design makes every subject better off in terms of expected predicted effects of and WTP for the assigned treatment. EXAM also allows the experimenter to elicit WTP in an asymptotically incentive-compatible way. That is, when the experimenter asks subjects to self-report their WTP for each treatment to be used by EXAM, every subject’s optimal choice is to report her true WTP, at least for large experiments.

"Importantly, EXAM also allows the experimenter to estimate the same treatment effects as standard RCTs do. Intuitively, this is because EXAM is an experiment stratified on observable predicted effects and WTP, in which the experimenter observes each subject’s assignment probabilities (propensity scores). As a result, EXAM’s treatment assignment is random (independent from anything else), conditional on the observables. The conditionally independent treatment assignment allows the experimenter to unbiasedly estimate the average treatment effects (ATEs) conditional on observables. By integrating such conditional effects, EXAM can unbiasedly estimate the (unconditional) ATE and other effects, as is the case with any stratified experiment (14)."

**********

somewhat related post:

Sunday, July 12, 2020

Tuesday, December 29, 2020

College admissions in Australia, by Guillen, Kesten, Kiefer, and Melatos

 Here's a working paper from the University of Sydney that looks at the New South Wales college admissions clearinghouse, in which students receive accurate but unclear advice from the clearinghouse operator, together with clear but incorrect advice from individual universities. In an experiment, the authors look at the effects of these different kinds of advice when presented separately and together.

A Field Evaluation of a Matching Mechanism: University Applicant Behaviour in Australia by Pablo Guillen, Onur Kesten, Alexander Kiefer, and Mark Melatos, December 2020

"Abstract: The majority of undergraduate university applications in the state of New South Wales –Australia’s largest state – are processed by a clearinghouse, the Universities Admissions Centre (UAC). Applicants submit an ordered list of degree preferences to UAC which applies a matching algorithm to allocate university places to eligible applicants. The algorithm incorporates the possibility of a type of “early action” through which applicants receive guaranteed enrolments. Applicants receive advice on how to construct their degree preference list from multiple sources (including individual universities). This advice is often confusing, inconsistent with official UAC advice or simply misleading. To evaluate the policy implications of this design choice, we run a large sample (832 observations) experiment with experienced participants in a choice environment that mimics the UAC application process and in which truth telling is a dominant strategy. We vary the advice received across treatments: no advice, UAC advice only, (inaccurate) university advice only, and both UAC and university advice together. Overall, 75.5% of participants fail to use the dominant strategy. High rates of applicant manipulation persist even when applicants are provided with accurate UAC advice. We find that students who attend non-selective government schools are more prone to use strictly dominated strategies than those who attend academically selective government schools and private schools."

The matching algorithm, in which applicants are allowed to list only six choices, is described as follows:

"The algorithm used by UAC sequentially checks each applicant’s eligibility for a degree starting with her first choice. It is therefore reminiscent of the Boston mechanism widely used for school choice in the U.S. (Abdulkadiroglu et al, 2005; Abdulkadiroglu et al, 2006) and college admissions in China (Chen and Kesten, 2017) among other places. However, the absence of formal capacity constraints (on university enrolments) makes this Australian context a unique instance in which the outcome of the algorithm also coincides with that of the celebrated Deferred Acceptance (DA) algorithm of Gale and Shapley (1962). Due to this equivalence, the UAC algorithm does not inherit the strategic vulnerability of the Boston algorithm. Consequently, students are still able to construct their preferred degree list in a manner that is consistent with their true preferences.4

"While the UAC admissions system appears similar to a typical college admissions problem (see, e.g., Roth and Sotomayor, 1991 and Balinski and Sönmez, 1998), universities in NSW can influence student applications through an additional channel. To limit the uncertainty faced by applicants,5 many universities often grant applicants “guaranteed entry” options.6 These schemes represent a university’s commitment to an individualised entry requirement for a particular degree, subject to the candidate’s achievement of a certain score. This innovative feature of the UAC system can be viewed as the centralized or algorithmic embodiment of “early decision” schemes used by over two-thirds of top colleges in the US (see, e.g., Avery, Fairbanks, and Zeckhauser, 2004) that admit students through a decentralized system.7 Indeed, we are not aware of any other centralized college admissions system that has this type of feature. Under the current UAC algorithm, if an applicant includes a guaranteed entry degree in her preference list, this implies that she will not be considered for any degree that she has listed lower on her list provided that she attains the pre-announced entry score."

They conclude in part that

"accurate, albeit somewhat complicated, advice may fail to mitigate the impact of inaccurate (but straightforward) advice."