Monday, October 11, 2010

Results of Games 1st choice prediction competition

Here's the end (or maybe the middle) of the story that began with Predicting behavior in games: a competition.

Hi,

We write to inform you of the results of the Games 1st choice prediction
competition. The competition focused on the prediction of behavior in repeated
market entry games. We ran two sets of experiments: we published the results of
the first set and challenged other researchers to predict the results of the
second set.

Twenty-two different teams participated in the competition. The total number of
submissions was 25.

The winners are Wei Chen, Chih-Han Chen, Yi-Shan Lee, and Shu-Yu Liu from
National Taiwan University.

The runners-up are Tomás Lejarraga, Varun Dutt, and Cleotilde Gonzalez from
Carnegie Mellon University.

The winners and the runners-up were invited to submit papers to Games that
describe their models in detail. Here is a short summary:

The winning model refines I-SAW (the best baseline model described on the
competition website) by adding the assumption of a limited memory span. The
refined model assumes: (1) reliance on a small sample of past experiences,
(2) strong inertia and recency effects, and (3) surprise-triggers-change.
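The three assumptions above can be illustrated with a minimal sketch. This is not the authors' actual model; the function and parameter names (`sample_size`, `inertia_prob`) and their values are illustrative placeholders:

```python
import random

def choose(history, actions, sample_size=3, inertia_prob=0.5, prev_action=None):
    """Pick an action by comparing small samples of past payoffs.

    history: dict mapping action -> list of payoffs observed for it.
    With probability `inertia_prob` the previous action is simply repeated
    (inertia); otherwise each action is valued by the mean of a small
    random sample of its past payoffs, biased toward the most recent trial.
    """
    if prev_action is not None and random.random() < inertia_prob:
        return prev_action  # inertia: repeat the last choice
    best, best_val = None, float("-inf")
    for a in actions:
        payoffs = history.get(a, [])
        if not payoffs:
            return a  # explore any action not yet tried
        # recency: always include the most recent payoff in the sample
        sample = [payoffs[-1]] + random.choices(payoffs, k=sample_size - 1)
        val = sum(sample) / len(sample)
        if val > best_val:
            best, best_val = a, val
    return best
```

Because each evaluation rests on only a few draws, choices fluctuate from trial to trial even when the payoff history is stable, which is characteristic of this family of models.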

The runner-up model is based on the Instance-Based Learning (IBL) theory
proposed by Gonzalez, Lerch, and Lebiere (2003). Its basic assumptions are:
retrieval of a set of past experienced outcomes weighted by their probability
of retrieval from memory (the blending mechanism); dependence on the recency
and frequency of past experienced outcomes; and an inertia mechanism that
depends on surprise as a function of blended outcomes.
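The blending mechanism can be sketched as follows, using the ACT-R-style power-law activation that IBL builds on. The parameter values (`decay`, `noise_temp`) are illustrative placeholders, not those of the actual submitted model:

```python
import math

def blended_value(experiences, current_time, decay=0.5, noise_temp=0.25):
    """Blend past outcomes by their probability of retrieval (IBL-style sketch).

    experiences: dict mapping outcome -> list of (earlier) times at which
    it was observed. Activation grows with the frequency and recency of an
    outcome (power-law decay); retrieval probabilities then follow a
    Boltzmann rule, and the blended value is the probability-weighted mean.
    """
    activations = {}
    for outcome, times in experiences.items():
        # frequent and recent outcomes get higher activation
        activations[outcome] = math.log(
            sum((current_time - t) ** (-decay) for t in times)
        )
    # Boltzmann retrieval probabilities over outcomes
    denom = sum(math.exp(a / noise_temp) for a in activations.values())
    return sum(
        outcome * math.exp(a / noise_temp) / denom
        for outcome, a in activations.items()
    )
```

An outcome seen three times recently dominates one seen once long ago, so the blended value leans heavily toward the frequent, recent outcome.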

The results support two main suggestions:

(1) Models that assume reliance on small samples of past experiences have a
large advantage over models that assume reliance on running averages of the
previous payoffs (like traditional reinforcement learning and fictitious play
models).
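The contrast between the two kinds of value estimate can be made concrete with a small sketch (the helper names are hypothetical; neither estimator is any team's actual model):

```python
import random

def running_average_estimate(payoffs):
    """Value based on the mean of ALL past payoffs, in the spirit of
    traditional reinforcement learning or fictitious play."""
    return sum(payoffs) / len(payoffs)

def small_sample_estimate(payoffs, k=4):
    """Value based on the mean of a small random sample of past payoffs."""
    sample = random.choices(payoffs, k=k)
    return sum(sample) / len(sample)

# With a rare large loss (say -10 with probability 0.1, +1 otherwise), the
# running average settles near the true expected value (-0.1), while a small
# sample often contains no loss at all, so on most trials the risky option
# looks better than it really is -- consistent with the underweighting of
# rare events observed in decisions from experience.
```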

(2) The difference between learning in market entry games and learning in
individual choice tasks is not large. Indeed, the best models in the current
competition can be described as refinements of the best models in our previous
competition, which focused on an individual repeated choice task (see Erev et
al.).

The raw data from the 80 repeated market entry games that were run in the
current competition, and from the 120 repeated choice problems that were run
in our previous competition, are available online. We encourage you to use
these data sets to improve our understanding of the effect of experience on
economic behavior.
