A new paper by three computer scientists and an economist reports on some connections between privacy and incentive compatibility.
MECHANISM DESIGN IN LARGE GAMES: INCENTIVES AND PRIVACY
by
MICHAEL KEARNS, MALLESH M. PAI, AARON ROTH and JONATHAN ULLMAN
July 18, 2012
ABSTRACT
We study the design of mechanisms satisfying two desiderata: incentive compatibility and privacy. The first, incentive compatibility, requires that each agent be incentivized to report her private information truthfully. The second, privacy, requires that the mechanism not reveal ‘much’ about any agent’s type to other agents. We propose a notion of privacy we call Joint Differential Privacy. It is a variant of Differential Privacy, a robust notion of privacy used in the theoretical computer science literature. We show by construction that such mechanisms, i.e. ones which are both incentive compatible and jointly differentially private, exist when the game is ‘large’, i.e. when there are a large number of players and any player’s action affects any other player’s payoff by at most a small amount. Our mechanism adds carefully selected noise to no-regret algorithms similar to those studied in Foster-Vohra [FV97] and Hart-Mas-Colell [HMC00]. It therefore implements an approximate correlated equilibrium of the full-information game induced by players’ reports.
*********
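For readers who haven't met the privacy notion before, it may help to state it. A mechanism $M$ is $\varepsilon$-differentially private if for every pair of type profiles $t, t'$ that differ in a single agent's report, and every set of outcomes $S$,
\[
\Pr[M(t) \in S] \le e^{\varepsilon} \, \Pr[M(t') \in S].
\]
Joint Differential Privacy, as I read the paper, asks this only of the joint output sent to everyone other than agent $i$: for every agent $i$, every pair of reports $t_i, t_i'$, every profile $t_{-i}$ of the others' reports, and every set $S$,
\[
\Pr\big[M(t_i, t_{-i})_{-i} \in S\big] \le e^{\varepsilon} \, \Pr\big[M(t_i', t_{-i})_{-i} \in S\big],
\]
so agent $i$'s own recommendation may depend arbitrarily on her own report.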
As I understand it, adding appropriate randomness to regret-learning algorithms doesn't harm their long-run equilibration properties, while giving them good privacy properties; the two together yield good incentive properties.
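To make the "noise plus no-regret" idea concrete, here is a toy sketch, entirely my own illustration rather than the paper's construction: a multiplicative-weights learner (a standard no-regret algorithm, simpler than the Foster-Vohra or Hart-Mas-Colell procedures) whose payoff observations are perturbed with Laplace noise before each update. The function name and parameters are mine; the point is only that average regret still vanishes despite the noise.

    # Toy illustration (mine, not the paper's mechanism): multiplicative
    # weights with Laplace-noised payoff observations.
    import numpy as np

    def noisy_multiplicative_weights(payoffs, eta=0.05, noise_scale=0.5, rng=None):
        """Run multiplicative weights on a T x K matrix of per-round payoffs
        in [0, 1], adding Laplace noise to each observed payoff vector.
        Returns the T x K array of mixed strategies played."""
        rng = rng or np.random.default_rng(0)
        T, K = payoffs.shape
        weights = np.ones(K)
        mixed = np.empty((T, K))
        for t in range(T):
            mixed[t] = weights / weights.sum()
            # The learner only ever sees a noisy version of the payoffs.
            observed = payoffs[t] + rng.laplace(scale=noise_scale, size=K)
            weights *= np.exp(eta * observed)
        return mixed

    # Action 0 is better by 0.2 on average; despite the noise the learner
    # finds it, and the per-round regret against the best fixed action shrinks.
    rng = np.random.default_rng(1)
    T, K = 5000, 3
    payoffs = rng.uniform(0.0, 0.8, size=(T, K))
    payoffs[:, 0] += 0.2
    plays = noisy_multiplicative_weights(payoffs, rng=rng)
    realized = (plays * payoffs).sum()
    best_fixed = payoffs.sum(axis=0).max()
    print("per-round regret:", (best_fixed - realized) / T)

The paper's actual privacy accounting is of course far more delicate than sprinkling Laplace noise on payoffs; this is only meant to show the equilibration half of the story.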