Quanta Magazine reports on a recent paper on algorithmic collusion (in which a broad class of "dumb" strategies can settle on high prices):
The Game Theory of How Algorithms Can Drive Up Prices
Recent findings reveal that even simple pricing algorithms can make things more expensive
By Ben Brubaker
" how can regulators ensure that algorithms set fair prices? Their traditional approach won’t work, as it relies on finding explicit collusion. “The algorithms definitely are not having drinks with each other,” said Aaron Roth, a computer scientist at the University of Pennsylvania.
...
" if you want to guarantee fair prices, why not just require sellers to use algorithms that are inherently incapable of expressing threats?
"In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms that optimize for their own profit can sometimes yield bad outcomes for buyers. “You can still get high prices in ways that kind of look reasonable from the outside,” said Natalie Collina, a graduate student working with Roth who co-authored the new study.
...
"“Without some notion of a threat or an agreement, it’s very hard for a regulator to come in and say, ‘These prices feel wrong,’” said Mallesh Pai, an economist at Rice University. “That’s one reason why I think this paper is important.”
...
"So, what can regulators do? Roth admits he doesn’t have an answer. It wouldn’t make sense to ban no-swap-regret algorithms: If everyone uses one, prices will fall. But a simple nonresponsive strategy might be a natural choice for a seller on an online marketplace like Amazon, even if it carries the risk of regret.
“One way to have regret is just to be kind of dumb,” Roth said. “Historically, that hasn’t been illegal.”
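A note on the terminology in the excerpt (my own gloss, not taken from the article or the paper): "regret" and "swap regret" are standard learning-theoretic quantities. Writing a_t for the learner's action in round t, b_t for the opponent's, u for the learner's payoff, and A for the learner's action set, the two are roughly:

```latex
% External regret: compare to the best single fixed action in hindsight.
% Swap regret: compare to the best per-action modification rule \varphi.
\[
  R^{\mathrm{ext}}_T \;=\; \max_{a \in A}\,\sum_{t=1}^{T} \big( u(a, b_t) - u(a_t, b_t) \big),
  \qquad
  R^{\mathrm{swap}}_T \;=\; \max_{\varphi : A \to A}\,\sum_{t=1}^{T} \big( u(\varphi(a_t), b_t) - u(a_t, b_t) \big).
\]
```

An algorithm is no-regret if R^ext_T grows sublinearly in T, and no-swap-regret if the stronger quantity R^swap_T does; every no-swap-regret algorithm is also no-regret, but not conversely. The excerpt's point is that the no-swap-regret case is the benign one (if every seller uses such an algorithm, prices fall), while plain no-regret leaves room for the outcome the paper describes.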
#######
And here's the paper:
Algorithmic Collusion Without Threats
There has been substantial recent concern that pricing algorithms might learn to "collude." Supra-competitive prices can emerge as a Nash equilibrium of repeated pricing games, in which sellers play strategies which threaten to punish their competitors who refuse to support high prices, and these strategies can be automatically learned. In fact, a standard economic intuition is that supra-competitive prices emerge from either the use of threats, or a failure of one party to optimize their payoff. Is this intuition correct? Would preventing threats in algorithmic decision-making prevent supra-competitive prices when sellers are optimizing for their own revenue? No. We show that supra-competitive prices can emerge even when both players are using algorithms which do not encode threats, and which optimize for their own revenue. We study sequential pricing games in which a first mover deploys an algorithm and then a second mover optimizes within the resulting environment. We show that if the first mover deploys any algorithm with a no-regret guarantee, and then the second mover even approximately optimizes within this now static environment, monopoly-like prices arise. The result holds for any no-regret learning algorithm deployed by the first mover and for any pricing policy of the second mover that obtains them profit at least as high as random pricing would -- and hence the result applies even when the second mover is optimizing only within a space of non-responsive pricing distributions which are incapable of encoding threats. In fact, there exists a set of strategies, neither of which explicitly encodes threats, that form a Nash equilibrium of the simultaneous pricing game in algorithm space and lead to near-monopoly prices. This suggests that the definition of "algorithmic collusion" may need to be expanded to include strategies without explicitly encoded threats.
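To make the sequential setup concrete, here is a minimal simulation sketch. It is my own illustration under simplifying assumptions, not code from the paper and not the paper's construction: a unit-demand buyer with value 1 buys from the cheaper of two sellers; seller 1 runs Hedge (exponential weights), a standard no-regret algorithm, over a shared price grid; seller 2 is non-responsive, drawing its price i.i.d. from a fixed distribution every round, so it cannot encode threats. Against the illustrative mostly-high-price mixture below, the learner's best fixed price in hindsight is itself a high price, so realized prices end up far above the competitive benchmark; against a point mass at the lowest price, they do not. The grid, horizon, and distributions are all arbitrary choices for illustration.

```python
# Minimal sketch (illustrative assumptions, not the paper's construction):
# seller 1 runs Hedge (exponential weights), a no-regret algorithm, over a
# price grid; seller 2 is non-responsive, drawing its price i.i.d. from a
# fixed distribution every round. A unit-demand buyer with value 1 buys
# from the cheaper seller at that seller's price.
import numpy as np

rng = np.random.default_rng(0)
PRICES = np.round(np.linspace(0.05, 1.0, 20), 2)   # shared price grid
T = 20_000                                         # number of rounds
ETA = np.sqrt(np.log(len(PRICES)) / T)             # standard Hedge step size

def revenue(p_own, p_other):
    """Bertrand revenue: the cheaper seller sells at its own price; ties split."""
    if p_own < p_other:
        return p_own
    if p_own > p_other:
        return 0.0
    return p_own / 2.0

def run(seller2_dist):
    """Seller 1 = Hedge over PRICES; seller 2 samples i.i.d. from seller2_dist."""
    weights = np.ones(len(PRICES))
    rev1 = rev2 = sum_p1 = 0.0
    for _ in range(T):
        probs = weights / weights.sum()
        p1 = rng.choice(PRICES, p=probs)
        p2 = rng.choice(PRICES, p=seller2_dist)
        rev1 += revenue(p1, p2)
        rev2 += revenue(p2, p1)
        sum_p1 += p1
        # Full-information Hedge update: counterfactual payoff of every price vs p2.
        payoffs = np.array([revenue(p, p2) for p in PRICES])
        weights *= np.exp(ETA * payoffs)
        weights /= weights.max()                   # rescale for numerical stability
    return rev1 / T, rev2 / T, sum_p1 / T

# Two illustrative non-responsive strategies for seller 2.
competitive = np.zeros(len(PRICES))
competitive[0] = 1.0                               # always post the lowest price
high_mix = np.full(len(PRICES), 0.25 / (len(PRICES) - 1))
high_mix[-2] = 0.75                                # mostly a near-monopoly price
high_mix /= high_mix.sum()

for name, dist in [("competitive point mass", competitive),
                   ("mostly-high-price mix", high_mix)]:
    r1, r2, p1_bar = run(dist)
    print(f"{name:>24}: seller1 rev/round {r1:.3f}, "
          f"seller2 rev/round {r2:.3f}, seller1 avg price {p1_bar:.3f}")
```

Neither strategy here reacts to the opponent's history, so neither can express a threat; whatever high prices emerge in the second run come purely from the interplay between the no-regret guarantee and a fixed price mixture, which is the qualitative phenomenon the paper formalizes and quantifies.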