Showing posts with label algorithms.

Friday, March 8, 2019

Why is it hard for securities exchanges to restore price competition (instead of speed competition)?

Many stock exchanges earn rents by giving privileged access to high-speed algorithmic traders. Why doesn't a new exchange enter the market with a design that would privilege price competition over speed competition? Budish, Lee and Shim have some thoughts on that:

Will the Market Fix the Market? A Theory of Stock Exchange Competition and Innovation
Eric Budish, Robin S. Lee and John J. Shim
February 27, 2019

Abstract: As of early 2019, there are 13 stock exchanges in the U.S., across which over 1 trillion shares ($50 trillion) are traded annually. All 13 exchanges use the continuous limit order book market design, a design that gives rise to latency arbitrage—arbitrage rents from symmetrically observed public information—and the associated high-frequency trading arms race (Budish, Cramton and Shim, 2015). Will the market adopt new market designs that address the negative aspects of high-frequency trading? This paper builds a theoretical model of stock exchange competition to answer this question. Our model, shaped by institutional details of the U.S. equities market, shows that under the status quo market design: (i) trading behavior across the many distinct exchanges is as if there is just a single “synthesized” exchange; (ii) competition among exchanges is fierce on the dimension of traditional trading fees; but (iii) exchanges capture and maintain significant economic rents from the sale of speed technology—arms for the arms race. Using a variety of data, we document seven stylized empirical facts that align with these predictions. We then use the model to examine the private and social incentives for market design innovation. We show that the market design adoption game among exchanges is a repeated prisoner’s dilemma. If an exchange adopts a new market design that eliminates latency arbitrage, it would win share and earn economic rents; perhaps surprisingly, the usual coordination problems associated with getting a new market design off the ground are not central. However, imitation by other exchanges would result in an equilibrium that resembles the status quo with competitive trading fees, but now without the rents from the speed race.
We conclude that although the social returns to adoption are large, the private returns are much smaller for an entrant exchange and negative for an incumbent that currently derives rents from the inefficiencies that the new design eliminates. Nevertheless, our analysis does not imply that a market-wide market design mandate is necessary. Rather, it points to a more circumscribed policy response that would tip the balance of incentives and encourage the “market to fix the market.” 
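The market design that eliminates latency arbitrage in the authors' earlier work is the frequent batch auction: orders arriving within a short interval are pooled and cleared at a single uniform price, so a nanosecond speed edge confers no advantage within a batch. Here is a minimal sketch of clearing one batch (the function and the example orders are my own illustration, not the paper's implementation):

```python
# Clear one batch of a frequent batch auction: all orders collected during
# the interval trade at a single uniform price, independent of arrival order.

def clear_batch(bids, asks):
    """bids, asks: lists of (price, quantity). Returns (price, volume)."""
    # Expand into unit lots and sort into demand and supply curves.
    unit_bids = sorted((p for p, q in bids for _ in range(q)), reverse=True)
    unit_asks = sorted(p for p, q in asks for _ in range(q))
    # Trade as long as the marginal bid is willing to pay the marginal ask.
    volume = 0
    while (volume < min(len(unit_bids), len(unit_asks))
           and unit_bids[volume] >= unit_asks[volume]):
        volume += 1
    if volume == 0:
        return None, 0
    # Uniform clearing price: midpoint of the marginal matched bid and ask.
    price = (unit_bids[volume - 1] + unit_asks[volume - 1]) / 2
    return price, volume

price, volume = clear_batch(bids=[(101, 2), (100, 1)],
                            asks=[(99, 1), (100, 2)])
# All three lots trade at the uniform price 100.0.
```

Because every order in the batch clears at the same price, racing to be first during the interval earns nothing; that is the sense in which the design converts competition on speed into competition on price.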

Tuesday, September 18, 2018

A school choice mechanism from Taiwan, by Dur, Pathak, Song and Sönmez

Deduction Dilemmas: The Taiwan Assignment Mechanism

Umut M. Dur, Parag A. Pathak, Fei Song, Tayfun Sönmez

NBER Working Paper No. 25024
Issued in September 2018
NBER Program(s): Economics of Education, Labor Studies
This paper analyzes the properties of the Taiwan mechanism, used for high school placement nationwide starting in 2014. In the Taiwan mechanism, points are deducted from an applicant's score with larger penalties for lower ranked choices. Deduction makes the mechanism a new hybrid between the well-known Boston and deferred acceptance mechanisms. Our analysis sheds light on why Taiwan's new mechanism has led to massive nationwide demonstrations and why it nonetheless still remains in use.


"Loosely speaking, in Taiwan’s deduction system, a student’s priority is reduced based on the order in which preferences are ranked. Table 1 lists the schedule of points employed in 2014 across the largest districts in Taiwan. For instance, in Jibei, the largest district with over 60,000 applicants, a student’s score at her second choice is reduced by 1, the score at her third choice is reduced by 2, and so on. In Yunlin district, no points are deducted for the first four choices, and 2 points are deducted from choices five through eight. The deduction system has now been in place for the last five years, from 2014-2018."

This system of deductions determines each student's priority at each school; allocations are then determined by a deferred acceptance algorithm based on student preferences and these school priorities.
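The deduction step described in the quote is easy to state in code. A small sketch (the two schedules below are patterned on the quoted description of the Jibei and Yunlin districts, but are illustrative, not official tables):

```python
# Rank-based score deductions in the Taiwan mechanism: a student's raw exam
# score is reduced according to where a school sits in her preference list,
# and deferred acceptance is then run using the deducted scores as priorities.

# schedule[k] = points deducted at the (k+1)-th ranked choice
JIBEI  = [0, 1, 2, 3, 4, 5, 6, 7]    # lose one more point per rank
YUNLIN = [0, 0, 0, 0, 2, 2, 2, 2]    # first four choices free, then 2 points

def effective_scores(raw_score, preferences, schedule):
    """Return {school: priority score after deduction} for one student."""
    return {school: raw_score - schedule[rank]
            for rank, school in enumerate(preferences)}

scores = effective_scores(90, ["A", "B", "C"], JIBEI)
# {'A': 90, 'B': 89, 'C': 88}: ranking a school lower costs priority there,
# which gives students a reason to rank strategically, as under Boston.
```

Setting every deduction to zero recovers plain deferred acceptance, while very steep deductions approximate the Boston mechanism; the Taiwan schedules sit in between, which is the hybrid the paper analyzes.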

Monday, February 19, 2018

Making algorithms fair

Fairness is elusive, even in algorithms. "Color blindness" doesn't do the trick, since minority populations may differ in other ways from majority populations.
My attention was drawn to this news story about ongoing work by U. Penn computer scientists: Combatting ‘Fairness Gerrymandering’ with Socially Conscious Algorithms

And here's the article on which the story is based:

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to intentional or inadvertent "fairness gerrymandering", in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but raises several computational challenges. It is no longer clear how to audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
We then derive two algorithms that provably converge to the best fair classifier, given access to oracles which can solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner and an Auditor. Our first algorithm provably converges in a polynomial number of steps. Our second algorithm enjoys only provably asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the simpler algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
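The phenomenon in the abstract is easy to reproduce on toy data: a classifier can equalize false positive rates across each protected attribute taken separately while treating an intersectional subgroup very differently. A small hand-built illustration (this demonstrates the problem, not the paper's oracle-based auditing algorithm):

```python
# "Fairness gerrymandering" on four hand-built records: false positive
# rates are equal across genders and across races taken one at a time,
# but the subgroup (gender=0, race=0) is maximally mistreated.

# Each record: (gender, race, true_label, predicted_label). All true labels
# are negative here, so every positive prediction is a false positive.
data = [
    (0, 0, 0, 1),
    (0, 1, 0, 0),
    (1, 0, 0, 0),
    (1, 1, 0, 1),
]

def fpr(records):
    """False positive rate among the true negatives in `records`."""
    negatives = [r for r in records if r[2] == 0]
    return sum(r[3] for r in negatives) / len(negatives)

by_gender = {g: fpr([r for r in data if r[0] == g]) for g in (0, 1)}
by_race   = {c: fpr([r for r in data if r[1] == c]) for c in (0, 1)}
subgroup  = fpr([r for r in data if r[0] == 0 and r[1] == 0])

# by_gender == by_race == {0: 0.5, 1: 0.5}: "fair" on each attribute alone.
# subgroup == 1.0: the intersection bears all of one gender's errors.
```

Checking a fixed list of groups misses this; auditing over a structured class of subgroups is what the paper shows is equivalent to weak agnostic learning.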

Friday, November 17, 2017

"Open algorithms" law proposed in New York City

Here is a bill that has been proposed (and sent to committee) by the New York City Council:
Automated processing of data for the purposes of targeting services, penalties, or policing to persons.
Title: A Local Law to amend the administrative code of the city of New York, in relation to automated processing of data for the purposes of targeting services, penalties, or policing to persons
Summary: This bill would require agencies that use algorithms or other automated processing methods that target services, impose penalties, or police persons to publish the source code used for such processing. It would also require agencies to accept user-submitted data sets that can be processed by the agencies’ algorithms and provide the outputs to the user.


Monday, November 28, 2016

Fairness for Digital Infrastructure conference, January 19-20, 2017, at Penn

Fairness for Digital Infrastructure, January 19-20, 2017, UPenn, Philadelphia PA

"A large part of our digital infrastructure is designed to automate decision making, and ideally should improve economic efficiency. Automated decisions now govern, among many other things, whether we are approved for credit cards, what advertisements we are shown, and what search results we see. Algorithms determine the government’s perception of an individual’s risk (e.g. at airport security) or trustworthiness (e.g. in evaluating recidivism risk for parole decisions). The aim of the workshop is to better understand issues surrounding “unfairness” associated with the use of machine learning and automated decision making. This includes the frictions that are the cause of such inadvertent unfairness, and novel technical solutions to solve these problems.
This workshop will take place at the University of Pennsylvania in Philadelphia on January 19th and 20th 2017.
Co-organizers: Sampath Kannan, Jamie Morgenstern, Mallesh Pai, Aaron Roth, Rakesh Vohra"


Sunday, November 27, 2016

An interview with computer scientist Cynthia Dwork

Quanta Magazine, a publication of the Simons Foundation, has an interview with Cynthia Dwork on differential privacy and fairness, among other things.

How to Force Our Machines to Play Fair
The computer scientist Cynthia Dwork takes abstract concepts like privacy and fairness and adapts them into machine code for the algorithmic age.

Here are some earlier news stories about Apple's introduction of differential privacy to the iPhone,  which I've been following for a number of reasons.

From TechCrunch: What Apple’s differential privacy means for your data and the future of machine learning

From Wired: Apple’s ‘Differential Privacy’ Is About Collecting Your Data—But Not ​Your Data

Apple's differential privacy analyzes the group, protects the individual
Posted on June 21, 2016 
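The core idea behind local differential privacy of this kind is easiest to see in its classic ancestor, randomized response: each user adds noise to their own answer before it leaves the device, so any individual report is deniable, yet the population rate can still be estimated. A minimal sketch of that idea (illustrative only; Apple's actual mechanisms are considerably more elaborate):

```python
import random

def randomize(truth):
    """Report the true yes/no answer half the time; otherwise flip a coin."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_rate(responses):
    """Invert the noise: observed "yes" rate = 0.25 + 0.5 * true rate."""
    observed = sum(responses) / len(responses)
    return (observed - 0.25) / 0.5

random.seed(0)
true_answers = [i < 300 for i in range(1000)]    # true "yes" rate: 30%
responses = [randomize(t) for t in true_answers]
est = estimate_rate(responses)                   # close to 0.30
```

Any single "yes" is deniable (it may be the coin), yet the aggregate estimate concentrates near the true rate as the population grows; that trade-off between individual deniability and aggregate accuracy is what the news stories above are describing.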

Monday, November 7, 2016

Sociology of high frequency trading

The journal Economy and Society has a special issue on Cultures of High-Frequency Trading.


*************
A related working paper I enjoyed reading, on Donald MacKenzie's website:

How Algorithms Interact: Goffman’s ‘Interaction Order’ in Automated Trading
Donald MacKenzie
April 2016

Tuesday, September 20, 2016

Stable matching for university admissions in Vietnam?

Here's a proposal (in Vietnamese) to use a deferred acceptance algorithm to organize university admissions in Vietnam:

Kỳ thi THPT Quốc Gia 2016
Về việc áp dụng thuật toán DAA của Gale-Shapley trong xét tuyển – Đối Thoại Giáo Dục

Google translate renders the headline this way:
"National high school exams in 2016
On application of the algorithm of Gale-Shapley DAA in admissions - Dialogue Education"

Sunday, February 14, 2016

A Valentine's day salute to Gale and Shapley and the deferred acceptance algorithm

The University of California celebrates David Gale and Lloyd Shapley with an article appropriate to Valentine's Day:

How a matchmaking algorithm saved lives
Long before dating sites, a pair of economists delved into the question of matchmaking, and hit upon a formula with applications far beyond romance.

The article uses Jane Austen characters to explain the deferred acceptance algorithm, along with illustrations.



They also link to this animated, interactive Berkeley MathSite exhibit on the stable marriage problem. (It is part of a larger MathSite interactive mathematics exhibit supported by the David Gale Fund for Interactive Mathematics.)
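For readers who prefer code to Jane Austen, the deferred acceptance algorithm fits in a few lines. A minimal proposer-proposing sketch (names and preference lists are made up for illustration, and complete preference lists are assumed):

```python
# Gale-Shapley deferred acceptance: proposers propose in preference order;
# each receiver tentatively holds the best offer seen so far and rejects
# the rest. No acceptance is final until the algorithm terminates.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """prefs: dicts mapping a name to an ordered list of names on the
    other side. Returns {proposer: matched receiver}."""
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next index to propose to
    held = {}                                      # receiver -> held proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = held.get(r)
        if current is None:
            held[r] = p                            # first offer: hold it
        elif rank[r][p] < rank[r][current]:
            held[r] = p                            # r trades up to p
            free.append(current)                   # former holder is free again
        else:
            free.append(p)                         # r rejects p
    return {p: r for r, p in held.items()}

matching = deferred_acceptance(
    {"alice": ["x", "y"], "bob": ["x", "y"]},
    {"x": ["bob", "alice"], "y": ["alice", "bob"]},
)
# {'alice': 'y', 'bob': 'x'}: stable, since no proposer and receiver
# prefer each other to their assigned partners.
```

Gale and Shapley proved the outcome is always stable, and optimal for the proposing side among all stable matchings, which is why who proposes matters in applications.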
*********************

Valentine's day reminds me of a post from a few years ago:

What has G-d been doing since the Creation? (Matchmaking, of course...)


Happy Valentine's day to all, from Philadelphia:)

Tuesday, December 22, 2015

Videos of the Simons Institute lectures on Algorithmic Game Theory and Practice

The archived videos from the Algorithmic Game Theory and Practice workshop at the Simons Institute for the Theory of Computing can be found by clicking on the titles of individual talks on the workshop schedule, or on the Simons Institute YouTube channel. The workshop ran from Monday, November 16 through Friday, November 20, 2015.