Saturday, September 28, 2019

Automatic algorithmic affirmative action, by Ashesh Rambachan and Jonathan Roth

There's been some justified concern that algorithms that make predictions and decisions based on previous decisions made by humans might replicate the human biases embedded in the historical data.  Below is a paper that points out that the opposite effect can occur as well.

As explained here: "Imagine a college that has historically admitted students using (biased) admissions officers, but switches to an algorithm trained on data for their past students. If the admissions officers unfairly set a higher bar for people from group A, then assuming student performance is fairly measured once students arrive on campus, students from group A will appear to be stronger than students from group B. The learned model will therefore tend to favor students from group A, in effect raising the bar for students from group B."*
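To see that mechanism concretely, here is a minimal simulation sketch. Everything in it is an illustrative assumption rather than anything taken from the paper: the feature names (a GPA that both the officer and the algorithm observe, and an interview signal that only the officer sees), the thresholds, and the logistic model. The key ingredient is that the officer selects on a signal the algorithm never sees, so admitted students from the group facing the higher bar are positively selected.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200_000

# Two groups with identical true-ability distributions.
group_b = rng.integers(0, 2, n)                # 0 = group A, 1 = group B
ability = rng.normal(0.0, 1.0, n)

gpa = ability + rng.normal(0.0, 1.0, n)        # seen by officer AND algorithm
interview = ability + rng.normal(0.0, 1.0, n)  # seen by the officer only

# Biased officer: a higher admission bar for group A.
bar = np.where(group_b == 0, 0.5, -0.5)
admitted = (gpa + interview) / 2 > bar

# On-campus performance is fairly measured: it depends on ability, not group.
success = (ability + rng.normal(0.0, 1.0, n) > 0).astype(int)

# Admitted group A students are more strongly selected, so they look stronger:
print("success rate of admitted A:", success[admitted & (group_b == 0)].mean())
print("success rate of admitted B:", success[admitted & (group_b == 1)].mean())

# Train only on admitted students, using features the algorithm can see.
X = np.column_stack([gpa, group_b])
model = LogisticRegression().fit(X[admitted], success[admitted])
print("coefficient on group B:", model.coef_[0][1])
# The group-B coefficient comes out negative: at the same GPA, the model
# rates group B below group A -- in effect raising the bar for group B.
```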

Here's the paper itself, and its abstract:

Bias In, Bias Out? Evaluating the Folk Wisdom
Ashesh Rambachan, Jonathan Roth

Abstract: We evaluate the folk wisdom that algorithms trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so bias arises due to selection into the training data. In our baseline model, the more biased the decision-maker is against a group, the more the algorithm favors that group. We refer to this phenomenon as "algorithmic affirmative action." We then clarify the conditions that give rise to algorithmic affirmative action. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
**********
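The abstract's caveat about "the label used in training" is worth unpacking: if the algorithm is trained to predict the biased officer's decision itself, rather than a fairly measured outcome, it learns the bias rather than reversing it. Here is a sketch of that contrast, continuing the same illustrative setup as above (again, my assumptions, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200_000

group_b = rng.integers(0, 2, n)                # 0 = group A, 1 = group B
ability = rng.normal(0.0, 1.0, n)
gpa = ability + rng.normal(0.0, 1.0, n)
interview = ability + rng.normal(0.0, 1.0, n)  # officer-only signal
admitted = (gpa + interview) / 2 > np.where(group_b == 0, 0.5, -0.5)
success = (ability + rng.normal(0.0, 1.0, n) > 0).astype(int)

X = np.column_stack([gpa, group_b])

# Label = the officer's own decision (observed for every applicant):
# the model reproduces the group-dependent bar, i.e. it inherits the bias.
inherit = LogisticRegression().fit(X, admitted.astype(int))

# Label = fairly measured outcome (observed only for admitted students):
# selection makes admitted group A look stronger, i.e. the bias reverses.
reverse = LogisticRegression().fit(X[admitted], success[admitted])

print("group-B coefficient, decision label:", inherit.coef_[0][1])  # positive
print("group-B coefficient, outcome label: ", reverse.coef_[0][1])  # negative
```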

* I'm reminded of the saying "To get the same reward as a man, a woman has to be twice as good. Fortunately, that's not hard..."
