
Sunday, June 10, 2018

Computing fairness

One of the areas in which computer science and economics touch each other more than a bit is in understanding what constitutes a fair way of allocating scarce resources.  This was the subject of a recent Northwestern CS workshop:

Quarterly Theory Workshop: Algorithmic Fairness

"Synopsis: As algorithmic systems have increasingly been deployed to make consequential decisions, it is becoming urgent that we grapple with the problem of (un)fairness and discrimination. These are delicate issues — to think rigorously about them, we first need to figure out how to formally define what we want, and then reason about how we might go about achieving our goals algorithmically — and what tradeoffs we will have to manage. This workshop focuses on recent advances in the theory of algorithmic fairness: both on foundational definitional questions, and on algorithmic techniques. The speakers are Nicole Immorlica (MSR), Jon Kleinberg (Cornell), Omer Reingold (Stanford), and Aaron Roth (U. of Pennsylvania).
The technical program of this workshop is organized by Aaron Roth and Jason Hartline."

These were the talks:
Speaker: Jon Kleinberg
Title: Algorithmic Fairness Criteria for Evaluating Individuals and Sets
Abstract: Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates, and discuss recent research establishing inherent trade-offs between these conditions. We also consider a variety of methods for promoting fairness and related notions for classification and selection problems that involve sets rather than just individuals.
This talk is based on joint work with Sendhil Mullainathan, Manish Raghavan, and Maithra Raghu.
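To make the tension concrete, here is a minimal sketch in Python (entirely hypothetical scores, labels, groups, and threshold, not anything from the talk) that computes two of the competing criteria for a thresholded risk score: calibration within groups, proxied here by positive predictive value, and balance of false positive rates. The trade-off results Kleinberg describes say that, outside of degenerate cases such as equal base rates or a perfect predictor, criteria like these cannot all be equalized across groups at once.

import numpy as np

# Hypothetical risk scores, outcomes, and group labels, for illustration only.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, size=1000)
labels = (rng.uniform(0, 1, size=1000) < scores).astype(int)
groups = rng.integers(0, 2, size=1000)
predicted_positive = scores >= 0.5          # a simple thresholded classifier

for g in (0, 1):
    in_g = groups == g
    pos = in_g & predicted_positive
    neg = in_g & (labels == 0)
    # Calibration proxy: among those the classifier flags, how often is the outcome positive?
    ppv = labels[pos].mean() if pos.any() else float("nan")
    # Error-rate balance: how often are true negatives wrongly flagged?
    fpr = predicted_positive[neg].mean() if neg.any() else float("nan")
    print(f"group {g}: PPV = {ppv:.2f}, FPR = {fpr:.2f}")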
Speaker: Aaron Roth
Title: Preventing Fairness Gerrymandering in Machine Learning
Abstract: The most prevalent notions of fairness in machine learning are statistical notions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). Individual notions of fairness avoid this problem, but at the cost of needing to make often unrealistic assumptions about the setting. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. We will show how to achieve this in polynomial time, under the assumption that we are given oracles for solving the unconstrained learning problems (for both the set of classifiers to be learned, and the set of subgroups to be protected). Our algorithm casts the problem as solving for an equilibrium in a zero sum game, and observes that learning oracles are enough to efficiently play no-regret learning dynamics. We then demonstrate experimentally that our proposed algorithms are practical, by investigating their performance on several real datasets when instantiated with learning heuristics in place of oracles.
This talk is based on joint work with Michael Kearns, Seth Neel, and Steven Wu.
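The zero-sum game formulation is easier to see in code. Below is a toy Python illustration, not the authors' implementation: the Auditor runs multiplicative weights (a no-regret algorithm) over a small, hand-picked set of subgroups, the Learner best-responds over a small set of threshold classifiers, and the payoff is a hypothetical Lagrangian of classification error plus a penalty on the chosen subgroup's false positive rate gap. In the actual algorithm the strategy spaces are exponentially large and the best responses are delegated to the learning oracles mentioned in the abstract; everything here is scaled down for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 1, n)                       # a single feature (hypothetical)
a = rng.integers(0, 2, (n, 2))                 # two binary protected attributes
y = (rng.uniform(0, 1, n) < 0.3 + 0.4 * x).astype(int)

classifiers = [lambda x, t=t: x >= t for t in (0.3, 0.5, 0.7)]   # Learner's pure strategies
subgroups = [lambda a: a[:, 0] == 1,                             # Auditor's pure strategies,
             lambda a: a[:, 1] == 1,                             # including an intersectional
             lambda a: (a[:, 0] == 1) & (a[:, 1] == 0)]          # subgroup

def fpr(pred, mask):
    neg = (y == 0) & mask
    return pred[neg].mean() if neg.any() else 0.0

def payoff(ci, gi):
    # Hypothetical Lagrangian: error plus a penalty (weight 10 chosen arbitrarily)
    # on the gap between the subgroup's false positive rate and the overall one.
    pred = classifiers[ci](x)
    gap = abs(fpr(pred, subgroups[gi](a)) - fpr(pred, np.ones(n, bool)))
    return (pred != y).mean() + 10.0 * gap

w = np.ones(len(subgroups))                    # Auditor's multiplicative weights
counts = np.zeros(len(classifiers))            # Learner's empirical play
eta, T = 0.5, 200
for _ in range(T):
    p = w / w.sum()
    ci = int(np.argmin([sum(p[g] * payoff(c, g) for g in range(len(subgroups)))
                        for c in range(len(classifiers))]))   # Learner best-responds
    counts[ci] += 1
    w *= np.exp(eta * np.array([payoff(ci, g) for g in range(len(subgroups))]))
    w /= w.sum()                               # normalize to avoid overflow
print("Learner's time-averaged mixture over thresholds (0.3, 0.5, 0.7):", counts / T)

The time-averaged strategies of the two players approximate an equilibrium of this small game, which is the structure the talk's algorithm exploits at scale.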
Speaker: Omer Reingold
Title: On Algorithmic Fairness Between Groups and Individuals 
Abstract: As algorithms increasingly inform and influence decisions made about individuals, it is increasingly important to address concerns that these algorithms might be discriminatory. Historically, definitions of fairness fell into one of two extremes: (1) broad-strokes statistical guarantees; (2) individual-level protections. Statistical guarantees tend to be much easier to satisfy (information and complexity theoretically), but tend to be much weaker in the protections they provide. We will discuss two recent works towards bridging the gap between statistical and individual protections. These works provide efficient learning algorithms that also ensure every subpopulation within some rich class is protected (according to some fairness notion).
One of the notions we will discuss is that of multicalibration — a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. The second notion studies fair classification within the versatile framework of Dwork et al. [ITCS 2012], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike previous works on metric-based fairness, we study the setting where a learning algorithm does not know the entire metric, but can query similarities between particular pairs of individuals a bounded number of times. We note that we do not make any assumption on the metric and are still able to obtain meaningful protection from systemic discrimination that we refer to as “metric multifairness.”
The talk will be focused on the various ways in which algorithmic fairness can be defined but will also allude to some of the ways in which it can be obtained. It is based on joint works with Úrsula Hébert-Johnson, Michael P. Kim and Guy Rothblum.
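Multicalibration is perhaps easiest to grasp as something one can check: for every subpopulation in some rich collection C, and on every bucket of predicted values, the average prediction should be close to the observed frequency. Here is a minimal Python sketch of such a check, with a made-up predictor and a tiny, hand-written collection C; the algorithms in the papers, which construct a multicalibrated predictor rather than merely audit one, are of course more involved.

import numpy as np

rng = np.random.default_rng(2)
n = 5000
a = rng.integers(0, 2, (n, 2))                    # two binary attributes (hypothetical)
p_true = 0.2 + 0.3 * a[:, 0] + 0.2 * a[:, 1]      # true outcome probabilities
y = (rng.uniform(0, 1, n) < p_true).astype(int)
pred = np.clip(p_true + rng.normal(0, 0.05, n), 0, 1)   # a predictor to audit

C = {
    "attr0 = 1": a[:, 0] == 1,
    "attr1 = 1": a[:, 1] == 1,
    "attr0 = 1 and attr1 = 0": (a[:, 0] == 1) & (a[:, 1] == 0),
}
buckets = np.linspace(0, 1, 11)           # ten prediction buckets
alpha = 0.05                              # tolerated calibration error

for name, mask in C.items():
    worst = 0.0
    for lo, hi in zip(buckets[:-1], buckets[1:]):
        cell = mask & (pred >= lo) & (pred < hi)
        if cell.sum() < 50:               # ignore tiny cells in this sketch
            continue
        worst = max(worst, abs(pred[cell].mean() - y[cell].mean()))
    print(f"{name}: worst bucket gap = {worst:.3f} "
          f"({'OK' if worst <= alpha else 'violates'} tolerance alpha = {alpha})")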
Speaker: Nicole Immorlica
Title: Fair Classification and Learning
Abstract:  Classification, learning, and optimization algorithms are increasingly used to make decisions of social consequence. An automated resume reader classifies applicants, deciding which are worth interviewing. A learning algorithm estimates the impact of a drug on health outcomes. A facility location algorithm chooses where to locate new warehouses. For these algorithms to be useful, practitioners strive to maximize their accuracy and optimality. However, this can result in differential treatment across population subgroups, resulting in a sense of inequity. Fairness metrics are used to measure the degree of inequity. In this talk, we explore how to jointly optimize fairness and accuracy using, as a black box, existing algorithms that optimize accuracy or optimality. Our first solution is tailored to classification tasks and proposes decoupling the classification task, using different classifiers for each subgroup. Our second solution can optimize any smooth and continuous function of accuracy and fairness in classification, learning, and optimization settings. It further has the advantage that subgroup membership, while used to train, is not used at run time. This additional power comes with the cost of added computational complexity in training.
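The "decoupling" idea in the first solution is simple to sketch: instead of one shared classifier, fit a separate classifier for each subgroup and route each individual to its own group's classifier. The toy Python example below uses made-up data and a deliberately crude model class (a single threshold on one feature); its only point is that when groups differ, per-group classifiers can fit each group at least as well as one shared rule, at the cost of using group membership at prediction time, which is exactly the cost the second solution avoids.

import numpy as np

rng = np.random.default_rng(3)
n = 4000
g = rng.integers(0, 2, n)                          # subgroup membership (hypothetical)
x = rng.normal(loc=0.8 * g, scale=1.0, size=n)     # feature distribution differs by group
y = (x + rng.normal(0, 1.0, n) > 0.4 + 0.4 * g).astype(int)   # and so does the label rule

thresholds = np.linspace(-2, 2, 81)
def best_threshold(mask):
    # Pick the threshold with the smallest training error on the masked data.
    errs = [((x[mask] >= t) != y[mask]).mean() for t in thresholds]
    return thresholds[int(np.argmin(errs))]

t_shared = best_threshold(np.ones(n, bool))        # one classifier for everyone
t_group = {k: best_threshold(g == k) for k in (0, 1)}

shared_pred = (x >= t_shared).astype(int)
decoupled_pred = np.where(g == 0, x >= t_group[0], x >= t_group[1]).astype(int)
for k in (0, 1):
    m = g == k
    print(f"group {k}: shared error = {(shared_pred[m] != y[m]).mean():.3f}, "
          f"decoupled error = {(decoupled_pred[m] != y[m]).mean():.3f}")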

Monday, February 19, 2018

Making algorithms fair

Fairness is elusive, even in algorithms. "Color blindness" doesn't do the trick, since minority populations may differ in other ways from majority populations.
My attention was drawn to this news story about ongoing work by U. Penn computer scientists: Combatting ‘Fairness Gerrymandering’ with Socially Conscious Algorithms

And here's the article on which the story is based:

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to intentional or inadvertent "fairness gerrymandering", in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but raises several computational challenges. It is no longer clear how to audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
We then derive two algorithms that provably converge to the best fair classifier, given access to oracles which can solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner and an Auditor. Our first algorithm provably converges in a polynomial number of steps. Our second algorithm enjoys only provably asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the simpler algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
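Since the paper reduces auditing to (weak agnostic) learning, and in its experiments uses linear regression as a heuristic oracle, the auditing step can be sketched very simply. The Python toy below is not the authors' code: it rigs a classifier whose false positives concentrate on one intersectional subgroup, regresses the false-positive indicator on the protected attributes over the negative examples, and reads a candidate subgroup off the fitted values.

import numpy as np

rng = np.random.default_rng(4)
n = 5000
A = rng.integers(0, 2, (n, 3)).astype(float)      # three protected attributes (hypothetical)
y = rng.integers(0, 2, n)                         # true labels (hypothetical)

# A classifier to audit, rigged so that false positives concentrate on the
# intersectional subgroup A0 = 1 and A1 = 1.
bad = (A[:, 0] == 1) & (A[:, 1] == 1)
pred = np.where(y == 1, 1, np.where(bad, rng.uniform(0, 1, n) < 0.5,
                                         rng.uniform(0, 1, n) < 0.1)).astype(int)

neg = y == 0                                      # false positives live here
fp = (pred == 1) & neg

# Heuristic "oracle": least-squares regression of the false-positive indicator
# on the protected attributes, over the negative examples only.
X = np.column_stack([np.ones(neg.sum()), A[neg]])
coef, *_ = np.linalg.lstsq(X, fp[neg].astype(float), rcond=None)

# Candidate subgroup: negatives whose fitted false-positive risk is above average.
fitted = X @ coef
candidate = fitted > fitted.mean()

gap = fp[neg][candidate].mean() - fp[neg].mean()  # subgroup FPR minus overall FPR
size = candidate.mean()
print(f"candidate subgroup: share of negatives = {size:.2f}, FPR gap = {gap:.2f}")
# Roughly speaking, the auditor would flag the classifier when the subgroup's
# size times its false-positive-rate gap exceeds a chosen unfairness tolerance.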

Tuesday, August 22, 2017

Global Kidney Exchange: a reply to some skeptics, forthcoming in the AJT

Our recent paper in the American Journal of Transplantation about the first Global Kidney Exchange chain was accompanied by a skeptical editorial, and has now also drawn some letters to the editor.  Our reply is forthcoming, and is now online here:

Global Kidney Exchange: Financially Incompatible Pairs Are Not Transplantable Compatible Pairs
Authors
M. A. Rees, S. Paloyo, A. E. Roth, K. D. Krawiec, O. Ekwenna, C. L. Marsh, A. J. Wenig, T. B. Dunn.   Accepted manuscript online: 31 July 2017

Here is the full text pdf file.

Here are the first sentences and the main paragraph.

"Honest debate makes ideas better; we appreciate our colleagues’ engagement. We agree with Wiseman and Gill that Global Kidney Exchange (GKE) must be conducted in an ethical manner that is sensitive to the possibilities of commodification and exploitation and, that it is important to be both careful with and transparent about how patient-donor pairs are selected from developing countries.1,2 We further agree that GKE should continue to be run in a way that enhances rather than competes with local medical services. However, Wiseman and Gill approached GKE from their American and Canadian perspective of near universal access to healthcare for end stage renal disease. They view GKE through a lens of commodification and exploitation...
...
"Let us be clear: without GKE the Filipino husband was never going to receive his spouse’s kidney. Without GKE, the husband was going to die, the wife was going to lose her spouse, and their son was going to be fatherless. That is exactly how the story was going to end without GKE. The goal of GKE is to change this fate for emotionally-related pairs referred by our medical collaborators in their home country when financial barriers prevent transplantation. Our selection process aims to provide a transplant for every GKE-eligible pair that creates enough savings to pay for a GKE transplant.  It is not scalable to propose that GKE could take place without consideration of the savings produced by transplanting patients in the United States. There are not unlimited philanthropic resources available to overcome the needs of patients facing financial barriers to transplantation. By creating and utilizing a portion of the savings produced by reducing the cost of dialysis in the United States through accelerated access to renal transplantation, GKE becomes scalable. However, the net savings produced by GKE must exceed the overall cost in order for US-based healthcare payers to participate. Thus, if we want to achieve GKE’s first goal: to help impoverished patients by overcoming financial barriers to transplantation, GKE must take account of the savings produced. We have now performed four GKE transplants—all funded by philanthropy. We simply evaluated every patient who presented for evaluation and moved forward with every instance where the projected savings from accelerated transplantation of American incompatible pairs in the Alliance for Paired Donation (APD) pool exceeded the cost of the GKE by an amount greater than the anticipated cost. To scale this concept, we are working to produce an ethical and legal process, built on sound business principles, so that it can scale to help as many rich and poor patients as possible. In this first case, an easy-to-match unsensitized blood type A GKE candidate with a blood type O donor easily produced more transplants/savings in the APD pool than without their participation.  No alternative existed for this Filipino pair and millions more like them.3 GKE did not exploit this Filipino couple—it provided the mechanism for the wife to literally save her husband’s life. They could not afford dialysis. Two months prior to travelling to the US and after their identification and evaluation for participation in GKE, their Filipino physician called to say that if the APD did not pay for the husband’s continued dialysis in the Philippines, that he was going to die as no additional funds were available to pay for dialysis. At a societal level, did American patients with access to dialysis really disproportionally benefit from the APD’s “exploitation” of this patient by paying for two months of dialysis in the Philippines? When the husband lived instead of dying, was the Filipino donor’s kidney really undervalued? We ask Wiseman and Gill to seriously consider whether the Filipino wife feels she disproportionately benefited American patients rather than her own family. For three years on Father’s Day the couple’s child has written our team to thank us for saving his daddy’s life. 
Two and a half years after this first GKE transplant, both the Filipino donor and recipient have normal renal function, countering the editorial’s accusation that “limited post-transplant care provided to the Filipino recipient were probably inequitable.” While the gratifying success of the first case does not guarantee the same outcome for all future patients, it does demonstrate how GKE—even if inequitable—is able to add years of life to patients who would have died without it."
******

There's a lot more that can be said, and I have an inkling that I'm going to have the opportunity to say a lot more, as there are going to be more critiques and objections.  Issues of repugnance deserve to be taken seriously.  The many positive responses (like this and this from Mexico) that GKE has received give me cautious hope that we'll be able to move forward in a way that addresses the chief concerns and commands broad support.  There are lots of families in which someone with kidney failure could have their life saved by access to a transplant through kidney exchange.
**********
Here are all my posts on Global Kidney Exchange

Monday, November 28, 2016

Fairness for Digital Infrastructure conference, January 19-20, 2017, at Penn

Fairness for Digital Infrastructure, January 19-20, 2017, UPenn, Philadelphia PA

"A large part of our digital infrastructure is designed to automate decision making, and ideally should improve economic efficiency. Automated decisions now govern, among many other things, whether we are approved for credit cards, what advertisements we are shown, and what search results we see. Algorithms determine the government’s perception of an individual’s risk (e.g. at airport security) or trustworthiness (e.g. in evaluating recidivism risk for parole decisions). The aim of the workshop is to better understand issues surrounding “unfairness” associated with the use of machine learning and automated decision making. This includes the frictions that are the cause of such inadvertent unfairness, and novel technical solutions to solve these problems.
This workshop will take place at the University of Pennsylvania in Philadelphia on January 19th and 20th 2017.
Co-organizers: Sampath Kannan, Jamie Morgenstern, Mallesh Pai, Aaron Roth, Rakesh Vohra"


Sunday, November 27, 2016

An interview with computer scientist Cynthia Dwork

Quanta Magazine, a publication of the Simons Foundation, has an interview with Cynthia Dwork on differential privacy and fairness, among other things.

How to Force Our Machines to Play Fair
The computer scientist Cynthia Dwork takes abstract concepts like privacy and fairness and adapts them into machine code for the algorithmic age.

Here are some earlier news stories about Apple's introduction of differential privacy to the iPhone, which I've been following for a number of reasons.

From TechCrunch: What Apple’s differential privacy means for your data and the future of machine learning

From Wired: Apple’s ‘Differential Privacy’ Is About Collecting Your Data—But Not ​Your Data

Apple's differential privacy analyzes the group, protects the individual
Posted on June 21, 2016
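For readers wondering what differential privacy looks like in practice, here is a minimal Python sketch of the textbook Laplace mechanism for a counting query. It is not Apple's implementation, which adds noise on the device itself (so-called local differential privacy), but it illustrates the "analyze the group, protect the individual" idea: the count over ten thousand hypothetical users stays accurate, while changing any single record shifts the answer's distribution only slightly.

import numpy as np

rng = np.random.default_rng(5)
data = rng.integers(0, 2, 10_000)            # hypothetical: did each user click?

def private_count(values, epsilon):
    # A count has sensitivity 1, so Laplace noise with scale 1/epsilon
    # gives epsilon-differential privacy for this single query.
    return values.sum() + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = data.sum()
for eps in (0.1, 1.0):
    print(f"epsilon = {eps}: true count = {true_count}, "
          f"noisy count = {private_count(data, eps):.1f}")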