Past Event

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

December 10, 2019
4:00 PM - 5:30 PM
School of Social Work, Room C05, 1255 Amsterdam Ave., New York, NY 10027

The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes — like race, gender, and their proxies — are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that, conditional on risk estimates, outcomes are independent of protected attributes.

In this talk, I'll show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups these criteria were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, I'll argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, it will typically violate both anti-classification and classification parity.

In practice, constructing suitable risk estimates requires significant effort: one must carefully define and measure the targets of prediction to avoid retrenching biases in the data. Importantly, these difficulties cannot generally be addressed by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundations of fair machine learning, I hope to help researchers and practitioners productively advance the area.
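The tension the abstract describes can be seen in a small simulation: when a risk score is calibrated by construction and decisions come from a single threshold, false positive rates will generally differ across groups whose risk distributions differ. The sketch below is purely illustrative (the synthetic groups, Beta-distributed scores, and 0.5 threshold are all assumptions made for the example), not the speaker's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: a binary protected attribute, risk scores whose
# distribution differs by group, and outcomes drawn from those scores,
# so the scores are calibrated by construction.
n = 10_000
group = rng.integers(0, 2, size=n)              # protected attribute: 0 or 1
risk = rng.beta(2 + group, 5 - group, size=n)   # group 1 is riskier on average
outcome = rng.binomial(1, risk)                 # P(outcome = 1 | risk = r) = r
decision = risk >= 0.5                          # treat similarly risky people similarly

for g in (0, 1):
    mask = group == g
    # Classification parity asks whether these error rates match across groups.
    fpr = decision[mask & (outcome == 0)].mean()
    fnr = (~decision)[mask & (outcome == 1)].mean()
    # Calibration asks whether outcome rates match risk scores within each group.
    bucket = mask & (risk >= 0.4) & (risk < 0.6)
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}, "
          f"P(outcome | risk in [0.4, 0.6)) = {outcome[bucket].mean():.2f}")
```

Because outcomes are drawn directly from the scores, calibration holds in both groups, yet the printed false positive rates diverge: the group with the higher risk distribution has more of its negatives above the threshold. This is why a threshold rule on accurate risk estimates typically violates classification parity even when nothing about the scores is miscalibrated.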

Presenter Biography

Sharad Goel's primary area of research is computational social science, an emerging discipline at the intersection of computer science, statistics, and the social sciences. He is particularly interested in applying modern computational and statistical techniques to study social and political policies, such as stop-and-frisk, swing voting, filter bubbles, do-not-track, and media bias. Before joining Stanford, he was a senior researcher at Microsoft Research and Yahoo Labs.

Contact Information

Columbia Population Research Center