08:30 — Is There an Accuracy-Simplicity Tradeoff in High-Stakes Decision Domains?

Finding optimal, sparse, accurate models of various forms (such as linear models with integer coefficients, rule lists, and decision trees) is generally NP-hard. Often, we do not know whether the search for a simpler model will be worthwhile, and thus we do not undertake the effort to find one. This talk addresses an important practical question: for which types of datasets would we expect interpretable models to perform as well as black-box models? I will present a mechanism of the data generation process, coupled with choices usually made by the analyst during the learning process, that leads to the existence of simpler-yet-accurate models. This mechanism indicates that such models exist in practice more often than one might expect. 
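As a quick illustration of the question the talk raises, one can check empirically whether a very sparse model already matches a black-box model on a given dataset. The sketch below is only an illustrative comparison under assumed choices (scikit-learn's breast-cancer data, a depth-3 tree versus gradient boosting), not the speaker's method or datasets.

```python
# Illustrative check only: does a very sparse model match a black-box model here?
# Dataset and both models are placeholder choices, not the speaker's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Sparse, interpretable model: a depth-3 decision tree (a handful of rules).
sparse_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
# Black-box reference model.
black_box = GradientBoostingClassifier(random_state=0)

acc_tree = cross_val_score(sparse_tree, X, y, cv=5).mean()
acc_bb = cross_val_score(black_box, X, y, cv=5).mean()
print(f"depth-3 tree accuracy:      {acc_tree:.3f}")
print(f"gradient boosting accuracy: {acc_bb:.3f}")
# A small gap suggests a simpler-yet-accurate model exists for this dataset.
```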

09:00 — Learning Interpretable and Fair Housing Allocation Policy for Individuals Experiencing Homelessness in Los Angeles

We consider the problem of learning an optimal, highly interpretable, and fair policy for allocating scarce housing resources to people experiencing homelessness in Los Angeles. Using administrative data from the Homeless Management Information System, we propose a way to create simple and actionable policies in the form of prescriptive trees that can satisfy arbitrary fairness constraints or other domain-specific requirements that stakeholders care about. When enforcing fairness in outcomes, our policies improve outcomes by up to 2.3\% and enhance fairness by up to 36.57\%. When enforcing fairness in allocation, we achieve up to a 2.3\% better outcome and increase fairness by up to 100\%, i.e., eliminating the discrimination entirely. This paper is based on an ongoing collaboration with the Los Angeles Homeless Services Authority (LAHSA).
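To make the two fairness notions in this abstract concrete (fairness in allocation versus fairness in outcomes), the sketch below computes both gaps on a tiny invented table. The column names and values are assumptions for illustration only; they do not reflect the HMIS schema or the authors' policies.

```python
# Minimal sketch of the two fairness notions on hypothetical policy-evaluation data.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "A", "B"],
    "resource": [1, 0, 0, 1, 1, 0],             # 1 = allocated the scarce housing resource
    "exited_homelessness": [1, 0, 0, 1, 1, 1],   # observed or estimated outcome
})

# Fairness in allocation: gap in the rate at which groups receive the resource.
alloc_rates = df.groupby("group")["resource"].mean()
# Fairness in outcomes: gap in the (estimated) outcome rate across groups.
outcome_rates = df.groupby("group")["exited_homelessness"].mean()

print("allocation gap:", alloc_rates.max() - alloc_rates.min())
print("outcome gap:   ", outcome_rates.max() - outcome_rates.min())
```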

09:30 — The Measure and Mismeasure of Fairness

The field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last decade, several formal, mathematical definitions of fairness have gained prominence. Here we first assemble and categorize these definitions into two broad families: (1) those that constrain the effects of decisions on disparities; and (2) those that constrain the effects of legally protected characteristics, like race and gender, on decisions. We then show, analytically and empirically, that both families of definitions typically result in strongly Pareto-dominated decision policies. For example, in the case of college admissions, adhering to popular formal conceptions of fairness would simultaneously result in lower student-body diversity and a less academically prepared class, relative to what one could attain by explicitly tailoring admissions policies toward desired outcomes. In this sense, requiring that these fairness definitions hold can, perversely, harm the very groups they were designed to protect. In contrast to axiomatic notions of fairness, we argue that the equitable design of algorithms requires grappling with their context-specific consequences, akin to the equitable design of policy. We conclude by listing several open challenges in fair machine learning and offering strategies to ensure algorithms are better aligned with policy goals.
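As a toy illustration of the two families of definitions (and nothing more; it does not reproduce the talk's Pareto-dominance analysis), the sketch below contrasts a group-blind admissions rule, which constrains the effect of the protected attribute on decisions, with per-group thresholds that equalize admission rates, which constrain the effect of decisions on disparities. Scores, groups, and thresholds are synthetic assumptions.

```python
# Toy contrast of the two families of fairness definitions on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)            # protected attribute (0 or 1)
score = rng.normal(loc=group * 0.5, scale=1.0)   # proxy for academic preparedness

# Family 2: constrain the effect of the protected attribute on the decision,
# e.g., a single group-blind admission threshold.
blind_admit = score > 0.8

# Family 1: constrain the effect of decisions on disparities,
# e.g., per-group thresholds chosen so admission rates match (demographic parity).
target_rate = blind_admit.mean()
parity_admit = np.zeros_like(blind_admit)
for g in (0, 1):
    mask = group == g
    thresh = np.quantile(score[mask], 1 - target_rate)
    parity_admit[mask] = score[mask] > thresh

for name, admit in [("group-blind", blind_admit), ("parity", parity_admit)]:
    rates = [admit[group == g].mean() for g in (0, 1)]
    print(f"{name:12s} admit rates by group: {rates[0]:.2f}, {rates[1]:.2f}")
```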

10:00 — f-FERM: Robust and Fair Empirical Risk Minimization via f-divergences

Training and deploying machine learning models that meet fairness criteria for protected groups is fundamental in modern artificial intelligence.
While numerous constraints and regularization terms have been proposed in the literature to promote fairness in machine learning tasks, most of these approaches are not amenable to stochastic optimization due to the complex and nonlinear structure of the constraints and regularizers. Here, the term ``stochastic'' refers to the ability of the algorithm to work with small mini-batches of data. Motivated by this limitation of the existing literature, we present a unified stochastic optimization framework for fair empirical risk minimization based on $f$-divergence measures ($f$-FERM). The proposed stochastic algorithm enjoys theoretical convergence guarantees. In addition, our experiments demonstrate the superior fairness-accuracy tradeoffs offered by $f$-FERM for almost all batch sizes (ranging from full batch down to a batch size of one). Moreover, we show that our framework can be extended to the case where there is a distribution shift from the training to the test data.
Our extension is based on a distributionally robust optimization reformulation of the $f$-FERM objective with $\ell_p$-norm uncertainty sets. In this distributionally robust setting, $f$-FERM again not only enjoys theoretical convergence guarantees but also outperforms existing baselines on tasks involving distribution shift.
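A minimal sketch of the general shape of such an objective, assuming the KL instance of an $f$-divergence penalty (i.e., the mutual information between the prediction and the sensitive attribute) and a naive per-batch plug-in estimate. This is not the authors' algorithm: the paper's contribution is precisely an unbiased stochastic reformulation with convergence guarantees, which this sketch does not implement. The data, model, and weight $\lambda$ below are placeholders.

```python
# Sketch: fair ERM with an f-divergence (here KL) penalty, optimized with mini-batches.
import torch

def kl_fairness_penalty(probs, s):
    """Plug-in estimate of KL( P(yhat, s) || P(yhat) P(s) ) from one mini-batch."""
    eps = 1e-8
    joint = []
    for g in (0, 1):
        mask = (s == g).float()
        joint.append(torch.stack([((1 - probs) * mask).mean(),
                                  (probs * mask).mean()]))
    joint = torch.stack(joint) + eps          # shape (2 groups, 2 predicted labels)
    p_s = joint.sum(dim=1, keepdim=True)
    p_y = joint.sum(dim=0, keepdim=True)
    return (joint * (joint / (p_s * p_y)).log()).sum()

# Synthetic data: features x, labels y, sensitive attribute s.
torch.manual_seed(0)
x = torch.randn(512, 5)
s = (torch.rand(512) < 0.5).long()
y = ((x[:, 0] + 0.5 * s.float() + 0.3 * torch.randn(512)) > 0).float()

model = torch.nn.Linear(5, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1.0                                     # fairness/accuracy tradeoff weight

for step in range(200):
    idx = torch.randint(0, 512, (64,))        # small mini-batch ("stochastic" setting)
    probs = torch.sigmoid(model(x[idx]).squeeze(-1))
    loss = (torch.nn.functional.binary_cross_entropy(probs, y[idx])
            + lam * kl_fairness_penalty(probs, s[idx]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the naive per-batch estimate above is biased for small batches, which is exactly the issue the talk's stochastic formulation addresses; the sketch only conveys what the regularized objective looks like.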