1. 14:00 — Improved Approximation Algorithms for the Joint Replenishment Problem with Outliers, and with Fairness Constraints

The joint replenishment problem (JRP) is a classical inventory management problem. We consider a natural generalization with outliers, where we are allowed to reject (that is, not serve) a subset of demand points. In this paper, we are motivated by issues of fairness: if we do not serve all of the demands, we wish to "spread out the pain" in a balanced way among customers, communities, or any specified market segmentation. One approach is to constrain the rejections allowed, with separate bounds for each given customer. In our most general setting, we consider a set of C features, where each demand point has an associated rejection cost for each feature, and we are given a bound on the total rejection cost allowed for each feature. This generalizes a model of fairness introduced in earlier work on the Colorful k-Center problem, in which (analogously) each demand point has a given color and we bound the number of rejections in each color class.
We give the first constant-factor approximation algorithms for the fairness-constrained JRP with a constant number of features; specifically, we give a 2.86-approximation algorithm in this case. Even for the special case in which we only bound the total (weighted) number of outliers, this performance guarantee improves upon the bounds previously known. Our approach is an LP-based algorithm that splits the instance into two subinstances: one is solved by a novel iterative rounding approach and the other by pipage-based rounding. The standard LP relaxation has an unbounded integrality gap, and hence another key element of our algorithm is to strengthen the relaxation by correctly guessing key attributes of the optimal solution; these guesses are sufficiently concise that we can enumerate over all of them in time polynomial in the input size, albeit exponential in C, the number of features.
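To make the starting point concrete, the following is a minimal sketch, on a made-up instance, of the natural LP relaxation for JRP with outliers and per-feature rejection budgets, written with the open-source PuLP modeler; all data and variable names are illustrative, and the guessing, iterative-rounding, and pipage steps from the talk are not reproduced here.

    import pulp

    # Hypothetical instance: T time periods, two items, three demand points.
    T = 4
    items = ["A", "B"]
    K0 = 10.0                        # joint ordering cost per order occasion
    K = {"A": 4.0, "B": 6.0}         # per-item ordering costs
    demands = [("A", 2), ("A", 4), ("B", 3)]                        # (item, due time)
    h = {d: {s: 0.5 * (d[1] - s) for s in range(1, d[1] + 1)} for d in demands}  # holding costs
    features = ["urban", "rural"]                                   # C = 2 fairness features
    rej_cost = {d: {f: 1.0 for f in features} for d in demands}     # rejection cost per feature
    budget = {"urban": 1.0, "rural": 1.0}                           # per-feature rejection budgets

    lp = pulp.LpProblem("JRP_with_outliers", pulp.LpMinimize)
    y = {s: pulp.LpVariable(f"y_{s}", 0, 1) for s in range(1, T + 1)}          # joint order at time s
    x = {(i, s): pulp.LpVariable(f"x_{i}_{s}", 0, 1) for i in items for s in range(1, T + 1)}
    w = {(d, s): pulp.LpVariable(f"w_{d[0]}_{d[1]}_{s}", 0, 1) for d in demands for s in h[d]}
    z = {d: pulp.LpVariable(f"z_{d[0]}_{d[1]}", 0, 1) for d in demands}        # reject demand d

    lp += (pulp.lpSum(K0 * y[s] for s in y)
           + pulp.lpSum(K[i] * x[i, s] for (i, s) in x)
           + pulp.lpSum(h[d][s] * w[d, s] for (d, s) in w))

    for d in demands:                                 # serve each demand point or reject it
        lp += pulp.lpSum(w[d, s] for s in h[d]) + z[d] >= 1
    for (d, s) in w:                                  # serving requires an item order at that time
        lp += w[d, s] <= x[d[0], s]
    for (i, s) in x:                                  # an item order requires a joint order
        lp += x[i, s] <= y[s]
    for f in features:                                # fairness: rejection budget per feature
        lp += pulp.lpSum(rej_cost[d][f] * z[d] for d in demands) <= budget[f]

    lp.solve(pulp.PULP_CBC_CMD(msg=False))
    print("LP lower bound:", pulp.value(lp.objective))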

2. 14:30 — Assortment Optimization with Visibility Constraints

Motivated by applications in e-retail and online advertising, we study the problem of assortment optimization under visibility constraints, which we refer to as APV. We are given a universe of substitutable products and a stream of T customers. The objective is to determine the optimal assortment of products to offer to each customer in order to maximize the total expected revenue, subject to the constraint that each product must be shown to a minimum number of customers. The minimum display requirement for each product is given exogenously, and we refer to these requirements as visibility constraints. We assume that customer choices follow the Multinomial Logit (MNL) model. We characterize the structure of optimal assortments and present an efficient polynomial-time algorithm for solving APV. To accomplish this, we introduce a novel function called the “expanded revenue” of an assortment and establish its supermodularity; our algorithm takes advantage of this structural property. Additionally, we show that APV can be formulated as a compact linear program. Next, we consider APV with cardinality constraints, which we prove to be strongly NP-hard and to admit no Fully Polynomial-Time Approximation Scheme (FPTAS), even in the special case where all products have identical prices. We then devise a Polynomial-Time Approximation Scheme (PTAS) for APV under cardinality constraints with identical prices. Our algorithm first linearizes the objective function through a carefully crafted guessing procedure, then solves the linearized program, and finally randomly rounds the obtained solution to derive a near-optimal solution for APV with cardinality constraints. We also examine the revenue loss resulting from enforcing the visibility constraints, compared to the unconstrained version of the problem. To offset this loss, we propose a novel strategy for distributing the loss among the products subject to visibility constraints: each vendor is charged an amount proportional to their product's contribution to the revenue loss. Finally, we present numerical experiments illustrating these results.
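As a small illustration of the choice model and of the unconstrained benchmark used in the revenue-loss comparison, the sketch below (with made-up prices and MNL weights) computes the expected revenue of an assortment under MNL and scans revenue-ordered assortments, which are known to contain an optimal solution when there are no visibility or cardinality constraints; the “expanded revenue” function and the APV algorithm themselves are not reproduced here.

    # Hypothetical prices r_i and MNL preference weights v_i; V0 is the no-purchase weight.
    prices = {1: 10.0, 2: 8.0, 3: 5.0, 4: 3.0}
    weights = {1: 0.3, 2: 0.7, 3: 1.2, 4: 2.0}
    V0 = 1.0

    def mnl_revenue(assortment):
        """Expected revenue R(S) = sum_{i in S} r_i * v_i / (V0 + sum_{j in S} v_j)."""
        denom = V0 + sum(weights[i] for i in assortment)
        return sum(prices[i] * weights[i] for i in assortment) / denom

    # Unconstrained benchmark: some price-ordered prefix ("revenue-ordered assortment") is optimal.
    by_price = sorted(prices, key=prices.get, reverse=True)
    best = max((tuple(by_price[:k]) for k in range(1, len(by_price) + 1)), key=mnl_revenue)
    print("best unconstrained assortment:", best, "expected revenue:", round(mnl_revenue(best), 3))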

3. 15:00 — Fairness, Randomization, and Approximation Algorithms

We discuss approaches to fair allocations/service in combinatorial optimization, wherein approximation and randomization will both be crucial. Clustering problems will be among those studied through the lens of fairness; specifically, we will show how randomized-rounding algorithms for linear-programming relaxations can lead to useful (per-user) probabilistic guarantees for a range of problems.
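As a generic illustration of this kind of per-user guarantee (not any specific result from the talk), the sketch below rounds a made-up fractional covering solution: each set is kept independently with probability min(1, alpha * x_S), so any user whose fractional coverage is at least 1 is covered with probability at least 1 - exp(-alpha).

    import math
    import random

    # Hypothetical fractional set-cover solution: each user lies in two sets with x = 0.5.
    sets = {"S1": {"u1", "u2"}, "S2": {"u2", "u3"}, "S3": {"u1", "u3"}}
    x = {"S1": 0.5, "S2": 0.5, "S3": 0.5}
    alpha = 1.5                                   # scaling factor for the rounding

    def round_once(rng):
        """Keep each set independently with probability min(1, alpha * x_S)."""
        return [S for S in sets if rng.random() < min(1.0, alpha * x[S])]

    rng = random.Random(0)
    trials = 20000
    covered = {u: 0 for members in sets.values() for u in members}
    for _ in range(trials):
        chosen = round_once(rng)
        for u in set().union(*(sets[S] for S in chosen)):
            covered[u] += 1

    guarantee = 1 - math.exp(-alpha)              # per-user bound when fractional coverage >= 1
    for u in sorted(covered):
        print(f"user {u}: empirical coverage {covered[u] / trials:.3f}, guaranteed >= {guarantee:.3f}")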

4. 15:30 — Algorithmic Tools for Redistricting: Fairness via Analytics

The American winner-take-all congressional district system empowers politicians to engineer electoral outcomes by manipulating district boundaries. To date, computational solutions mostly focus on drawing unbiased maps by ignoring political and demographic input, and instead simply optimize for compactness and other related metrics. However, we maintain that this is a flawed approach because compactness and fairness are orthogonal qualities; to achieve a meaningful notion of fairness, one needs to model political and demographic considerations, using historical data.

We will discuss a series of papers that explore and develop this perspective. In the first (joint with Wes Gurnee), we present a scalable approach to explicitly optimize for arbitrary piecewise-linear definitions of fairness; this employs a stochastic hierarchical decomposition approach to produce an exponential number of distinct district plans that can be optimized via a standard set-partitioning integer programming formulation. This enables a large-scale ensemble study of congressional districts, providing insights into the range of possible expected outcomes and the implications of this range for potential definitions of fairness. Further work extending this (joint with Julia Allen & Wes Gurnee) shows that many additional real-world constraints can easily be incorporated into this framework (such as minimal county splits, as was recently required in Alabama legislation in response to the US Supreme Court decision Allen v. Milligan). In another paper (joint with Nikhil Garg, Wes Gurnee, and David Rothschild), we study the design of multi-member districts (MMDs) in which each district elects multiple representatives, potentially through a non-winner-take-all voting rule (as was proposed in H.R. 4000). We carry out large-scale analyses for the U.S. House of Representatives under MMDs with different social choice functions, on algorithmically generated maps optimized for either partisan benefit or proportionality. We find that with three-member districts using Single Transferable Vote, fairness-minded independent commissions can achieve proportional outcomes in every state (up to rounding), and that this would significantly curtail the power of advantage-seeking partisans to gerrymander.
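To give a flavor of the set-partitioning step mentioned above, the toy sketch below (hypothetical units, candidate districts, and scores, modeled with the open-source PuLP library) selects k pairwise-disjoint candidate districts that together cover every unit exactly once while maximizing an illustrative fairness-style score; the stochastic hierarchical decomposition that generates the ensemble, and the actual fairness metrics, are not reproduced here.

    import pulp

    units = ["u1", "u2", "u3", "u4", "u5", "u6"]
    # Candidate districts as (set of units, score); in the talk these come from a
    # stochastic hierarchical decomposition and the score from a fairness metric.
    candidates = [
        ({"u1", "u2", "u3"}, 1.0), ({"u4", "u5", "u6"}, 0.5),
        ({"u1", "u2"}, 0.2), ({"u3", "u4"}, 0.9), ({"u5", "u6"}, 0.4),
        ({"u1", "u4"}, 0.3), ({"u2", "u5"}, 0.6), ({"u3", "u6"}, 0.7),
    ]
    k = 3   # number of districts in the plan

    ip = pulp.LpProblem("district_set_partitioning", pulp.LpMaximize)
    pick = [pulp.LpVariable(f"pick_{j}", cat="Binary") for j in range(len(candidates))]
    ip += pulp.lpSum(score * pick[j] for j, (_, score) in enumerate(candidates))
    for u in units:                      # every unit belongs to exactly one chosen district
        ip += pulp.lpSum(pick[j] for j, (dist, _) in enumerate(candidates) if u in dist) == 1
    ip += pulp.lpSum(pick) == k          # exactly k districts in the plan

    ip.solve(pulp.PULP_CBC_CMD(msg=False))
    plan = [candidates[j][0] for j in range(len(candidates)) if pick[j].value() > 0.5]
    print("chosen plan:", plan)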