1. 14:00 — Fair Secretaries with Unfair Predictions

Algorithms with predictions, also called learning-augmented algorithms, is a recent framework for decision-making that leverages the power of machine-learned predictions without making any assumption about their quality. The goal is to improve the algorithm's performance when predictions are accurate while maintaining acceptable guarantees when predictions are erroneous. Given the well-documented fairness issues in machine learning, a serious concern is that learning-augmented algorithms could leverage biased predictions and, as a result, make unfair decisions. We study the classical secretary problem in the learning-augmented setting and prove that this concern is indeed legitimate. Existing learning-augmented algorithms for that problem are given predictions about the candidates' values upfront and select a candidate whose value is guaranteed to be a max{Theta(1), 1 - O(epsilon)} approximation to the maximum value, where epsilon is the prediction error. However, these algorithms may pick the best candidate with only vanishing probability. Our main result is a novel algorithm that achieves a max{Theta(1), 1 - O(epsilon)} approximation while also guaranteeing that the best candidate is selected with constant probability. We also extend our result to the k-secretary problem and complement our theoretical analysis with experiments.
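To make the setting concrete, below is a minimal, hypothetical sketch of the usual learning-augmented secretary template: set an acceptance threshold from the predicted maximum value and fall back to the classical 1/e observe-then-commit rule when the threshold never triggers. This is only an illustration of the framework, not the algorithm from the talk; the function name `secretary_with_prediction` and the margin parameter `lam` are made up for the example.

```python
import math

def secretary_with_prediction(values, predicted_max, lam=0.5):
    """Illustrative learning-augmented secretary rule (not the talk's algorithm).

    values: candidate values revealed one by one, in online arrival order.
    predicted_max: machine-learned prediction of the maximum candidate value.
    lam: confidence margin; accept any candidate within (1 - lam) of the prediction.
    Falls back to the classical 1/e rule if no candidate clears the threshold.
    Returns the index of the selected candidate.
    """
    n = len(values)
    threshold = (1 - lam) * predicted_max
    sample_size = int(n / math.e)
    best_seen = float("-inf")

    for i, v in enumerate(values):
        # Prediction-based rule: accept the first candidate that nearly
        # matches the predicted maximum.
        if v >= threshold:
            return i
        # Classical fallback: after the observation phase, accept the first
        # candidate that beats everything seen so far.
        if i >= sample_size and v > best_seen:
            return i
        best_seen = max(best_seen, v)
    return n - 1  # forced to take the last candidate
```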

2. 14:30 — Incremental Topological Ordering and Cycle Detection with Predictions

This paper leverages the framework of algorithms-with-predictions to design data structures for two fundamental dynamic graph problems: incremental topological ordering and cycle detection. In these problems, the input is a directed graph on n nodes, and the m edges arrive one by one. The data structure must maintain a topological ordering of the vertices at all times and detect if the newly inserted edge creates a cycle. The theoretically best worst-case algorithms for these problems have high update cost (polynomial in n and m). In practice, greedy heuristics (that recompute the solution from scratch each time) perform well but can have high update cost in the worst case.

In this paper, we bridge this gap by leveraging predictions to design a new learned data structure for these problems. Our data structure guarantees consistency, robustness, and smoothness with respect to the predictions -- that is, it achieves the best possible running time under perfect predictions, never performs worse than the best-known worst-case methods, and its running time degrades smoothly with the prediction error. Moreover, we demonstrate empirically that predictions learned from a very small training dataset are sufficient to provide significant speed-ups on real datasets.
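For reference, here is a minimal sketch of the kind of greedy baseline the abstract mentions: keep the current ordering as long as each inserted edge respects it, and recompute from scratch (reporting a cycle if recomputation fails) otherwise. The class name `IncrementalTopo` is illustrative; this is not the learned data structure from the talk.

```python
from collections import defaultdict, deque

class IncrementalTopo:
    """Illustrative recompute-from-scratch baseline (not the talk's data structure)."""

    def __init__(self, n):
        self.n = n
        self.adj = defaultdict(list)
        self.pos = {v: v for v in range(n)}  # current position of each node

    def insert_edge(self, u, v):
        """Insert edge u -> v; return False if it would create a cycle."""
        self.adj[u].append(v)
        if self.pos[u] < self.pos[v]:
            return True  # current ordering is still valid, nothing to do
        order = self._recompute()
        if order is None:
            self.adj[u].pop()  # roll back the offending edge
            return False
        self.pos = {node: i for i, node in enumerate(order)}
        return True

    def _recompute(self):
        # Kahn's algorithm; returns None if the graph contains a cycle.
        indeg = {v: 0 for v in range(self.n)}
        for u in self.adj:
            for w in self.adj[u]:
                indeg[w] += 1
        queue = deque(v for v in range(self.n) if indeg[v] == 0)
        order = []
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in self.adj[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return order if len(order) == self.n else None
```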

3. 15:00 — Algorithms with Prediction Portfolios

The research area of algorithms with predictions has recently shown how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick the best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm, and the question is how to best utilize them.

Ideally, we would like the algorithm's performance to depend on the quality of the best predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is the best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
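As a point of reference, here is a minimal, hypothetical sketch of the naive portfolio approach for an offline problem: run the single-predictor algorithm once per prediction, keep a worst-case baseline for robustness, and return the cheapest solution. Its work grows linearly with the number of predictors, which is exactly the overhead the talk's algorithms aim to avoid; the function names here are placeholders, not an API from the paper.

```python
def best_of_portfolio(instance, predictions, algo_with_prediction, baseline, cost):
    """Naive portfolio baseline (illustrative only, not the talk's algorithms).

    algo_with_prediction(instance, p): single-predictor learning-augmented algorithm.
    baseline(instance): worst-case algorithm, kept for robustness.
    cost(instance, solution): objective value to minimize.
    Runs the algorithm once per prediction, so cost of computation scales
    with the number of predictors.
    """
    candidates = [baseline(instance)]
    candidates += [algo_with_prediction(instance, p) for p in predictions]
    return min(candidates, key=lambda s: cost(instance, s))
```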

4. 15:30 — Energy-Efficient Scheduling with Predictions

An important goal of modern scheduling systems is to efficiently manage power usage. In energy-efficient scheduling, the operating system controls the speed at which a machine processes jobs, with the dual objective of minimizing energy consumption and optimizing the quality-of-service cost of the resulting schedule. Since predictions about future requests can often be learned from historical data, a recent line of work on learning-augmented algorithms aims to achieve improved performance guarantees by leveraging predictions. In particular, for energy-efficient scheduling, Bamas et al. and Antoniadis et al. designed algorithms with predictions for the energy minimization with deadlines problem and achieved an improved competitive ratio when the prediction error is small, while also maintaining worst-case bounds even when the prediction error is arbitrarily large.
In this talk, we consider a general setting for energy-efficient scheduling and provide a flexible learning-augmented algorithmic framework that takes as input an offline and an online algorithm for the desired energy-efficient scheduling problem. We show that, when the prediction error is small, this framework gives improved competitive ratios for many different energy-efficient scheduling problems, including energy minimization with deadlines, while also maintaining a bounded competitive ratio regardless of the prediction error. Finally, we empirically demonstrate that this framework achieves improved performance on real and synthetic datasets.
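To illustrate the general idea of combining a prediction-based schedule with an online one in the speed-scaling model, here is a minimal sketch of a standard combination trick, assuming the power function is increasing in the speed (e.g. s^alpha): run at the pointwise maximum of the two speeds, so the combined energy is at most the sum of the two schedules' energies. This is only an illustration of the combination idea, not the specific framework from the talk; the function name and the discretized time slots are assumptions made for the example.

```python
def combine_speed_schedules(offline_speeds, online_speeds, power=lambda s: s ** 3):
    """Illustrative combination of a predicted offline schedule with an online one
    (not the talk's framework): run at the pointwise maximum speed per time slot.

    offline_speeds, online_speeds: machine speed in each discretized time slot.
    power: increasing power function of speed, e.g. s^alpha with alpha = 3.
    Since power(max(a, b)) <= power(a) + power(b) for an increasing, nonnegative
    power function, the combined energy is at most the sum of the two schedules'
    energies, so both schedules' guarantees can be inherited up to constant factors.
    """
    combined = [max(a, b) for a, b in zip(offline_speeds, online_speeds)]
    energy = sum(power(s) for s in combined)
    return combined, energy

# Example: the predicted schedule finishes the work early, the online one hedges.
predicted = [2.0, 2.0, 0.0, 0.0]
online = [1.0, 1.0, 1.0, 1.0]
schedule, energy = combine_speed_schedules(predicted, online)
```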