1. 14:00 — Fast Online Node Labeling for Very Large Graphs

We consider the online node labeling problem over very large graphs, where prediction requires inverting a large matrix whose sparsity is dictated by the graph structure. In particular, we consider the regime where the number of nodes is so large that even common sparsifiers and repeated matrix multiplications are computationally intractable. Here, we examine local methods, such as approximate PageRank, whose computational complexity is altogether independent of the graph size. Using such methods, we rederive regret bounds for online learning under this more realistic computational model, and propose a scheme for accelerating these local methods.
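
To give a flavor of why local methods are independent of graph size, here is a minimal sketch of the local "push" computation behind approximate personalized PageRank (in the style of Andersen, Chung, and Lang); the function and parameter names are illustrative and not taken from the talk.

```python
def approximate_ppr(adj, seed, alpha=0.15, eps=1e-6):
    """Sparse approximation of the personalized PageRank vector of `seed`.

    adj: dict mapping node -> list of neighbors. Only nodes near the seed are
    ever touched, so the work depends on alpha and eps, not on the total
    number of nodes in the graph.
    """
    p = {}               # approximate PageRank mass settled so far (sparse)
    r = {seed: 1.0}      # residual mass still waiting to be pushed
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if deg == 0 or r.get(u, 0.0) < eps * deg:
            continue     # residual too small (or isolated node): nothing to push
        ru = r[u]
        r[u] = 0.0
        p[u] = p.get(u, 0.0) + alpha * ru          # keep an alpha fraction here
        share = (1.0 - alpha) * ru / deg           # spread the rest to neighbors
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
    return p
```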

2. 14:30 — Semi-Bandit Learning for Monotone Stochastic Optimization

Stochastic optimization is a widely used approach for optimization under uncertainty, where uncertain input parameters are modeled by random variables. Exact or approximation algorithms have been obtained for several fundamental problems in this area. However, a significant limitation of this approach is that it requires full knowledge of the underlying probability distributions. Can we still get good (approximation) algorithms if these distributions are unknown, and the algorithm needs to learn them through repeated interactions? We resolve this question for a large class of “monotone” stochastic problems, by providing a generic online learning algorithm with $\sqrt{T \log T}$ regret relative to the best approximation algorithm (under known distributions). Importantly, our online algorithm works in a semi-bandit setting, where in each period, the algorithm only observes samples from the random variables that were actually probed. Our framework applies to several fundamental problems in stochastic optimization such as prophet inequality, Pandora's box, stochastic knapsack, stochastic matchings and stochastic submodular optimization.
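
To make the feedback model concrete, the sketch below shows a generic semi-bandit interaction loop; `approx_alg`, `execute`, and `true_sample` are hypothetical placeholders for the known-distribution approximation algorithm, the environment, and the hidden distributions, and the actual algorithm's exploration details are not shown.

```python
from collections import defaultdict

def semi_bandit_loop(T, variables, approx_alg, execute, true_sample):
    """Generic semi-bandit interaction loop (illustrative sketch only).

    approx_alg(emp): a known-distribution approximation algorithm run on the
        empirical samples gathered so far; returns a probing policy.
    execute(policy, outcomes): runs the policy against this period's hidden
        realizations; returns (reward, set_of_probed_variables).
    true_sample(i): draws a fresh realization of variable i, hidden from the
        learner unless i is actually probed this period.
    """
    emp = defaultdict(list)     # observed samples, kept per variable
    total_reward = 0.0
    for _ in range(T):
        outcomes = {i: true_sample(i) for i in variables}   # hidden this period
        policy = approx_alg(emp)          # plan using only observed samples
        reward, probed = execute(policy, outcomes)
        total_reward += reward
        for i in probed:                  # semi-bandit feedback: probed vars only
            emp[i].append(outcomes[i])
    return total_reward
```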

3. 15:00 — Power of Adaptivity for Stochastic Submodular Cover

We study the stochastic submodular cover problem, where one needs to select a subset of stochastic items to cover a submodular function at minimum expected cost. Solutions in this setting correspond to sequential decision processes that select items one-by-one adaptively, depending on prior observations. While such adaptive solutions achieve the best objective, they are inherently sequential, which is undesirable in many applications. We show how to obtain solutions that approximate fully-adaptive solutions using only a few “rounds” of adaptivity. We study both independent and correlated settings, proving smooth tradeoffs between the number of adaptive rounds and the solution quality. We also present experimental results demonstrating that a few rounds of adaptivity suffice to obtain high-quality solutions in practice.
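
As a rough illustration of what a few rounds of adaptivity means (a sketch under assumed helper functions, not the algorithms analyzed in the talk): within a round, items are chosen non-adaptively by a greedy rule, and realizations are observed only between rounds.

```python
def batched_cover(items, covered, target, num_rounds, expected_gain,
                  observe_batch, budget_per_round):
    """Illustrative r-round policy for stochastic submodular cover.

    covered(obs): submodular coverage achieved by the realized outcomes obs.
    expected_gain(i, obs): estimated marginal coverage of item i given obs.
    observe_batch(batch): realizes all chosen items at once and returns their
        outcomes; this is the only point where the policy adapts.
    All helper names are hypothetical placeholders.
    """
    obs = {}
    remaining = set(items)
    for _ in range(num_rounds):
        if covered(obs) >= target or not remaining:
            break
        batch = []
        for _ in range(min(budget_per_round, len(remaining))):
            # within a round: pick items greedily, with no new observations
            best = max(remaining, key=lambda i: expected_gain(i, obs))
            batch.append(best)
            remaining.remove(best)
        obs.update(observe_batch(batch))   # one round of adaptivity
    return obs
```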

4. 15:30 — Recent Advances in Stochastic Function Evaluation: Score Classification and Voting

In stochastic function evaluation, one is given a (succinctly represented) Boolean function and $n$ (typically independent) random variables with known probability distributions. Each variable can be queried at a known cost, which reveals its value. The goal is to efficiently compute a querying strategy that determines the value of the function on these variables at (approximately) minimum expected cost.
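
As a warm-up for the model (a classical special case, not part of this talk), consider evaluating an AND of independent Boolean variables; the sketch below records the standard ordering rule and its expected cost.

```latex
% Classical warm-up: evaluating f(x_1,...,x_n) = x_1 AND ... AND x_n,
% where x_i is queried at cost c_i and p_i = Pr[x_i = 1], independently.
% One stops at the first 0; an exchange argument shows that querying in
% non-decreasing order of c_i / (1 - p_i) minimizes the expected cost
\[
  \mathbb{E}[\text{cost}] \;=\; \sum_{i=1}^{n} c_{\pi(i)} \prod_{j<i} p_{\pi(j)},
\]
% where \pi is the chosen query order.
```
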
In the first part of this talk, we consider score-classification functions (Gkenosis, Grammel, Hellerstein, and Kletenik, 2018). Here, the variables are Boolean, and the function value depends only on which of the given subintervals of $[0, n]$ contains the total number of successes. We give simple non-adaptive constant-factor approximation algorithms. This part is joint work with Benedikt Plank.
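
To illustrate what a non-adaptive strategy looks like in this setting (a sketch only; the constant-factor guarantees depend on how the fixed order is chosen, which is not shown here), one queries the variables in a fixed order and stops as soon as the class of the final score is determined.

```python
import bisect

def classify_with_fixed_order(order, query, n, breakpoints):
    """Non-adaptive stopping rule for score classification (illustrative sketch).

    order: a fixed permutation of the n Boolean variables, chosen up front.
    query(i): reveals variable i (0 or 1); costs are incurred by the caller.
    breakpoints: sorted right endpoints of the subintervals of [0, n];
        the class of a score s is bisect_left(breakpoints, s).
    """
    ones, unqueried = 0, n
    for i in order:
        lo, hi = ones, ones + unqueried            # possible range of final score
        if bisect.bisect_left(breakpoints, lo) == bisect.bisect_left(breakpoints, hi):
            break                                  # class already determined
        ones += query(i)
        unqueried -= 1
    return bisect.bisect_left(breakpoints, ones)
```
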
In the second part of this talk, we consider a setting where the random variables correspond to votes, their values correspond to candidates, and the value of the function is the outcome of an election. We give adaptive constant-factor approximations for the absolute-majority and relative-majority versions of this problem. This part is joint work with Lisa Hellerstein and Naifeng Liu.
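
For the absolute-majority version, a natural adaptive stopping rule looks as follows (a sketch only; the approximation guarantee discussed in the talk also depends on how the query order is chosen, which is omitted here).

```python
from collections import Counter

def evaluate_absolute_majority(order, query, n):
    """Adaptive stopping rule for absolute-majority evaluation (illustrative sketch).

    order: the sequence in which the n votes will be queried.
    query(i): reveals the candidate chosen by vote i, at that vote's cost.
    Returns the winner, or None if no candidate receives more than n/2 votes.
    """
    counts = Counter()
    unqueried = n
    for i in order:
        counts[query(i)] += 1
        unqueried -= 1
        leader, lead_count = counts.most_common(1)[0]
        if lead_count > n / 2:
            return leader                  # winner is already determined
        # neither the current leader nor any unseen candidate can still pass n/2
        if lead_count + unqueried <= n / 2 and unqueried <= n / 2:
            return None
    return None                            # no votes queried (n = 0)
```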