1 — 08:30 — Complexity Guarantees for Optimal Nash Equilibrium Seeking and Bilevel Variational Inequalities
In noncooperative game theory, understanding the quality of Nash equilibria has been a storied area of research and has led to the emergence of popular metrics, including the Price of Anarchy and the Price of Stability. The evaluation of these ratios is complicated by the need to compute the worst and the best equilibrium, respectively. In this talk, we present a class of iteratively regularized extragradient methods with performance guarantees for computing the optimal equilibrium. To this end, we consider optimization problems with merely monotone variational inequality (VI) constraints when the objective function is either (i) merely convex, (ii) strongly convex, or (iii) nonconvex. In (i), we considerably improve the existing complexity guarantees. In (ii) and (iii), we derive new complexity guarantees for solving this class of problems. Notably, this appears to be the first time that nonconvex optimization with monotone VI constraints is addressed with complexity guarantees. We further show that our results in (i) and (ii) can be generalized to address a class of bilevel VIs that subsumes challenging problem classes such as the generalized Nash equilibrium problem. We will then discuss extensions to stochastic and large-scale settings. We validate the theoretical findings using preliminary numerical experiments for computing the best and the worst Nash equilibria.
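The core idea of iterative regularization can be sketched as follows: run extragradient on the VI map perturbed by a vanishing multiple of the objective's gradient, so the iterates are steered toward the VI solution that optimizes the objective. Everything below (the specific operator, the box projection, the step and regularization schedules) is an illustrative assumption for a toy instance, not the speakers' exact scheme.

```python
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

def ir_extragradient(F, grad_f, x0, steps=2000, gamma=0.1):
    # Extragradient applied to F + eta_k * grad_f with diminishing eta_k,
    # so the regularization fades and the iterate approaches the VI
    # solution preferred by the objective f.
    x = x0.copy()
    for k in range(1, steps + 1):
        eta = 1.0 / np.sqrt(k)            # diminishing regularization weight
        G = lambda z: F(z) + eta * grad_f(z)
        y = project(x - gamma * G(x))      # extrapolation step
        x = project(x - gamma * G(y))      # update step
    return x

# Toy data: a skew-symmetric (hence merely monotone) operator whose VI
# over the box has the unique solution 0, and a strongly convex objective.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
target = np.array([0.5, -0.25])
grad_f = lambda z: z - target              # f(z) = 0.5 * ||z - target||^2

x_star = ir_extragradient(F, grad_f, np.zeros(2))
```

As the regularization weight decays, the regularized solutions trace a path into the VI solution set, which here is the single point at the origin.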
2 — 09:00 — Global Resolution of Chance-Constrained Optimization Problems: Minkowski Functionals and Monotone Inclusions
Chance-constrained optimization problems, an important subclass of stochastic optimization problems, are often complicated by nonsmoothness and nonconvexity. Thus far, non-asymptotic rates and complexity guarantees for computing an approximate global minimizer remain open questions. We consider a subclass of problems in which the probability is defined as
3 — 09:30 — Breaking the cycle: Deterministic block-iterative analysis for the Frank-Wolfe algorithm
To date, block-coordinate Frank-Wolfe (BCFW) algorithms are proven to converge under restrictive assumptions on how the block-coordinates are updated, namely, (A) single-coordinate updates (via stochastic and greedy selection), and (B) cyclic/full update schemes which require the evaluation of all LMOs (linear minimization oracles) in the Cartesian product for every outer iteration. These requirements are particularly disadvantageous when one LMO is far more computationally expensive than the others (e.g., requiring an eigencomputation or a large LP solve). Drawing inspiration from the literature on prox-based algorithms, we show that convergence for BCFW is guaranteed under a much milder assumption on the updates. This allows one to activate the expensive LMO(s) less frequently, which also leads to concrete gains in algorithmic performance.
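The selective-activation idea can be illustrated with a minimal sketch: a two-block Frank-Wolfe loop over a product of simplices in which one block's LMO is called every iteration while the other's runs only every `period` iterations. The separable quadratic objective, the simplex LMO, the `2/(k+2)` step size, and the fixed activation schedule are all illustrative assumptions, not the talk's analysis.

```python
import numpy as np

def lmo_simplex(g):
    # Linear minimization over the probability simplex: the minimizing
    # vertex puts all mass on the coordinate with the smallest gradient.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

def bcfw(Q, b, x, iters=500, period=10):
    # Block 0 is "cheap" and updated every iteration; block 1 stands in
    # for an expensive LMO and is activated only every `period` iterations.
    n1 = x[0].size
    for k in range(iters):
        gamma = 2.0 / (k + 2.0)
        z = np.concatenate(x)
        g = Q @ z + b                        # gradient of 0.5 z'Qz + b'z
        blocks = [0] if k % period else [0, 1]
        for i in blocks:
            sl = slice(0, n1) if i == 0 else slice(n1, z.size)
            s = lmo_simplex(g[sl])
            x[i] = x[i] + gamma * (s - x[i])  # FW step on block i only
    return x

# Toy instance: separable quadratic with block minimizers (1,0) and (0,1).
Q = np.eye(4)
b = np.array([-1.0, 0.0, 0.0, -1.0])
x = bcfw(Q, b, [np.full(2, 0.5), np.full(2, 0.5)])
```

Even though the second block's LMO fires only one iteration in ten, both blocks approach their optimal vertices; the infrequently updated block simply converges more slowly.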
4 — 10:00 — Damped Proximal Augmented Lagrangian Method for Weakly Convex Problems with Convex Constraints
In this talk, I will present a damped proximal augmented Lagrangian method (DPALM) for solving problems with a weakly convex objective and convex linear/nonlinear constraints. Instead of taking a full stepsize, DPALM adopts a damped dual stepsize to ensure the boundedness of dual iterates. DPALM can produce a (near) eps-KKT point within eps
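The damped dual update can be sketched on a small convex instance: a proximal augmented Lagrangian loop for min f(x) subject to Ax = b, where the multiplier step is scaled by a factor theta in (0, 1) rather than the full stepsize beta. The quadratic choice of f, the exact x-subproblem solve, and all parameter values are illustrative assumptions, not the DPALM specification.

```python
import numpy as np

def dpalm(A, b, rho=1.0, beta=2.0, theta=0.5, iters=200):
    # Proximal ALM for min 0.5||x||^2 s.t. Ax = b, with a damped dual step.
    n = A.shape[1]
    x = np.zeros(n)
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        # x-step: minimize f(x) + lam'(Ax - b) + (beta/2)||Ax - b||^2
        #         + (rho/2)||x - x_prev||^2 with f(x) = 0.5||x||^2,
        # which for this quadratic f is a linear solve.
        H = (1.0 + rho) * np.eye(n) + beta * A.T @ A
        rhs = rho * x - A.T @ lam + beta * A.T @ b
        x = np.linalg.solve(H, rhs)
        # Damped dual step: theta < 1 keeps the dual iterates bounded.
        lam = lam + theta * beta * (A @ x - b)
    return x, lam

# Toy instance: x1 + x2 = 1; the minimum-norm solution is (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = dpalm(A, b)
```

On this instance the iteration is affine with spectral radius below one, so both the primal iterate and the (bounded) multiplier converge to their KKT values.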