1 — 08:30 — Complexity Guarantees for Optimal Nash Equilibrium Seeking and Bilevel Variational Inequalities
In noncooperative game theory, understanding the quality of Nash equilibria has been a storied area of research and has led to the emergence of popular metrics including the Price of Anarchy and the Price of Stability. Evaluating these ratios is complicated by the need to compute the worst and the best equilibrium, respectively. In this talk, we present a class of iteratively regularized extragradient methods with performance guarantees for computing the optimal equilibrium. To this end, we consider optimization problems with merely monotone variational inequality (VI) constraints whose objective function is either (i) merely convex, (ii) strongly convex, or (iii) nonconvex. In (i), we considerably improve the existing complexity guarantees. In (ii) and (iii), we derive new complexity guarantees for solving this class of problems. Notably, this appears to be the first time that nonconvex optimization with monotone VI constraints is addressed with complexity guarantees. We further show that our results in (i) and (ii) can be generalized to address a class of bilevel VIs that subsumes challenging problem classes such as the generalized Nash equilibrium problem. We will then discuss extensions to stochastic and large-scale settings. We validate the theoretical findings using preliminary numerical experiments for computing the best and the worst Nash equilibria.
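To make the iterative-regularization idea concrete, here is a minimal sketch (not the speakers' actual method; all problem data are illustrative) of an extragradient iteration applied to the regularized operator $F + \eta_k \nabla f$ with a diminishing weight $\eta_k$, which steers the iterates toward the VI solution that is best with respect to $f$:

```python
import numpy as np

def F(x):
    # merely monotone operator (gradient of 0.5*(x1 - x2)^2);
    # its solution set on the box is the segment {x1 = x2}
    return np.array([x[0] - x[1], x[1] - x[0]])

def grad_f(x):
    # upper-level objective f(x) = 0.5*||x - (1, 0)||^2; the f-best point
    # on the VI solution set is (0.5, 0.5)
    return x - np.array([1.0, 0.0])

def proj(x):
    # projection onto the constraint set X = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

x = np.zeros(2)
gamma = 0.1                                # extragradient stepsize
for k in range(20000):
    eta = (k + 1) ** -0.5                  # diminishing regularization weight
    y = proj(x - gamma * (F(x) + eta * grad_f(x)))   # extrapolation step
    x = proj(x - gamma * (F(y) + eta * grad_f(y)))   # update step
# x approaches (0.5, 0.5), the best VI solution w.r.t. f
```

Plain extragradient with $\eta_k \equiv 0$ would stop at an arbitrary point of the segment $\{x_1 = x_2\}$; the vanishing regularization is what selects the $f$-optimal one.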
2 — 09:00 — Global Resolution of Chance-Constrained Optimization Problems: Minkowski Functionals and Monotone Inclusions
Chance-constrained optimization problems, an important subclass of stochastic optimization problems, are often complicated by nonsmoothness and nonconvexity. Thus far, non-asymptotic rates and complexity guarantees for computing an approximate global minimizer have remained open questions. We consider a subclass of problems in which the probability is defined as $P\{\zeta \mid \zeta \in K(x)\}$, where $K(x) = \{\zeta \in K \mid c(x, \zeta) \leq 1\}$, $c(x, \cdot)$ is a positively homogeneous function for any $x \in X$, and $K$ is a nonempty convex set, symmetric about the origin. We make two contributions in this context. (i) First, when $\zeta$ admits a log-concave density on $K$, the probability function is equivalent to the expectation of a nonsmooth Clarke-regular integrand, allowing the chance-constrained problem to be restated as a convex program. Under a suitable regularity condition, the necessary and sufficient optimality conditions of this problem are given by a monotone inclusion with a compositional expectation-valued operator. (ii) Second, under differing requirements on $\zeta$, we present a variance-reduced proximal scheme and provide among the first rate and complexity guarantees for resolving chance-constrained optimization problems. This is joint work with Peixuan Zhang (Penn State), Constantino Lagoa (Penn State), and Ibrahim Bardakci (Bartin University, Turkey).
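The log-concavity mechanism behind contribution (i) can be illustrated in one dimension. In the hypothetical instance below, $c(x, \zeta) = |\zeta|/x$ is positively homogeneous in $\zeta$ and $\zeta$ is standard normal, a log-concave density symmetric about the origin; the probability $P\{c(x, \zeta) \leq 1\}$ is then log-concave in $x$, which is what makes a convex reformulation of the chance constraint possible:

```python
import math

def prob(x):
    # P{ c(x, zeta) <= 1 } with c(x, zeta) = |zeta| / x (positively
    # homogeneous of degree 1 in zeta) and zeta ~ N(0, 1):
    # P{|zeta| <= x} = erf(x / sqrt(2))
    return math.erf(x / math.sqrt(2.0))

# Numerical check of log-concavity: second differences of log P(x) are
# nonpositive, so a chance constraint P(x) >= p defines a convex set in x.
h = 1e-3
for x in [0.5, 1.0, 1.5, 2.0, 3.0]:
    d2 = (math.log(prob(x + h)) - 2 * math.log(prob(x))
          + math.log(prob(x - h))) / h**2
    assert d2 <= 1e-6   # log P is concave along the scaling parameter
```

This is the scalar case of the general fact (Prékopa-type) that $x \mapsto P\{\zeta \in x K\}$ is log-concave when $K$ is convex and symmetric and $\zeta$ has a log-concave density.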
3 — 09:30 — Breaking the cycle: Deterministic block-iterative analysis for the Frank-Wolfe algorithm
To date, block-coordinate Frank-Wolfe (BCFW) algorithms have been proven to converge only under restrictive assumptions on how the block coordinates are updated, namely, (A) single-coordinate updates (via stochastic or greedy selection), and (B) cyclic/full update schemes which require the evaluation of all LMOs (linear minimization oracles) in the Cartesian product at every outer iteration. These requirements are particularly disadvantageous when one LMO is far more computationally expensive than the others (e.g., requiring an eigencomputation or a large LP solve). Drawing inspiration from the literature on prox-based algorithms, we show that convergence of BCFW is guaranteed under a much milder assumption on the updates. This allows one to activate the expensive LMO(s) less frequently, which also leads to concrete gains in algorithmic performance.
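A minimal sketch of the kind of update schedule the result permits (toy problem and schedule chosen for illustration, not taken from the talk): a separable quadratic over a product of two probability simplices, where the second block's LMO, treated as the "expensive" one, is activated only every tenth iteration, with each block keeping its own stepsize counter:

```python
import numpy as np

def lmo_simplex(grad):
    # linear minimization oracle over the probability simplex:
    # the minimizing vertex is e_i with i = argmin_i grad_i
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

# f(x, y) = 0.5||x - a||^2 + 0.5||y - b||^2 over two simplices;
# a and b lie in the simplex, so the minimizer is (a, b)
a = np.array([0.6, 0.3, 0.1])
b = np.array([0.2, 0.2, 0.6])

x = np.array([1.0, 0.0, 0.0])
y = np.array([1.0, 0.0, 0.0])
tx = ty = 0                      # per-block Frank-Wolfe step counters
m = 10                           # expensive LMO activated every m-th iteration
for k in range(5000):
    v = lmo_simplex(x - a)       # cheap LMO: called every iteration
    x = x + 2.0 / (tx + 2) * (v - x)
    tx += 1
    if k % m == 0:               # "expensive" LMO: called sparingly
        w = lmo_simplex(y - b)
        y = y + 2.0 / (ty + 2) * (w - y)
        ty += 1
```

Both blocks still converge at the usual O(1/t) Frank-Wolfe rate in their own step counters, even though the second block sees only a tenth of the LMO calls.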
4 — 10:00 — Damped Proximal Augmented Lagrangian Method for Weakly Convex Problems with Convex Constraints
In this talk, I will present a damped proximal augmented Lagrangian method (DPALM) for solving problems with a weakly convex objective and convex linear/nonlinear constraints. Instead of taking a full stepsize, DPALM adopts a damped dual stepsize to ensure the boundedness of the dual iterates. DPALM can produce a (near) $\epsilon$-KKT point within $\epsilon^{-2}$ outer iterations if each DPALM subproblem is solved to a proper accuracy. In addition, I will show the overall iteration complexity of DPALM when the objective is either a regularized smooth function or in a regularized compositional form. In the former case, DPALM achieves a complexity of $\epsilon^{-2.5}$ to produce an $\epsilon$-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem. In the latter case, the complexity of DPALM is $\epsilon^{-3}$ to produce a near $\epsilon$-KKT point by using APG to solve a Moreau-envelope-smoothed version of each subproblem. Our outer iteration complexity and overall complexity either generalize the existing best results from unconstrained or linearly constrained problems to convex-constrained ones, or improve over the best-known results for problems with the same structure. Furthermore, numerical experiments on linearly/quadratically constrained nonconvex quadratic programs and linearly constrained robust nonlinear least squares demonstrate the empirical efficiency of the proposed DPALM over several state-of-the-art methods.
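A hedged sketch of the damped dual update on a toy instance (a two-variable weakly convex quadratic with one linear equality constraint; the strongly convex subproblem is solved in closed form here, whereas the talk uses APG for general subproblems):

```python
import numpy as np

# Toy weakly convex problem: min 0.5 x^T Q x  s.t.  a^T x = b,
# where Q has a negative eigenvalue (0.2-weakly convex objective).
Q = np.diag([-0.2, 1.0])
a_vec = np.array([1.0, 1.0])
b_rhs = 1.0

rho, beta, theta = 1.0, 2.0, 0.3   # prox weight, penalty, dual damping in (0,1)
# Proximal AL subproblem Hessian; positive definite for this beta and rho
M = Q + beta * np.outer(a_vec, a_vec) + rho * np.eye(2)

x = np.zeros(2)
lam = 0.0
for _ in range(1000):
    # x-update: minimize the proximal augmented Lagrangian
    #   0.5 x^T Q x + lam*(a^T x - b) + (beta/2)(a^T x - b)^2
    #   + (rho/2)||x - x_k||^2
    # — a strongly convex quadratic, solved exactly by one linear solve
    rhs = rho * x - lam * a_vec + beta * b_rhs * a_vec
    x = np.linalg.solve(M, rhs)
    # damped dual step: stepsize theta*beta < beta keeps dual iterates bounded
    lam = lam + theta * beta * (a_vec @ x - b_rhs)
# converges to the KKT point x = (1.25, -0.25), lam = 0.25 of the toy problem
```

On this instance a full (undamped) dual stepsize would still work, but the damping is what the talk's analysis relies on to bound the dual iterates for general weakly convex objectives.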