08:30 — First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

We focus on constrained, L-smooth, nonconvex-nonconcave min-max
problems that either satisfy rho-cohypomonotonicity or admit a
solution to the rho-weakly Minty Variational Inequality (MVI), where
larger values of the parameter rho > 0 correspond to a greater degree
of nonconvexity. Relevant problem classes include two-player
reinforcement learning and interaction-dominant min-max problems. It
has been conjectured that first-order methods can tolerate values of
rho no larger than 1/L, but results until now have stagnated at the
tighter requirement rho < 0.5/L. Using inexact variants of the Halpern
and Krasnoselskii-Mann (KM) iterations, we obtain optimal or
best-known complexity guarantees under cohypomonotonicity or weak MVI
conditions for rho < 1/L. We also provide algorithms and complexity
guarantees in the stochastic case with the same range of rho. Our
improvements come from harnessing the recently proposed "conic
nonexpansiveness" property of operators. Finally, we provide a refined
analysis of inexact Halpern iteration and propose a stochastic KM
iteration with a multilevel Monte Carlo estimator.

09:00 — ** CANCELLED ** Efficient Gradient Tracking Algorithmic Frameworks for Decentralized Optimization

Gradient tracking optimization algorithms have received significant attention in recent years for distributed optimization over networks because they converge to the solution under a constant step size. At every iteration, these algorithms require a computation step, in which each node calculates its local gradient, and a communication step, in which nodes share their iterates and gradient estimates. The cost of these two steps varies significantly across applications of distributed optimization. In this talk, we present an algorithmic framework that decouples these two steps, provides flexibility in how they are balanced at each iteration, and is robust to the stochasticity inherent in each. We provide optimal theoretical convergence and complexity guarantees, and illustrate the framework's performance on quadratic and classification problems.
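The standard gradient tracking template underlying this line of work can be sketched as follows. The quadratic example, the mixing matrix `W`, and the step size `alpha` are illustrative assumptions; the decomposed, robust framework of the talk itself is not reproduced here.

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.05, iters=2000):
    """Decentralized gradient tracking with a scalar decision variable
    per node: x mixes via W and steps along y, which tracks the
    network-average gradient."""
    n = len(grads)
    x = np.array(x0, dtype=float)
    g = np.array([grads[i](x[i]) for i in range(n)])
    y = g.copy()                       # y_i^0 = grad f_i(x_i^0)
    for _ in range(iters):
        x_new = W @ x - alpha * y      # communication (mixing) + tracked step
        g_new = np.array([grads[i](x_new[i]) for i in range(n)])
        y = W @ y + g_new - g          # gradient-tracking update
        x, g = x_new, g_new
    return x

# Quadratic example: f_i(x) = 0.5 * a_i * x^2 - b_i * x, so the global
# minimizer of sum_i f_i is x* = sum(b) / sum(a) = 1 here.
coeffs = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
grads = [lambda x, a=a, b=b: a * x - b for a, b in coeffs]
# Doubly stochastic mixing matrix for a fully connected 3-node network.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
```

Each iteration performs exactly one computation step (local gradient evaluations) and one communication step (the two multiplications by `W`), which is the coupling the talk's framework relaxes.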

09:30 — Randomized and Stochastic Methods for Fixed-Point and Root-Finding Problems

  • Quoc Tran Dinh, Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill

In this talk, we present some recent randomized and stochastic algorithms for approximating a solution of fixed-point problems, or equivalently, of root-finding problems. Our algorithms rely on new variants of well-known stochastic variance-reduced estimators, but applied to a different operator. We prove the convergence of these algorithms for two classes of problems: the co-hypomonotone and co-coercive settings. We show that our algorithms improve on the oracle complexity of deterministic methods. Our algorithms are also validated through numerical experiments and comparisons with existing methods.
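To give a flavor of variance-reduced estimators applied to a root-finding operator, here is a minimal SVRG-style sketch for a finite-sum operator equation. The toy co-coercive components, the step size, and the epoch lengths are illustrative assumptions, not the talk's algorithms or estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

def svrg_root_finding(Gs, x0, eta=0.1, epochs=30, inner=50):
    """SVRG-style forward iteration x <- x - eta * v for a root of
    G(x) = (1/n) * sum_i G_i(x): each epoch combines one full operator
    evaluation at a snapshot with cheap single-component corrections."""
    n = len(Gs)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        x_ref = x.copy()
        g_ref = sum(G(x_ref) for G in Gs) / n      # full operator at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            v = Gs[i](x) - Gs[i](x_ref) + g_ref    # variance-reduced estimate
            x = x - eta * v
    return x

# Toy co-coercive components G_i(x) = a_i * (x - 1); the common root is x* = 1.
Gs = [lambda x, a=a: a * (x - 1.0) for a in (0.5, 1.0, 1.5)]
```

The estimate `v` is unbiased for the full operator at `x` but has variance that vanishes as `x` approaches the snapshot, which is what yields the improved oracle complexity relative to evaluating all components at every step.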

10:00 — Stochastic Proximal-Gradient Methods with Nonlinear Constraints

We present a stochastic proximal-gradient method for solving regularized optimization problems with nonlinear constraints. The method can also be used when exact (deterministic) function and gradient evaluations are available. We present a complexity analysis and computational results for the proposed method that highlight the strengths of our approach.
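The basic proximal-gradient template can be sketched with an l1 regularizer, whose proximal operator is soft-thresholding. This is a toy sketch of the template only: the talk's treatment of nonlinear constraints and of stochastic gradient estimates is not modeled, and the step size and regularizer are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(grad, x0, lam, eta=0.1, iters=500):
    """Proximal-gradient template x <- prox_{eta*lam*||.||_1}(x - eta*g),
    where g may be an exact or a stochastic gradient of the smooth part."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = soft_threshold(x - eta * grad(x), eta * lam)
    return x
```

For example, minimizing 0.5*(x - 3)^2 + |x| with this template converges to the closed-form solution x* = 2; passing a noisy `grad` turns the same loop into a stochastic proximal-gradient iteration (which would then typically require decaying step sizes or averaging).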