14:00 — Single-loop stochastic variance-reduced method for variational inequalities

In this talk, we consider variational inequality (VI) problems. Within the class of monotone VIs with a finite-sum structure, we propose a novel single-loop stochastic variance-reduced algorithm equipped with a Bregman distance function, and we establish an optimal convergence guarantee for it. Furthermore, we investigate a structured class of non-monotone problems exhibiting weak Minty solutions and discuss the convergence rate of our method in that setting. Numerical experiments will be presented to showcase the performance of the proposed algorithm compared with state-of-the-art methods.
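
To give a rough idea of the shape of such a scheme, here is a minimal Python sketch of a loopless (single-loop) variance-reduced mirror-prox-style step for a finite-sum monotone operator $F(z) = \frac{1}{n}\sum_i F_i(z)$; the toy bilinear problem, the refresh probability `p`, the stepsize, and the Euclidean Bregman kernel are illustrative assumptions, not the algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum monotone VI: F(z) = (1/n) sum_i F_i(z), built from the
# bilinear saddle problems min_x max_y x^T A_i y for random matrices A_i.
n, d = 20, 5
A = rng.normal(size=(n, d, d))

def F_i(i, z):
    x, y = z[:d], z[d:]
    return np.concatenate([A[i] @ y, -A[i].T @ x])

def F_full(z):
    return np.mean([F_i(i, z) for i in range(n)], axis=0)

def bregman_prox(z, g, step):
    # Euclidean Bregman kernel h(z) = 0.5 ||z||^2: the prox step reduces to a
    # plain gradient step; other kernels would change this map.
    return z - step * g

z = rng.normal(size=2 * d)
g = F_full(z)                     # initial full-operator evaluation
p, step = 0.1, 0.05               # refresh probability and stepsize (assumed)

for k in range(2000):
    z_half = bregman_prox(z, g, step)                # extrapolation step
    i = rng.integers(n)
    g_half = g + F_i(i, z_half) - F_i(i, z)          # recursive VR estimate of F(z_half)
    z_new = bregman_prox(z, g_half, step)            # mirror-prox-style update
    if rng.random() < p:
        g = F_full(z_new)                            # occasional (loopless) full refresh
    else:
        j = rng.integers(n)
        g = g_half + F_i(j, z_new) - F_i(j, z_half)  # cheap incremental update
    z = z_new

print("operator residual ||F(z)||:", np.linalg.norm(F_full(z)))
```

The single-loop structure shows up in the probabilistic full refresh: instead of a nested inner loop over epochs, the full operator is re-evaluated only with small probability `p` per iteration.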

14:30 — ** CANCELLED ** Tikhonov-regularized exterior penalty dynamics for constrained variational inequalities

Solving equilibrium problems under constraints is an important task in optimization and optimal control. In this context, a key practical challenge is the efficient incorporation of the constraints. We develop a continuous-time method for solving constrained variational inequalities based on a new penalty-regulated dynamical system in a general, potentially infinite-dimensional Hilbert space. To obtain strong convergence of the generated trajectory, we incorporate an explicit Tikhonov regularization parameter into the dynamics, leading to a class of time-varying monotone inclusion problems featuring multiscale aspects. Besides strong convergence, we illustrate the practical efficiency of our method in solving constrained min-max problems.
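
To make the multiscale structure concrete, the following is a hedged forward-Euler sketch of dynamics of the form $\dot z(t) = -\big(F(z(t)) + \beta(t)\,\nabla\psi(z(t)) + \varepsilon(t)\,z(t)\big)$, with a growing exterior penalty parameter $\beta(t)$ and a slowly vanishing Tikhonov parameter $\varepsilon(t)$; the toy operator, the penalty, and the schedules are assumptions for illustration, not the system analyzed in the talk.

```python
import numpy as np

# Toy monotone VI on R^2: F(z) = B z, with positive-definite symmetric part.
B = np.array([[1.0, 2.0],
              [-2.0, 1.0]])
F = lambda z: B @ z

# Constraint z1 + z2 >= 1, via the exterior penalty
# psi(z) = 0.5 * max(0, 1 - z1 - z2)^2.
def grad_penalty(z):
    return -max(1.0 - z[0] - z[1], 0.0) * np.array([1.0, 1.0])

z = np.array([3.0, -1.0])
h = 1e-3                                   # forward-Euler step size
for k in range(1, 100_000):
    t = k * h
    beta = 1.0 + t                         # exterior penalty parameter -> infinity
    eps = 1.0 / (1.0 + t)                  # Tikhonov parameter -> 0 (slowly)
    z = z - h * (F(z) + beta * grad_penalty(z) + eps * z)

# For this toy problem the constrained VI solution is (-0.5, 1.5).
print("limit point ≈", z)
print("constraint violation:", max(1.0 - z[0] - z[1], 0.0))
```

The two time scales are visible in the schedules: $\beta(t)$ tightens the penalty while $\varepsilon(t)$ decays slowly enough to select a particular (min-norm-type) solution, which is what yields strong rather than weak convergence in the Hilbert-space setting.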

15:00 — Synchronous and asynchronous gradient-response schemes for computing quasi-Nash equilibria under uncertainty

In this work, we propose synchronous and asynchronous gradient-response schemes for solving stochastic nonconvex games. Our theoretical results show that, under reasonably mild conditions, both schemes generate sequences that converge to quasi-Nash equilibria almost surely. We further explore special cases for which we can obtain rate statements. Both the convergence and rate claims appear to be novel in the context of computing quasi-Nash equilibria. Numerical experiments are also provided. To the best of our knowledge, this is the first work on gradient-response schemes for stochastic nonconvex games.
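
The contrast between the two schemes can be seen on a toy game: in a synchronous (Jacobi-style) sweep all players step simultaneously from the same iterate, while in an asynchronous scheme a randomly activated player steps from the latest available information. The sketch below uses an assumed two-player quadratic game with a noisy gradient oracle; it illustrates the update pattern only and is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-player toy game: player i minimizes f_i(x_i, x_{-i}) with partial gradients
#   grad_1 f_1(x) = 2*x1 + x2 - 1,   grad_2 f_2(x) = 2*x2 + x1 + 1,
# so the Nash equilibrium of the noiseless game is (1, -1).
def noisy_grad(i, x):
    g = 2.0 * x[i] + x[1 - i] + (-1.0 if i == 0 else 1.0)
    return g + rng.normal(scale=0.1)            # stochastic first-order oracle

def synchronous(T=5000):
    x = np.zeros(2)
    for k in range(1, T):
        step = 1.0 / k                          # diminishing steps
        # Jacobi-style sweep: both players step from the same iterate.
        x = x - step * np.array([noisy_grad(0, x), noisy_grad(1, x)])
    return x

def asynchronous(T=5000):
    x = np.zeros(2)
    for k in range(1, T):
        step = 1.0 / k
        i = rng.integers(2)                     # a randomly activated player...
        x[i] -= step * noisy_grad(i, x)         # ...steps from the latest iterate
    return x

print("synchronous :", synchronous())
print("asynchronous:", asynchronous())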

15:30 — Stochastic Halpern iteration in non-Euclidean spaces and applications to synchronous $Q$-learning

We analyze the overall oracle complexity of the stochastic Halpern iteration with minibatching, aiming to approximate fixed points of nonexpansive and contractive operators in a finite-dimensional space with a non-Euclidean norm. We outline conditions for almost sure convergence of the iterates and demonstrate that, under mild assumptions such as uniformly bounded variance, the expected fixed-point residual decays at a rate of $\tilde{O}(1/n)$. When approximating a fixed point within a tolerance $\varepsilon>0$, our method exhibits an overall oracle complexity of $\tilde{O}(\varepsilon^{-5})$, surpassing the $\tilde{O}(\varepsilon^{-6})$ rate of the stochastic Krasnosel'skii-Mann iteration. Furthermore, we establish a lower bound of $\tilde{\Omega}(\varepsilon^{-3})$, which applies to a wide range of algorithms, including averaged iterations such as the Krasnosel'skii-Mann and Halpern schemes. As an application, we propose new algorithms for synchronous Q-learning with average and discounted rewards. In particular, for average reward, our method improves upon the best-known sample complexity in the literature.
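
As a rough illustration of the iteration itself, the following Python sketch runs the anchored update $x_{k+1} = \lambda_k x_0 + (1-\lambda_k)\,\hat T_k(x_k)$ with a growing minibatch on a toy nonexpansive map (a planar rotation, whose unique fixed point is the origin); the batch schedule and the noise model are assumptions for illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonexpansive map: a planar rotation, whose unique fixed point is the origin.
theta = 0.5
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def noisy_T(x, batch):
    # Stochastic oracle: T(x) plus zero-mean noise, averaged over a minibatch.
    noise = rng.normal(scale=1.0, size=(batch, 2)).mean(axis=0)
    return T @ x + noise

x0 = np.array([5.0, 3.0])
x = x0.copy()
for k in range(500):
    lam = 1.0 / (k + 2)                        # classical Halpern anchoring weights
    x = lam * x0 + (1.0 - lam) * noisy_T(x, batch=k + 1)   # growing minibatch

print("fixed-point residual ||x - T(x)||:", np.linalg.norm(x - T @ x))
```

The anchoring term $\lambda_k x_0$ is what distinguishes Halpern from Krasnosel'skii-Mann averaging, and the minibatch averaging inside `noisy_T` is what tames the oracle variance enough to obtain residual decay.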