1 — 08:30 — On projections onto Parametric Polyhedral Sets
In this presentation we investigate Lipschitz continuity of the projection of a given point onto a moving polyhedron (a parametric polyhedral set), in the case where the parameter enters both the left- and right-hand sides of the defining linear system of inequalities. The problem can be cast equivalently as a parametric variational inequality and related to properties of the graphical subdifferential mapping. Based on this result, we discuss sufficient conditions for Lipschitz continuity of the projection onto a moving polyhedron. We show that, whenever an equivalent representation of the moving polyhedron exists that is stable, the selected regularity conditions suffice to ensure local Lipschitz continuity of the projection onto the moving polyhedron.
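As a minimal illustration of the setting (not the speaker's result), consider the simplest moving polyhedron: a halfspace $\{z : \langle a, z \rangle \le b\}$ whose right-hand side $b$ plays the role of the parameter. Here the projection has a closed form and is Lipschitz in $b$ with constant $1/\|a\|$. The point, normal, and offsets below are invented for illustration:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {z : <a, z> <= b} (closed-form projection)."""
    violation = max(0.0, a @ x - b)
    return x - (violation / (a @ a)) * a

a = np.array([1.0, 1.0])            # fixed normal vector
x = np.array([2.0, 2.0])            # point to project (outside for both offsets below)
p1 = project_halfspace(x, a, b=1.0)   # -> [0.5, 0.5]
p2 = project_halfspace(x, a, b=1.1)

# The projection is Lipschitz in the moving right-hand side b, with constant 1/||a||:
lipschitz_ok = np.linalg.norm(p1 - p2) <= abs(1.0 - 1.1) / np.linalg.norm(a) + 1e-12
print(p1, lipschitz_ok)
```

For a general polyhedron with a parameter in the left-hand side as well, no such closed form exists, which is where the stability and regularity conditions of the talk come in.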
2 — 09:00 — Douglas-Rachford splitting algorithm with line search
The Douglas-Rachford (DR) splitting algorithm is a classical splitting algorithm for finding a zero of the sum of two maximal monotone operators. The choice of step size is of great significance for the convergence speed of the algorithm, and a suitable line search criterion can effectively accelerate it.
We propose a DR splitting algorithm with line search, and prove the global convergence of the sequence generated by the algorithm, as well as a sublinear convergence rate, in the ergodic sense, of the corresponding function values.
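For reference, the classical DR iteration (with a fixed step size; the line search of the talk is not shown) can be sketched on a small problem whose solution is known in closed form. The problem data below are invented for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Minimize f(x) + g(x) with f = ||x||_1 and g = 0.5*||x - b||^2;
# the minimizer is the soft-thresholding of b, here [2, 0, 0].
b = np.array([3.0, -0.5, 1.0])
t = 1.0                        # fixed step size; the talk replaces this with a line search
z = np.zeros_like(b)
for _ in range(200):
    x = soft_threshold(z, t)                      # x = prox_{tf}(z)
    z = z + (2 * x - z + t * b) / (1 + t) - x     # z += prox_{tg}(2x - z) - x
print(x)   # -> approximately [2. 0. 0.]
```

The iterate `x` converges to the minimizer of $f + g$ even though neither function is smooth, which is the appeal of splitting methods.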
3 — 09:30 — Primal-Dual Methods with Adjoint Mismatch - Existence of Stationary Points, Convergence and Error Bounds
The Chambolle-Pock method and the primal-dual Douglas-Rachford method are well-known standard methods to solve saddle-point problems of the form $$\min_x \max_y G(x) + \langle Ax, y \rangle - F^*(y).$$ However, in practical applications like computed tomography, the adjoint operator is often replaced by a computationally more efficient approximation. This introduces an adjoint mismatch into the algorithms: $A^*$ is replaced by some linear operator $V^*$, which can lead to non-convergence in practice.
In this talk, we analyze the convergence properties of the Chambolle-Pock method as well as the primal-dual Douglas-Rachford method under the presence of an adjoint mismatch and prove conditions under which the existence of a solution can still be guaranteed. We explain the difficulties in finding appropriate step sizes and present methods to calculate them. Furthermore, we discuss how the adjoint mismatch affects the implementation of the algorithm. Additionally, we show that we have to solve a variational inclusion instead of the former saddle-point problem, but we are able to present error bounds on the primal solution for both algorithms.
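To make the mismatch concrete, here is a small numpy sketch of the Chambolle-Pock iteration for a regularized least-squares saddle-point problem, with $A^*$ replaced by a perturbed operator $V^*$; the problem data, step sizes, and perturbation are invented for illustration. For a small mismatch the iteration in this example still converges, but to a stationary point of the perturbed inclusion $0 = \alpha x + V^*(Ax - b)$ rather than of the original problem:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
alpha = 0.5
V = A.T + 0.01 * rng.standard_normal(A.T.shape)   # perturbed "adjoint": the mismatch

# Chambolle-Pock for min_x max_y (alpha/2)||x||^2 + <Ax, y> - <b, y> - 0.5*||y||^2,
# i.e. G(x) = (alpha/2)||x||^2 and F*(y) = <b, y> + 0.5*||y||^2,
# with A.T replaced by the mismatched V in the primal update.
tau = sigma = 0.3
x = np.zeros(2); x_bar = x.copy(); y = np.zeros(3)
for _ in range(50000):
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1 + sigma)   # prox of sigma*F*
    x_new = (x - tau * (V @ y)) / (1 + tau * alpha)           # prox of tau*G, V instead of A.T
    x_bar = 2 * x_new - x
    x = x_new

# The limit solves the perturbed stationarity condition 0 = alpha*x + V(Ax - b):
x_star = np.linalg.solve(V @ A + alpha * np.eye(2), V @ b)
print(np.allclose(x, x_star, atol=1e-5))   # -> True
```

The talk's contribution is precisely the analysis of when such a limit exists, how to choose the step sizes, and how far it lies from the true primal solution.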
4 — 10:00 — Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations
Various first-order methods for solving Variational Inequalities (VI) produce the same continuous-time ordinary differential equation (ODE) when derived naively. However, these methods exhibit different convergence behaviors, even in simple bilinear games. Consequently, while the ODE approach has been influential in analyzing methods for minimizing real-valued functions, it has not had a comparable impact in VI optimization.
We utilize a framework from fluid dynamics, called High-Resolution Differential Equations (HRDEs), to formulate ODE models for VI optimization methods. Importantly, these HRDEs differ across methods. Furthermore, in bilinear games, the convergence properties of the HRDEs align with the characteristics of the corresponding discrete methods. We show that the HRDE for Optimistic Gradient Descent Ascent (OGDA) has last-iterate convergence for general monotone variational inequalities. Lastly, we establish convergence rates for the best-iterate convergence of OGDA, based solely on the first-order smoothness of the monotone operator.
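The discrepancy the naive ODE misses can be seen numerically on the bilinear game $\min_x \max_y xy$, where plain gradient descent ascent (GDA) spirals outward while OGDA has last-iterate convergence. The step size and horizons below are chosen for illustration:

```python
import numpy as np

def F(z):
    """Monotone operator of the bilinear game min_x max_y x*y: F(x, y) = (y, -x)."""
    x, y = z
    return np.array([y, -x])

eta = 0.1
z0 = np.array([1.0, 1.0])

# Plain GDA: z_{k+1} = z_k - eta*F(z_k) -- the iterate norm grows on bilinear games.
z = z0.copy()
for _ in range(100):
    z = z - eta * F(z)
gda_norm = np.linalg.norm(z)

# OGDA: z_{k+1} = z_k - 2*eta*F(z_k) + eta*F(z_{k-1}) -- last-iterate convergence.
z, z_prev = z0.copy(), z0.copy()
for _ in range(5000):
    z, z_prev = z - 2 * eta * F(z) + eta * F(z_prev), z
ogda_norm = np.linalg.norm(z)

print(gda_norm > np.linalg.norm(z0), ogda_norm < 1e-4)   # -> True True
```

Since both updates reduce to the same ODE as the step size vanishes, distinguishing them in continuous time requires the higher-order correction terms that the HRDE models retain.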