1. 08:30 — A stochastic moving ball approximation method for smooth convex constrained minimization

  • Ion Necoara, National University of Science and Technology Politehnica Bucharest

In this talk, we consider constrained optimization problems with convex, smooth objective and constraints. We propose a new stochastic gradient algorithm, called the Stochastic Moving Ball Approximation (SMBA) method, to solve this class of problems: at each iteration we first take a gradient step on the objective and then project onto a ball approximation of a single, randomly chosen constraint. The computational simplicity of SMBA, which uses only first-order information and considers only one constraint at a time, makes it suitable for large-scale problems with many functional constraints. Under basic assumptions on the problem, we provide a convergence analysis that yields new rates for both the optimality and the feasibility criteria evaluated at an average point. Our convergence proofs are novel, since we must deal properly with infeasible iterates and with quadratic upper approximations of the constraints that may yield empty balls. We derive convergence rates when the objective function is convex or strongly convex. Preliminary numerical experiments on convex quadratically constrained quadratic programs demonstrate the viability and performance of our method compared to existing state-of-the-art optimization methods and software.
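As a rough illustration of the iteration described above, here is a minimal NumPy sketch of one SMBA-style step. The function names, the assumption that each constraint's gradient-Lipschitz constant L_j is known, and the fallback in the empty-ball case are illustrative choices, not taken from the paper.

```python
import numpy as np

def smba_step(x, grad_f, g, grad_g, L, alpha, rng):
    """One SMBA-style iteration: objective gradient step, then projection
    onto the ball approximation of a single random constraint.

    g, grad_g, L are lists over the m constraints g_j(x) <= 0; L[j] is the
    gradient-Lipschitz constant of g_j (assumed known here).
    """
    # 1) Gradient step on the objective.
    y = x - alpha * grad_f(x)

    # 2) Pick one constraint at random; its quadratic (Lipschitz) upper model
    #    g_j(x) + <grad g_j(x), z - x> + (L_j/2) ||z - x||^2 <= 0
    #    describes a Euclidean ball with the center and radius below.
    j = rng.integers(len(g))
    gj, dgj, Lj = g[j](x), grad_g[j](x), L[j]
    center = x - dgj / Lj
    rad_sq = (np.dot(dgj, dgj) - 2.0 * Lj * gj) / Lj**2

    if rad_sq < 0.0:
        # Empty ball: the upper model has no zero-sublevel set. The paper
        # treats this case explicitly; keeping y is a placeholder choice.
        return y

    # 3) Project y onto the ball.
    d = y - center
    dist = np.linalg.norm(d)
    r = np.sqrt(rad_sq)
    return y if dist <= r else center + (r / dist) * d
```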

2. 09:00 — Randomized Algorithms for Variational Inequality Problems

We consider a variational inequality problem arising from a game among multiple agents, where each agent aims to minimize its own cost function subject to a constraint set represented as the intersection of a (possibly infinite) number of convex functional level sets. A direct projection-based approach or Lagrangian-based techniques for such a problem can be computationally expensive, if not impossible, to implement. To deal with the problem, we consider randomized methods that avoid the projection step onto the whole constraint set by employing random feasibility updates. In particular, we propose and analyze such random methods for solving VIs based on the projection method, the Korpelevich method, and the Popov method. We establish the almost sure convergence of the methods and also provide convergence rate guarantees. We illustrate the performance of the methods in simulations for two-agent games.
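The sketch below conveys the flavor of the simplest of the three variants, the projection method with a random feasibility update; the Korpelevich and Popov variants modify the operator step. The Polyak-style halfspace projection and all names are illustrative assumptions, not the talk's exact scheme.

```python
import numpy as np

def randomized_vi_method(F, c, grad_c, x0, gamma, n_iters, rng):
    """Projection-method variant with random feasibility updates for
    VI(F, X), where X = {x : c_i(x) <= 0 for all i}."""
    x = x0.copy()
    for _ in range(n_iters):
        # Operator step; no projection onto the full constraint set X.
        y = x - gamma * F(x)
        # Random feasibility update: Polyak-style projection onto the
        # halfspace cut by one randomly chosen violated constraint.
        i = rng.integers(len(c))
        ci, di = c[i](y), grad_c[i](y)
        if ci > 0.0:
            y = y - (ci / np.dot(di, di)) * di
        x = y
    return x
```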

3. 09:30 — ** CANCELLED ** Retrospective Approximation for Stochastic Constrained Problems Using Sequential Quadratic Programming

Sequential Quadratic Programming (SQP) is one of the state-of-the-art algorithms for solving deterministic constrained nonlinear optimization problems. In recent years, the framework has been extended to equality- and inequality-constrained problems with stochastic objective functions. In response to the challenges posed by stochasticity, various schemes have been incorporated into SQP algorithms to adapt key parameters, such as the step size and the merit parameter, from the deterministic to the stochastic setting; these include stochastic line search, Lipschitz constant estimation, and Hessian averaging. In our work, we instead leverage SQP algorithms within the framework of Retrospective Approximation, which solves a stochastic constrained problem through a sequence of subsampled deterministic subproblems. Each deterministic subproblem is solved not to optimality but to a specified accuracy, with the sample size increasing across subproblems. By decoupling the stochasticity from the SQP algorithm, the framework enables the use of legacy deterministic solvers and reduces the hyper-parameter tuning required in stochastic settings. We provide theoretical convergence requirements on the growth of the subsampling batch size and on the solution accuracy required of the deterministic subproblems, and we conduct numerical experiments showcasing legacy deterministic solvers on stochastic constrained problems.
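A minimal sketch of the outer loop just described, under an assumed geometric schedule for the sample size and the subproblem tolerance; solve_sqp and sample are hypothetical placeholders for a legacy deterministic SQP solver and a scenario sampler.

```python
def retrospective_approximation(solve_sqp, sample, x0, n_stages,
                                m0=64, eps0=1e-1, growth=2.0, decay=0.5):
    """Outer loop of Retrospective Approximation (illustrative schedule).

    solve_sqp(x, batch, tol): a legacy deterministic SQP solver, warm-started
        at x, run on the subsampled problem defined by `batch` only until its
        KKT residual drops below `tol` (inexact, not to optimality).
    sample(m): draws m scenarios defining one deterministic subproblem.
    """
    x, m, eps = x0, m0, eps0
    for _ in range(n_stages):
        batch = sample(m)             # fix one subsampled deterministic problem
        x = solve_sqp(x, batch, eps)  # solve it inexactly, warm-started
        m = int(growth * m)           # increase the sample size ...
        eps = decay * eps             # ... and tighten the required accuracy
    return x
```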

4. 10:00 — A Double Stepsize Stochastic SQP Method with Complexity Guarantees

Stochastic gradient algorithms play a key role in the training of large-scale machine learning models. These algorithms readily extend to problems with simple constraints, such as when projections onto the feasible region can be computed efficiently. Recently, there has been a surge of interest in stochastic gradient-like algorithms for stochastic optimization problems with nonlinear and nonconvex constraints, driven by modern machine learning applications such as fair machine learning and physics-informed neural networks. In this talk, we present a new stochastic Sequential Quadratic Programming (SQP) method for deterministically constrained stochastic optimization. Unlike previous work, the algorithm uses a different step size for each component of the orthogonal decomposition of the SQP step, which enables faster convergence with respect to the constraint violation and an improved worst-case complexity result. Beyond improving the worst-case complexity, this approach also affords significant flexibility in computing step sizes, such as employing a (safeguarded) line search on the constraint violation. Some preliminary numerical experiments will also be presented.
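To make the orthogonal decomposition concrete, the sketch below takes one such step for equality constraints under simplifying assumptions introduced here (identity Hessian model, full-row-rank constraint Jacobian), giving the normal and tangential components the separate step sizes gamma and beta. It is an illustrative reading, not the authors' algorithm.

```python
import numpy as np

def double_stepsize_sqp_step(x, g_stoch, c, J, beta, gamma):
    """One step of a double-stepsize stochastic SQP scheme for
    min f(x) s.t. c(x) = 0, with an identity Hessian model and a
    full-row-rank constraint Jacobian J at x.

    g_stoch: stochastic gradient estimate of f at x
    c, J   : constraint values c(x) and Jacobian of c at x
    """
    # Normal component: minimum-norm step with c(x) + J v = 0,
    # i.e. v = -J^T (J J^T)^{-1} c, which drives down constraint violation.
    JJt = J @ J.T
    v = -J.T @ np.linalg.solve(JJt, c)
    # Tangential component: negative stochastic gradient projected onto
    # the null space of J (orthogonal to the normal component).
    u = -(g_stoch - J.T @ np.linalg.solve(JJt, J @ g_stoch))
    # Distinct step sizes for the two orthogonal components.
    return x + gamma * v + beta * u
```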