1. 14:00 — The inexact power augmented Lagrangian method for constrained nonconvex optimization

In this talk, we analyze an inexact augmented Lagrangian method with an unconventional augmenting term for solving a general class of nonconvex problems with nonlinear constraints. It is shown that this method achieves a better iteration complexity in terms of the constraint violation than its classical counterpart. Additionally, we provide a total complexity analysis when using an accelerated first-order method to solve the Hölder-smooth inner problems. This result suggests that the optimal power of the augmenting term depends on the desired tolerance for the suboptimality and constraint violation. We also discuss the convex setting, in which the method corresponds to a higher-order proximal point method, yielding a fast global sublinear rate and a local superlinear rate. Finally, numerical experiments on problems of practical interest validate the merits of our power augmented Lagrangian method.
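To make the idea concrete, the following minimal Python sketch runs an inexact augmented Lagrangian loop whose augmenting term is $(\rho/p)\,\|c(x)\|^p$, so that $p = 2$ recovers the classical quadratic case. Everything here (the name `power_alm`, the plain gradient-descent inner solver, and all step sizes) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def power_alm(grad_f, c, jac_c, x0, p=2.0, rho=10.0,
              outer_iters=50, inner_iters=500, inner_lr=1e-2, tol=1e-6):
    """Sketch of an inexact augmented Lagrangian method with a power
    augmenting term: minimize f(x) subject to c(x) = 0 via
        L(x, lam) = f(x) + <lam, c(x)> + (rho / p) * ||c(x)||^p.
    The inner subproblem is solved inexactly by gradient descent."""
    x, lam = x0.astype(float), np.zeros_like(c(x0))
    for _ in range(outer_iters):
        for _ in range(inner_iters):            # inexact inner solve
            cx = c(x)
            nrm = np.linalg.norm(cx)
            # gradient of (rho/p)*||c||^p is rho * ||c||^(p-2) * J(x)^T c(x)
            pen = rho * nrm ** (p - 2) * jac_c(x).T @ cx if nrm > 0 else 0.0
            g = grad_f(x) + jac_c(x).T @ lam + pen
            if np.linalg.norm(g) < tol:
                break
            x = x - inner_lr * g
        cx = c(x)
        nrm = np.linalg.norm(cx)
        # first-order multiplier update induced by the power term
        if nrm > 0:
            lam = lam + rho * nrm ** (p - 2) * cx
    return x, lam
```

For instance, minimizing $\|x\|^2$ subject to $x_1 + x_2 = 1$ with a cubic augmenting term amounts to `power_alm(lambda x: 2 * x, lambda x: np.array([x[0] + x[1] - 1]), lambda x: np.array([[1.0, 1.0]]), np.zeros(2), p=3.0)`.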

2. 14:30 — A quasi-Newton method for nonsmooth, nonconvex, stochastic optimization

We consider the minimization of a Lipschitz continuous and expectation-valued function over a closed and convex set. Our focus lies on obtaining asymptotic guarantees as well as rate and complexity guarantees for computing an approximate stationary point (in the Clarke sense) via zeroth-order schemes. We adopt a randomized-smoothing-based approach reliant on minimizing a smooth approximation of our objective function. In this setting, we develop and analyze a zeroth-order stochastic quasi-Newton scheme reliant on a combination of randomized and Moreau smoothing, for which iteration and sample complexities are derived.
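For intuition, here is a minimal sketch of this kind of scheme, our own illustrative construction rather than the authors' method: a sphere-sampling randomized-smoothing gradient estimator driving a standard BFGS inverse-Hessian update. The Moreau-smoothing component and the projection onto the feasible set are omitted for brevity, and all names and parameters are assumptions.

```python
import numpy as np

def zo_gradient(f, x, delta, batch, rng):
    """Randomized-smoothing gradient estimator from function values only:
    approximates the gradient of f_delta(x) = E_u[f(x + delta*u)],
    with u uniform on the unit sphere."""
    d, g = x.size, np.zeros_like(x)
    for _ in range(batch):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return d * g / batch

def zo_quasi_newton(f, x0, steps=200, lr=0.1, delta=1e-3, batch=20, seed=0):
    """Zeroth-order quasi-Newton sketch: BFGS updates driven by
    smoothed zeroth-order gradient estimates."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    H = np.eye(x.size)                      # inverse-Hessian approximation
    g = zo_gradient(f, x, delta, batch, rng)
    for _ in range(steps):
        x_new = x - lr * H @ g
        g_new = zo_gradient(f, x_new, delta, batch, rng)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-10:                      # curvature check; skip update otherwise
            rho = 1.0 / sy
            V = np.eye(x.size) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```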

3. 15:00 — Zeroth-order federated methods for stochastic MPECs and nondifferentiable nonconvex hierarchical optimization

Motivated by the emergence of federated learning (FL), we design and analyze communication-efficient decentralized computational methods for addressing four broadly applicable but hitherto unstudied problem classes in FL: (i) nondifferentiable nonconvex optimization; (ii) bilevel optimization; (iii) minimax problems; and (iv) two-stage stochastic mathematical programs with equilibrium constraints (2s-SMPECs). Notably, in an implicit sense, (ii), (iii), and (iv) are instances of (i). However, these hierarchical problems are often complicated by the absence of a closed-form expression for the implicit objective function. Research on these problems has been limited and afflicted by reliance on strong assumptions, including the need for differentiability of the implicit function and the absence of constraints in the lower-level problem, among others. We address these shortcomings by making the following contributions. In (i), by leveraging convolution-based smoothing and Clarke’s subdifferential calculus, we devise a randomized smoothing-enabled zeroth-order FL method and derive communication and iteration complexity guarantees for computing an approximate Clarke stationary point. To contend with (ii) and (iii), we devise a unifying randomized implicit zeroth-order FL framework, equipped with explicit communication and iteration complexities. Importantly, our method utilizes delays during local steps to skip calls to the inexact lower-level FL oracle, resulting in a significant reduction in communication overhead. In (iv), we devise an inexact implicit variant of the method in (i). Remarkably, this method achieves a total communication complexity matching that of single-level nonsmooth nonconvex optimization in FL. This appears to be the first time that 2s-SMPECs are provably addressed in FL. We empirically validate the theoretical findings on instances of federated nonsmooth and hierarchical problems, including the training of ReLU neural networks, hyperparameter learning, fair classification, and a Stackelberg-Nash-Cournot equilibrium-seeking problem.
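As a toy illustration of ingredient (i) only, the sketch below runs a FedAvg-style loop in which each client takes local zeroth-order steps based on randomized smoothing between communication rounds. The hierarchical machinery of (ii)-(iv) and the delay mechanism are not modeled, and all names and constants are our assumptions.

```python
import numpy as np

def zo_grad(f, x, delta, rng):
    """Single-sample randomized-smoothing gradient estimate of f at x."""
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def fed_zo(client_fs, x0, rounds=50, local_steps=10, lr=0.05, delta=1e-3, seed=0):
    """FedAvg-style zeroth-order loop: clients take communication-free
    local steps on their own losses; the server averages the iterates."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(rounds):                       # one communication round
        updates = []
        for f in client_fs:                       # in practice, in parallel
            xi = x.copy()
            for _ in range(local_steps):          # local zeroth-order steps
                xi = xi - lr * zo_grad(f, xi, delta, rng)
            updates.append(xi)
        x = np.mean(updates, axis=0)              # server aggregation
    return x
```

Here the larger `local_steps` is, the fewer communication rounds are needed per local computation, which is the communication-efficiency lever the abstract refers to.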

4. 15:30 — Hessian barrier algorithms for non-convex conic optimization

A key problem in mathematical imaging, signal processing and computational statistics is the minimization of non-convex objective functions that may be non-differentiable at the relative boundary of the feasible set. This paper proposes a new family of first- and second-order interior-point methods for non-convex optimization problems with linear and conic constraints, combining logarithmically homogeneous barriers with quadratic and cubic regularization, respectively. Our approach is based on a potential-reduction mechanism and, under Lipschitz continuity of the corresponding derivative with respect to the local barrier-induced norm, attains suitably defined approximate first- or second-order KKT points with worst-case iteration complexity $O(\varepsilon^{-2})$ and $O(\varepsilon^{-3/2})$, respectively. Based on these findings, we develop new path-following schemes attaining the same complexity, up to constant factors. These complexity bounds are known to be optimal in the unconstrained case, and our work shows that they remain valid upper bounds in the presence of complicated constraints as well. To the best of our knowledge, this work is the first to achieve these worst-case complexity bounds under such weak conditions for general conic constrained non-convex optimization problems.
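As a toy illustration of the first-order variant, the sketch below handles the simplest conic case, $x \ge 0$, with the log barrier $F(x) = -\sum_i \log x_i$: each step preconditions the gradient of $f + \mu F$ by the barrier Hessian, i.e., it is a gradient step measured in the local barrier-induced norm. This is a minimal sketch under our own simplifications (a fixed schedule of barrier parameters and a simple backtracking feasibility safeguard), not the paper's algorithm.

```python
import numpy as np

def hessian_barrier_step(grad_f, x, mu, step=0.5):
    """One first-order step for min f(x) s.t. x >= 0 with the log barrier
    F(x) = -sum(log x): precondition the gradient of f + mu*F by the
    inverse barrier Hessian H^{-1} = diag(x^2)."""
    g = grad_f(x) - mu / x            # gradient of f(x) + mu * F(x)
    d = -(x ** 2) * g                 # direction in the barrier-induced norm
    t = step
    while np.any(x + t * d <= 0):     # backtrack to stay strictly feasible
        t *= 0.5
    return x + t * d

def barrier_solve(grad_f, x0, mus=(1e-1, 1e-2, 1e-3), iters=200):
    """Crude path-following: run barrier steps for a decreasing
    sequence of barrier parameters mu."""
    x = x0.astype(float)
    for mu in mus:
        for _ in range(iters):
            x = hessian_barrier_step(grad_f, x, mu)
    return x
```

The diagonal preconditioner $\operatorname{diag}(x^2)$ is exactly the inverse Hessian of the log barrier, which is what keeps the step well-scaled near the boundary of the cone.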