1. 16:20 — Numerical design of optimized first-order methods

Large-scale optimization is central to many engineering applications such as machine learning, signal processing, and control. First-order methods are a popular choice for solving such problems thanks to their low per-iteration computational cost. However, in many applications, such as machine learning, their performance depends heavily on the choice of step sizes, and tuning the step sizes of first-order methods is a notoriously hard nonconvex problem. In this talk, we present several approaches to optimizing the step sizes of first-order optimization methods, relying on recent developments in the Performance Estimation framework. We compare our results to existing optimized step sizes and provide new ones, namely numerically optimized step sizes for cyclic coordinate descent and inexact gradient descent, leading to accelerated convergence rates with and without momentum.
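
To make the tuned object concrete, here is a minimal sketch (our illustration, not the speakers' code; the function and schedule names are hypothetical) of gradient descent driven by a per-iteration step-size schedule, which is exactly the kind of schedule such approaches optimize numerically:

    import numpy as np

    def gradient_descent(grad, x0, step_sizes):
        """Run gradient descent with a prescribed per-iteration step-size schedule."""
        x = x0
        for h in step_sizes:  # the schedule (h_0, ..., h_{N-1}) is what gets optimized
            x = x - h * grad(x)
        return x

    # Example on f(x) = 0.5 * ||x||^2, whose gradient is x. The schedule is hand-picked
    # here; the talk instead optimizes it numerically over a whole function class.
    x_final = gradient_descent(lambda x: x, np.array([1.0, -2.0]), step_sizes=[1.5, 0.8, 1.2])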

2. 16:50 — Analysis of Optimization Methods via Non-convex Performance Estimation

The Performance Estimation Problem (PEP) framework numerically computes the exact worst-case performance of a given optimization method on a given function class. It has been used to determine the convergence rate of numerous first-order methods and also to design optimal methods. The convex formulation of Performance Estimation Problems can be solved efficiently, but it restricts the analysis to certain types of methods. Recently, some works have proposed tackling a non-convex formulation of Performance Estimation Problems, which allows more flexibility in the analysis at a higher computational cost. In this work, we exploit the non-convex Performance Estimation Problem to analyze new optimization schemes that were out of the scope of the classical convex framework, such as zeroth-order, second-order, and adaptive methods.
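
For concreteness, a standard instance of the PEP (notation ours) casts the worst case of N steps of gradient descent with step size h over the class of L-smooth convex functions as

    \max_{f,\, x_0} \; f(x_N) - f(x^\star)
    \quad \text{s.t.} \quad f \in \mathcal{F}_L, \;\;
    \|x_0 - x^\star\| \le R, \;\;
    x_{k+1} = x_k - h \nabla f(x_k), \; k = 0, \dots, N-1,

where x^\star is a minimizer of f. The convex formulation reduces this infinite-dimensional problem to a semidefinite program via interpolation conditions, which is where the restriction to certain method types comes from.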

3. 17:20 — Stochastic ISTA/FISTA Adaptive Step Search Algorithms for Convex Composite Optimization

We develop and analyze stochastic variants of the ISTA and full-backtracking FISTA algorithms [Beck and Teboulle, 2009; Scheinberg et al., 2014] for composite optimization, without assuming that the stochastic gradient is an unbiased estimator. This work extends the analysis of inexact fixed-step ISTA/FISTA in [Schmidt et al., 2011] to the case of stochastic gradient estimates and an adaptive step-size parameter chosen by backtracking. It also extends the framework for analyzing stochastic line-search methods in [Cartis and Scheinberg, 2018] to the proximal gradient setting as well as to accelerated first-order methods.
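
As background, here is a minimal sketch (our illustration, with hypothetical names) of the deterministic ISTA-with-backtracking skeleton from [Beck and Teboulle, 2009] for F(x) = g(x) + lam * ||x||_1; the talk's stochastic variants replace grad_g with a possibly biased stochastic estimate while keeping an adaptive step-size test of this kind:

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista_backtracking(g, grad_g, x0, lam, L0=1.0, eta=2.0, n_iter=100):
        """ISTA with backtracking on F(x) = g(x) + lam * ||x||_1.

        The step size 1/L is shrunk (L grown by factor eta) until the
        quadratic upper-bound condition on g holds at the prox-gradient point.
        """
        x, L = x0, L0
        for _ in range(n_iter):
            gx, gradx = g(x), grad_g(x)
            while True:
                x_new = soft_threshold(x - gradx / L, lam / L)
                diff = x_new - x
                # Sufficient-decrease test from [Beck and Teboulle, 2009].
                if g(x_new) <= gx + gradx @ diff + 0.5 * L * (diff @ diff):
                    break
                L *= eta
            x = x_new
        return x

Soft-thresholding appears in place of a plain gradient step because it is the proximal operator of the l1 term; swapping in a different prox handles other composite regularizers.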