14:00 — ** CANCELLED ** A projected semismooth Newton method for a class of nonconvex composite programs with strong prox-regularity

This paper develops a Newton-type method for a class of nonconvex composite programs in which the nonsmooth part may itself be nonconvex. To handle the nonconvexity, we introduce a notion of strong prox-regularity, related to the single-valuedness and Lipschitz continuity of the associated proximal operator, and verify it for various classes of functions, including weakly convex functions, indicator functions of proximally smooth sets, and two specific sphere-related nonconvex nonsmooth functions. The resulting problem class covers smooth optimization problems on manifolds and certain composite optimization problems on manifolds; for the latter, the proposed algorithm is the first second-order-type method. Exploiting the semismoothness of the proximal operator, we design a projected semismooth Newton method that finds a root of the natural residual induced by the proximal gradient method. Because the feasible domain may be nonconvex, an extra projection is added to the usual semismooth Newton step, and new criteria are proposed for switching between the projected semismooth Newton step and the proximal step. Global convergence is established under strong prox-regularity, and local superlinear convergence under the BD regularity condition. Numerical experiments demonstrate the effectiveness of the proposed method compared with state-of-the-art alternatives.
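For readers unfamiliar with the natural residual mentioned in this abstract, here is a minimal sketch, assuming a quadratic smooth term and, for concreteness, the convex choice g = λ‖·‖₁; the talk's setting allows nonconvex g and projects onto a possibly nonconvex feasible domain, which this toy omits. The residual-decrease safeguard below is a crude stand-in for the paper's switching criteria, not the authors' actual rule.

```python
import numpy as np

# Sketch of the natural residual F(x) = x - prox_{t g}(x - t * grad f(x))
# for f(x) = 0.5 ||Ax - b||^2 and the illustrative convex choice
# g(x) = lam * ||x||_1 (the talk allows nonconvex g and adds a projection
# onto the feasible domain, both omitted here).

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam, t = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_g(z):
    # prox of t * lam * ||.||_1 is the soft-threshold operator
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def residual(x):
    return x - prox_g(x - t * grad_f(x))

x = np.zeros(10)
H = A.T @ A  # Hessian of f
for k in range(50):
    z = x - t * grad_f(x)
    d = (np.abs(z) > t * lam).astype(float)             # Clarke Jacobian element of prox_g
    J = np.eye(10) - np.diag(d) @ (np.eye(10) - t * H)  # generalized Jacobian of F
    step = np.linalg.solve(J + 1e-8 * np.eye(10), -residual(x))
    x_newton = x + step
    # Safeguard: accept the semismooth Newton step only if the residual
    # decreases; otherwise fall back to a proximal gradient step.
    if np.linalg.norm(residual(x_newton)) < np.linalg.norm(residual(x)):
        x = x_newton
    else:
        x = prox_g(x - t * grad_f(x))
    if np.linalg.norm(residual(x)) < 1e-10:
        break
```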

14:30 — Accelerated gradient descent: A guaranteed bound for a heuristic restart strategy

The $O(1/k^2)$ convergence rate in function value of accelerated gradient descent is optimal, but many modifications have been used to speed up convergence in practice. Among these are restarts, that is, restarting the algorithm with the current iterate taken as the new initial point. We focus on the adaptive restart techniques introduced by O'Donoghue and Candès, specifically their gradient restart strategy. While the gradient restart strategy is a heuristic in general, we prove that applying gradient restarts preserves, and in fact improves, the $O(1/k^2)$ bound, hence establishing function value convergence, for one-dimensional functions. Applications of our results to separable and nearly separable functions are presented.
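As a point of reference, here is a minimal sketch of accelerated gradient descent with the O'Donoghue and Candès gradient restart test; the ill-conditioned quadratic and step size are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Nesterov's accelerated gradient descent with the O'Donoghue-Candes
# gradient restart: reset the momentum whenever grad f(y)^T (x_{k+1} - x_k) > 0,
# i.e. the momentum direction points uphill.  f is an assumed toy quadratic.

Q = np.diag([1.0, 100.0])          # ill-conditioned quadratic f(x) = 0.5 x^T Q x
grad = lambda x: Q @ x
L = 100.0                          # Lipschitz constant of grad f

x = y = np.array([1.0, 1.0])
theta = 1.0
for k in range(500):
    x_new = y - grad(y) / L        # gradient step from the extrapolated point
    if grad(y) @ (x_new - x) > 0:  # gradient restart test
        theta, y = 1.0, x          # reset momentum, restart from x
        x_new = y - grad(y) / L
    theta_new = (1 + np.sqrt(1 + 4 * theta**2)) / 2
    y = x_new + (theta - 1) / theta_new * (x_new - x)  # momentum extrapolation
    x, theta = x_new, theta_new
```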

15:00 — Forward-Backward algorithms for weakly convex problems

  • Ewa Bednarczuk, Systems Research Institute, Polish Academy of Sciences

We investigate convergence properties of exact and inexact forward-backward algorithms for the minimisation of the sum of two weakly convex functions defined on a Hilbert space, one of which has a Lipschitz-continuous gradient. We show that the exact forward-backward algorithm converges strongly to a global solution, provided that the objective function satisfies a sharpness condition.

For the inexact forward-backward algorithm, the same assumptions ensure that the distance from the iterates to the solution set approaches a positive threshold depending on the accuracy level of the proximal computations.
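To fix notation, here is a minimal sketch of the exact forward-backward iteration under these assumptions; the 1-weakly convex penalty and least-squares term below are illustrative choices, not the speakers' experiments.

```python
import numpy as np

# Exact forward-backward iteration
#   x_{k+1} = prox_{gamma * g}(x_k - gamma * grad f(x_k))
# for smooth f with L-Lipschitz gradient and weakly convex g.  As an assumed
# 1-weakly-convex example take g(x) = ||x||_1 - 0.5 ||x||^2, whose prox has
# a closed form whenever gamma < 1.

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad f

def grad_f(x):                     # f(x) = 0.5 ||Ax - b||^2
    return A.T @ (A @ x - b)

def prox_g(z, gamma):
    # prox of gamma * (||.||_1 - 0.5 ||.||^2): soft-threshold, then rescale
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0) / (1.0 - gamma)

gamma = min(0.5, 1.0 / L)          # below both 1/L and the weak-convexity bound
x = np.zeros(12)
for k in range(200):
    x = prox_g(x - gamma * grad_f(x), gamma)
```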

As an application of the considered setting, we provide numerical experiments related to feasibility problems, source localisation and discrete tomography. The talk is based on joint work with Giovanni Bruccola, Gabriele Scrivanti and The Hung Tran.

15:30 — ** CANCELLED ** Zeroth-order and first-order primal-dual alternating proximal gradient algorithms for nonsmooth nonconvex minimax problems with coupled linear constraints

  • Zi Xu, Shanghai University

Nonconvex minimax problems have attracted wide attention in machine learning, signal processing and many other fields in recent years. In this talk, we propose a zeroth-order and a first-order primal-dual alternating proximal gradient (PDAPG) algorithm and a primal-dual proximal gradient (PDPG-L) algorithm for solving nonsmooth nonconvex-(strongly) concave and nonconvex-linear minimax problems with coupled linear constraints, respectively. Both algorithms are proved to reach an $\epsilon$-stationary point with guaranteed iteration complexity. To our knowledge, these are the first two algorithms with an iteration-complexity guarantee for solving nonconvex minimax problems with coupled linear constraints.
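Since PDAPG and PDPG-L are specified in the paper rather than in this abstract, the following is only a generic alternating proximal gradient descent-ascent sketch on an assumed toy saddle problem; it omits the coupled linear constraints and the dual updates that distinguish the talk's algorithms.

```python
import numpy as np

# Generic alternating (proximal) gradient descent-ascent for
#   min_x max_y  f(x, y) + g(x),
# shown only to fix notation; this is NOT the talk's PDAPG or PDPG-L.
# f(x, y) = x^T C y + 0.5||x||^2 - 0.5||y||^2 is an assumed toy saddle
# function, and g = ||.||_1 an illustrative nonsmooth term.

rng = np.random.default_rng(2)
C = rng.standard_normal((8, 8))

def grad_x(x, y):                 # gradient of f in x
    return C @ y + x

def grad_y(x, y):                 # gradient of f in y
    return C.T @ x - y

def prox_g(z, t):                 # prox of t * ||.||_1 (soft threshold)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, y = np.ones(8), np.ones(8)
tx, ty = 0.05, 0.05               # primal and dual step sizes (assumed)
for k in range(300):
    x = prox_g(x - tx * grad_x(x, y), tx)   # proximal gradient descent in x
    y = y + ty * grad_y(x, y)               # gradient ascent in y
```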