08:30 — Exotic error bounds, Karamata theory and convergence rate analysis

In this talk we give an overview of recent results on convergence rates of algorithms for convex feasibility problems. Our main tools are the framework of consistent error bound functions and Karamata theory. Karamata theory is a classical tool that remains relatively unknown within the optimization community, yet it is quite helpful for analyzing the asymptotic behavior of functions. With it, we obtain new concrete convergence rate results for problems satisfying exotic non-Hölderian error bounds, such as logarithmic and entropic error bounds. These exotic error bounds may appear, for instance, in problems involving exponential cones and other non-semialgebraic objects. Time permitting, we will discuss our recent efforts to extend these techniques to common fixed point problems.
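
To fix ideas, here is a minimal sketch in standard notation (ours, not the abstract's). For a feasibility problem over closed convex sets $C_1,\dots,C_m$ with intersection $C \neq \emptyset$, an error bound controls the distance to $C$ by the individual distances, typically for $x$ ranging over a bounded set; the Hölderian case corresponds to a power-type bounding function:

\[
\operatorname{dist}(x, C) \;\le\; \phi\Big(\max_{1 \le i \le m} \operatorname{dist}(x, C_i)\Big),
\qquad \phi(t) = \kappa\, t^{\gamma}, \quad \gamma \in (0,1].
\]

An exotic, non-Hölderian bound replaces the power function with something like the (illustrative) entropic choice $\phi(t) = -\kappa\, t \ln t$ for small $t > 0$, whose behavior near zero is not of power type; Karamata's theory of regularly varying functions is precisely the tool for translating such a $\phi$ into a convergence rate for, say, alternating projections.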

09:00 — DEFBAL: a connection between the ADMM and forward-backward methods

This talk will demonstrate that the two minimization steps of the standard alternating direction method of multipliers (ADMM) are equivalent to applying the standard forward-backward (proximal gradient) method to a dual formulation of the augmented Lagrangian subproblem (this dual formulation always has the Lipschitz properties usually assumed when proving convergence of forward-backward methods). This insight allows the construction of a large family of ADMM-like methods by combining a range of proximal-gradient-like methods, for example those involving Nesterov-class acceleration, with a variety of inexact augmented Lagrangian algorithms. We call this class of procedures "DEFBAL", standing for "Dual Embedded Forward-Backward Augmented Lagrangian". We describe some possible applications and give preliminary computational results, although any computational advantages are unclear at present.
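
For reference, in the usual two-block notation (again ours, not the speaker's), ADMM for $\min_{x,z}\, f(x) + g(z)$ subject to $Ax + Bz = c$, with penalty $\rho > 0$ and scaled multiplier $u$, iterates

\begin{align*}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^k - c + u^k\|^2, \\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^k\|^2, \\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c,
\end{align*}

while one forward-backward step on a sum $q + h$ ($q$ smooth, $h$ proximable) reads $w^{k+1} = \operatorname{prox}_{\alpha h}\big(w^k - \alpha \nabla q(w^k)\big)$. The claim of the talk is that the two minimization steps above amount to exactly such a step applied to a dual formulation of the augmented Lagrangian subproblem.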

09:30 — The Douglas–Rachford algorithm for inconsistent problems

The Douglas–Rachford algorithm has been successfully employed to solve convex optimization problems or, more generally, to find zeros of sums of monotone operators. Recently, the behaviour of these methods in the inconsistent case, i.e., in the absence of solutions, has become a topic of interest, as such problems appear regularly but can be difficult to analyze. We give a result on the strong convergence of the shadow sequence of the Douglas–Rachford algorithm in the possibly inconsistent case when one of the operators is uniformly monotone and 3* monotone but not necessarily a subdifferential. We also provide numerical experiments comparing the performance of the Douglas–Rachford algorithm with that of the Peaceman–Rachford algorithm and Dykstra's projection algorithm on inconsistent finite- and infinite-dimensional linear-quadratic problems.
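
For orientation, in standard notation (ours): to find $x$ with $0 \in Ax + Bx$ for maximally monotone operators $A$ and $B$, the Douglas–Rachford algorithm with step size $\gamma > 0$ iterates a governing sequence and tracks its shadow,

\[
z^{k+1} = z^k + J_{\gamma B}\big(2 J_{\gamma A} z^k - z^k\big) - J_{\gamma A} z^k,
\qquad x^k = J_{\gamma A} z^k,
\]

where $J_{\gamma A} = (\mathrm{Id} + \gamma A)^{-1}$ denotes the resolvent. In the inconsistent case the governing sequence $(z^k)$ necessarily diverges, yet the shadow sequence $(x^k)$ may still converge to a meaningful generalized solution; results such as the one announced here identify conditions, like the uniform and 3* monotonicity above, under which that convergence is strong.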

10:00 — The Chambolle–Pock algorithm revisited: splitting operator and its range with applications

The primal-dual hybrid gradient (PDHG) method is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming and, in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on the infimal displacement vector, which lies in the closure of the range of the displacement mapping of the splitting operator that encodes the PDHG algorithm. In this talk, we develop a novel formula for this range using monotone operator theory. The analysis is then specialized to conic programming and further to quadratic programming (QP) and second-order cone programming (SOCP). A consequence of our analysis is that PDHG is able to diagnose infeasible or unbounded instances of QP and of the ellipsoid-separation problem, a subclass of SOCP.
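
To unpack the key object in standard notation (ours, not the abstract's): if $T$ denotes the nonexpansive operator encoding one PDHG iteration, its displacement mapping is $\mathrm{Id} - T$, and the infimal displacement vector is the unique element of minimum norm in the closure of its range,

\[
v \;:=\; \operatorname*{arg\,min}\,\big\{\, \|u\| : u \in \overline{\operatorname{ran}}(\mathrm{Id} - T) \,\big\}.
\]

For an averaged operator $T$, the differences $z^k - z^{k+1}$ of the iterates $z^{k+1} = T z^k$ converge to $v$, so a nonzero $v$ is detectable from the iterates and certifies that no fixed point, and hence no solution, exists; this is why a formula for $\overline{\operatorname{ran}}(\mathrm{Id} - T)$ determines which pathological instances PDHG can diagnose.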