1. 08:30 — Accelerating nuclear-norm regularized low-rank matrix optimization through Burer-Monteiro decomposition

This work proposes a fast algorithm, BM-Global, for nuclear-norm-regularized convex low-rank matrix optimization problems. BM-Global efficiently decreases the objective value via low-cost steps that leverage the nonconvex but smooth Burer-Monteiro (BM) decomposition, while effectively escaping the saddle points and spurious local minima ubiquitous in the BM form; through aperiodic inexact proximal gradient steps on the original nuclear-norm-regularized problem, it attains guaranteed fast convergence rates to the global optima of that problem. The proposed approach adaptively adjusts the rank of the BM decomposition and, using tools from manifold identification, provably identifies an optimal rank automatically in the course of optimization. BM-Global hence also spends significantly less time on parameter tuning than existing matrix-factorization methods, which require an exhaustive search to find this optimal rank. Extensive experiments on real-world large-scale problems in recommendation systems, regularized kernel estimation, and molecular conformation confirm that BM-Global indeed effectively escapes spurious local minima at which existing BM approaches get stuck, and is an order of magnitude faster than state-of-the-art algorithms for low-rank matrix optimization problems involving a nuclear-norm regularizer.
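To make the two-phase scheme concrete, below is a minimal numerical sketch (not the authors' implementation) that alternates cheap gradient steps on the factored objective f(W H^T) + (lam/2)(||W||_F^2 + ||H||_F^2), whose minimum over factorizations X = W H^T equals f(X) + lam ||X||_*, with occasional proximal-gradient steps on the full matrix. Step sizes, iteration counts, and all function names are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Prox of tau * nuclear norm (singular-value soft-thresholding).
    Returns factors W, H with W @ H.T equal to the prox; the number of
    surviving singular values sets the new rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    r = max(int(np.count_nonzero(s)), 1)          # keep rank >= 1
    return U[:, :r] * s[:r], Vt[:r, :].T

def bm_global_sketch(grad_f, m, n, lam, eta=1e-3, outer=20, inner=200, r0=5):
    rng = np.random.default_rng(0)
    W, H = rng.standard_normal((m, r0)), rng.standard_normal((n, r0))
    for _ in range(outer):
        # Cheap phase: gradient steps on the smooth factored objective
        # f(W H^T) + (lam / 2) * (||W||_F^2 + ||H||_F^2).
        for _ in range(inner):
            G = grad_f(W @ H.T)
            W, H = W - eta * (G @ H + lam * W), H - eta * (G.T @ W + lam * H)
        # Escape phase: one (inexact) proximal-gradient step on the full
        # matrix; the soft-thresholded SVD also re-adjusts the rank.
        X = W @ H.T
        W, H = svt(X - eta * grad_f(X), eta * lam)
    return W, H
```

For matrix completion, for instance, one could pass grad_f = lambda X: mask * (X - observations).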

2. 09:00 — Well-conditioned primal-dual interior-point method for accurate low-rank semidefinite programming

We describe how the low-rank structure in an SDP can be exploited to reduce the per-iteration cost of a convex primal-dual interior-point method down to cubic time and quadratic memory, even at very high accuracies. A traditional difficulty is the dense Newton subproblem at each iteration, which becomes progressively ill-conditioned as progress is made towards the solution. Preconditioners have previously been proposed to improve conditioning, but these can be expensive to set up, and become ineffective as the preconditioner itself becomes increasingly ill-conditioned at high accuracies. Instead, we present a well-conditioned reformulation of the Newton subproblem that is cheap to set up, and whose condition number is guaranteed to remain bounded over all iterations. In theory, applying an inner iterative method to the reformulation reduces the per-iteration cost of the outer interior-point method to cubic time and quadratic memory. We also present a well-conditioned preconditioner that greatly improves the convergence of the inner iterations.
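The computational pattern the talk builds on, solving each Newton subproblem with an inner Krylov method that touches the system matrix only through fast matrix-vector products plus a cheap preconditioner, can be illustrated generically. The sketch below solves an assumed diagonal-plus-low-rank SPD system with preconditioned conjugate gradients via SciPy; it is a stand-in for the idea, not the paper's reformulation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n, r = 500, 5
rng = np.random.default_rng(1)
R = rng.standard_normal((n, r))              # low-rank factor
d = rng.uniform(1.0, 2.0, n)                 # well-conditioned diagonal part

# H v is computed in O(n * r) time; H itself is never formed, so only
# vectors and the factor R need to be stored.
H = LinearOperator((n, n), matvec=lambda v: d * v + R @ (R.T @ v))
M = LinearOperator((n, n), matvec=lambda v: v / d)   # Jacobi preconditioner

b = rng.standard_normal(n)
x, info = cg(H, b, M=M, atol=1e-10)
assert info == 0                             # inner iterations converged
print(np.linalg.norm(d * x + R @ (R.T @ x) - b))     # residual check
```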

3. 09:30 — Loraine news: From a low-rank to a high-precision SDP solver

Loraine.jl is a Julia SDP solver originally developed for large-scale problems with low-rank solutions. We will present a second generation of the code, focused on the high-accuracy solution of small but difficult SDP problems. The code uses extended-precision arithmetic provided by MultiFloats.jl. We will demonstrate that the solver can deliver reliable solutions to difficult (highly ill-conditioned or almost infeasible) problems arising in steady-state power network optimization or in global polynomial optimization. In particular, we will give numerical confirmation of theoretical convergence estimates of the Lasserre hierarchy for certain polynomial optimization problems studied in the literature. For instance, we will show that Loraine can give a reliable solution to problems with relaxation order 35 and higher, far beyond the reach of standard SDP solvers.
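A small illustration of why extended precision pays off on ill-conditioned problems, using Python's mpmath as a stand-in for the MultiFloats.jl arithmetic that Loraine uses; the Hilbert system below is a generic example, not one of the power-network or polynomial-optimization instances.

```python
import numpy as np
from mpmath import mp, matrix, lu_solve

# A Hilbert matrix of order 14 has condition number around 1e18, at the
# edge of what double precision can resolve.
n = 14
H64 = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
x64 = np.linalg.solve(H64, H64 @ x_true)
print("double-precision error:", np.linalg.norm(x64 - x_true))   # large

mp.dps = 50                                    # 50 decimal digits
Hmp = matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
xmp = lu_solve(Hmp, Hmp * matrix([mp.mpf(1)] * n))
print("extended-precision error:", max(abs(xmp[i] - 1) for i in range(n)))
```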

4. 10:00 — ** CANCELLED ** An Augmented Lagrangian Primal-Dual Semismooth Newton Method for Multi-Block Composite Optimization

In this talk, we present a novel primal-dual semismooth Newton method for solving linearly constrained multi-block convex composite optimization problems. First, a differentiable augmented Lagrangian (AL) function is constructed by using the Moreau envelopes of the nonsmooth functions. We then derive a semismooth system of nonlinear equations and develop a semismooth Newton method, ALPDSN, to solve it. By connecting its iterations to inexact first-order steps when the regularization parameter is sufficiently large, we establish the global convergence of ALPDSN. Under regularity conditions comprising partial smoothness, a local error bound, and strict complementarity, we show that both the primal and the dual iterate sequences converge superlinearly. Numerical results demonstrate the high efficiency and robustness of ALPDSN.
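The Moreau-envelope construction at the heart of the method can be shown in a few lines. The sketch below (illustrative only, for the scalar case g(x) = |x| rather than the multi-block setting) verifies the key identity that the envelope is differentiable with gradient (x - prox_{mu g}(x)) / mu, a Lipschitz but only piecewise-smooth map, which is exactly where semismooth Newton machinery applies.

```python
import numpy as np

# Moreau envelope of g(x) = |x|: e_mu(x) = min_z |z| + (z - x)^2 / (2 mu).
# Its gradient is (x - prox_{mu g}(x)) / mu, smooth even though |x| is not.
def prox_abs(x, mu):
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)  # soft-thresholding

def moreau_env(x, mu):
    p = prox_abs(x, mu)
    return np.abs(p) + (p - x) ** 2 / (2 * mu)

def moreau_grad(x, mu):
    return (x - prox_abs(x, mu)) / mu

x, mu, h = 0.3, 0.5, 1e-6
num = (moreau_env(x + h, mu) - moreau_env(x - h, mu)) / (2 * h)
print(num, moreau_grad(x, mu))               # the two values agree
```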