14:00 — Level-set geometry and the complexity of restarted PDHG for conic LP

The restarted primal-dual hybrid gradient method (rPDHG) -- with heuristic enhancements and GPU implementation -- has been very successful in solving huge-scale LP problems. Here we analyze the theoretical and practical performance of rPDHG for general (convex) conic optimization. We show a relationship between the geometry of the primal-dual sublevel sets $W_{\varepsilon}$ and the convergence rate of rPDHG. In particular, we show that rPDHG converges faster when there is a primal-dual sublevel set $W_{\varepsilon}$ for which (i) $W_{\varepsilon}$ is close (in Hausdorff distance) to the optimal solution set, and (ii) the ratio of the diameter to the “conic radius” of $W_{\varepsilon}$ is small. Based on these geometric condition numbers, we present our most recent results on the complexity analysis and practical enhancements of rPDHG, including an explanation of why certain important applications are easy or hard for the method and improvements to its practical performance via scaling transformations.
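
For orientation, the sketch below shows the basic shape of a restarted PDHG loop on a conic LP, using the nonnegative orthant as the cone (so the projection is a clip) and fixed-length epochs that restart from the average iterate. The function name, step sizes, epoch length, and restart rule are illustrative placeholders, not the adaptive scheme or the geometric quantities analyzed in this talk.

```python
import numpy as np

def restarted_pdhg_lp(A, b, c, epochs=20, iters_per_epoch=200):
    """Minimal sketch of restarted PDHG for the conic LP
        min c^T x  s.t.  A x = b,  x in K,
    specialized here to K = R^n_+ (so the cone projection is a clip).

    Fixed-length epochs restarting from the epoch's average iterate stand in
    for the adaptive restart rules analyzed in the talk; all choices below
    are illustrative, not rPDHG's.
    """
    m, n = A.shape
    op_norm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / op_norm            # PDHG needs tau * sigma * ||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(epochs):
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        for _ in range(iters_per_epoch):
            x_next = np.maximum(x - tau * (c - A.T @ y), 0.0)   # projection onto K = R^n_+
            y = y + sigma * (b - A @ (2 * x_next - x))          # dual step at the extrapolated point
            x = x_next
            x_avg += x / iters_per_epoch
            y_avg += y / iters_per_epoch
        x, y = x_avg, y_avg                # restart from the epoch's average iterate
    return x, y
```

Restarting from the averaged iterate is what converts PDHG's sublinear average-iterate guarantee into linear convergence when the problem geometry is favorable, which is where the sublevel-set condition numbers above enter.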

14:30 — A Practical and Optimal First-Order Method for Large-Scale Convex Quadratic Programming

Convex quadratic programming (QP) is an important class of optimization problems with wide applications in practice. Classic QP solvers are based on either the simplex method or the barrier method, both of which suffer from scalability issues because their computational bottleneck is solving linear equations. In this paper, we design and analyze a first-order method, the restarted accelerated primal-dual hybrid gradient method for QP, whose computational bottleneck is matrix-vector multiplication. We show that the proposed algorithm has a linear convergence rate when solving generic QP, and the obtained linear rate is optimal among a wide class of primal-dual methods. Furthermore, we connect the linear rate with a sharpness constant of the KKT system of QP, a standard quantity for measuring the hardness of a continuous optimization problem. Numerical experiments on a standard QP benchmark set showcase the advantage of the proposed algorithm over its first-order counterparts.
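
As a rough schematic of the notion referenced above (not the paper's exact definition), sharpness of the KKT system can be written as an error-bound condition
\[
\alpha \,\operatorname{dist}\!\left(z, \mathcal{Z}^{\star}\right) \;\le\; \bigl\| r_{\mathrm{KKT}}(z) \bigr\| \qquad \text{for all } z \text{ in a bounded region},
\]
where $r_{\mathrm{KKT}}(z)$ collects the primal, dual, and complementarity residuals of the QP's KKT conditions, $\mathcal{Z}^{\star}$ is the set of KKT points, and $\alpha > 0$ is the sharpness constant; loosely, a larger $\alpha$ corresponds to an easier instance and hence a faster linear rate.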

15:00 — A feasible method for solving general convex low-rank SDP problems

Riemannian optimization methods are efficient for solving special SDP problems with block-diagonal constraints, such as max-cut and orthogonal synchronization. However, for a general SDP problem, after low-rank decomposition there might be inequality constraints, and the feasible set may contain singular points where the LICQ does not hold; both issues imply that the feasible set is not a manifold. In this talk, we discuss how to deal with these issues so that Riemannian optimization methods can be applied to a general SDP problem with provable convergence. We also discuss how to exploit the sparsity of the constraint matrices to reduce the computational cost of the projection and retraction mappings. Numerical experiments comparing our algorithm with state-of-the-art SDP solvers will be presented.
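
To make the special block-diagonal case mentioned at the start concrete, here is a minimal sketch assuming the max-cut-type SDP $\min \langle C, X\rangle$ s.t. $\mathrm{diag}(X)=\mathbf{1}$, $X \succeq 0$, with a Burer-Monteiro factorization $X = VV^{\top}$. The feasible set for $V$ is then the oblique manifold (unit-norm rows), where projection and retraction are simple row operations; handling inequality constraints and singular points, as in the general method of this talk, is beyond the sketch, and all names below are illustrative.

```python
import numpy as np

def maxcut_riemannian_gd(C, r, steps=500, lr=1e-2, seed=0):
    """Riemannian gradient descent for the factorization X = V V^T of
        min <C, X>  s.t.  diag(X) = 1,  X PSD   (C assumed symmetric).
    The constraint diag(V V^T) = 1 means each row of V lies on a sphere,
    i.e. V lives on the oblique manifold (a product of spheres).
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((n, r))
    V /= np.linalg.norm(V, axis=1, keepdims=True)       # start on the manifold
    for _ in range(steps):
        G = 2.0 * C @ V                                  # Euclidean gradient of <C, V V^T>
        G -= np.sum(G * V, axis=1, keepdims=True) * V    # project each row onto its tangent space
        V -= lr * G                                      # step along the Riemannian gradient
        V /= np.linalg.norm(V, axis=1, keepdims=True)    # retraction: renormalize the rows
    return V @ V.T                                       # low-rank PSD iterate X = V V^T
```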

15:30 — Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient

We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplication. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP.
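
For reference, one common standard-form instance of the saddle-point formulation and the corresponding plain PDHG iteration (with the preconditioning, step-size adaptation, and restarts listed above omitted) is
\[
\min_{x \ge 0}\;\max_{y}\; c^{\top}x + y^{\top}(b - Ax),
\qquad
\begin{aligned}
x^{k+1} &= \operatorname{proj}_{\{x \ge 0\}}\!\bigl(x^{k} - \tau\,(c - A^{\top}y^{k})\bigr),\\
y^{k+1} &= y^{k} + \sigma\,\bigl(b - A(2x^{k+1} - x^{k})\bigr),
\end{aligned}
\]
with step sizes satisfying $\tau\sigma\|A\|^{2} \le 1$. PDLP itself handles a more general LP form with two-sided bounds, so this display is only a hedged illustration of the core update, not PDLP's exact formulation.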