14:00 — ** CANCELLED ** Solving Exact and Noisy Rank-one Tensor Completion with Semidefinite Programming

Consider the problem of retrieving a d-th order rank-one tensor given (exact or noisy) values of some of its entries. We address this problem via semidefinite programming (SDP). Concretely, we propose SDP relaxations and study deterministic conditions under which the SDPs solve the completion problems exactly. We show that our SDPs solve the exact and noisy rank-one tensor completion problems when the observations satisfy a combinatorial condition, which we call square-propagation. We prove that the square-propagation condition is satisfied with high probability when the number of uniformly random observations is at least a certain threshold. Moreover, when d equals 2 (matrix completion), the square-propagation condition is satisfied precisely when the completion problem admits a unique solution. Preliminary computational experiments show that our methods allow us to solve tensor completion problems with significantly fewer observations than alternative methods.
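The abstract does not define square-propagation, but in the d = 2 (matrix) case a natural reading is the classical rank-one completion rule: whenever three corners of a "square" of entries are observed, the rank-one structure determines the fourth. The sketch below illustrates that rule (our assumption about what the condition corresponds to, not the authors' SDP method); the names `propagate_squares` and `obs` are ours.

```python
import numpy as np

def propagate_squares(obs, shape, max_rounds=100):
    """Fill missing entries of a rank-one matrix by completing 'squares':
    if M[i, j], M[i, l], M[k, j] are known and M[i, j] != 0, the rank-one
    structure forces M[k, l] = M[i, l] * M[k, j] / M[i, j]."""
    known = dict(obs)  # (row, col) -> value
    m, n = shape
    for _ in range(max_rounds):
        added = False
        for (i, j), vij in list(known.items()):
            if vij == 0:
                continue
            for k in range(m):
                for l in range(n):
                    if (k, l) in known:
                        continue
                    if (i, l) in known and (k, j) in known:
                        known[(k, l)] = known[(i, l)] * known[(k, j)] / vij
                        added = True
        if not added:  # no square could be completed; stop
            break
    return known

# Ground-truth rank-one matrix a * b^T
a = np.array([1.0, 2.0, -1.0])
b = np.array([3.0, 1.0, 2.0])
M = np.outer(a, b)

# Observing one full row and one full column suffices to propagate everywhere
obs = {(0, j): M[0, j] for j in range(3)}
obs.update({(i, 0): M[i, 0] for i in range(3)})
completed = propagate_squares(obs, M.shape)
```

With these observations every missing entry sits in a square whose other three corners are known, so one round of propagation recovers the full matrix.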

14:30 — Random Projections for Semidefinite Programming and Polynomial Optimization

Random projections, a dimensionality reduction technique, have been found useful in recent years for reducing the size of optimization problems. In this presentation, we explore using random projections to approximate semidefinite programming (SDP) problems by reducing the size of matrix variables, thereby obtaining approximate solutions to the original problem with less computational effort. We provide some theoretical guarantees on the quality of the projection. We also investigate the performance of the approach on semidefinite relaxations appearing in polynomial optimization, with a focus on combinatorial optimization problems.
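The principle behind such projections is the Johnson–Lindenstrauss-style guarantee that a scaled Gaussian map approximately preserves geometry. The abstract's setting (sketching an SDP matrix variable) is more involved; the minimal sketch below only illustrates the underlying vector-space phenomenon, with dimensions chosen arbitrarily by us.

```python
import numpy as np

rng = np.random.default_rng(0)

# Project from dimension n down to k << n with a scaled Gaussian map
n, k = 1000, 200
P = rng.standard_normal((n, k)) / np.sqrt(k)

# A few data points in the original space and their projections
X = rng.standard_normal((5, n))
Y = X @ P  # each row now lives in dimension k

# Pairwise distances are approximately preserved
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
distortion = abs(proj - orig) / orig  # typically on the order of 1/sqrt(k)
```

For SDPs, the analogous idea replaces a large matrix variable X with a smaller one Y via a sketch of the form P Y Pᵀ, shrinking the semidefinite constraint accordingly.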

15:00 — HALLaR: A New Solver for Solving Huge SDPs

This talk discusses the computational aspects of a new first-order method for solving large-scale semidefinite programs (SDPs) with bounded domain. It is an inexact augmented Lagrangian (AL) method where the AL subproblems are solved by a hybrid low-rank method. In contrast to the classical low-rank method, the new method finds a near-optimal solution (with provable complexity bounds) of SDP instances satisfying strong duality. Computational results comparing the new method to state-of-the-art solvers on several large SDP instances show that the former finds more accurate solutions in substantially less CPU time. For example, in less than 20 minutes, our method can solve a maximum stable set SDP with 1 million vertices and 10 million edges within 1e-5 relative accuracy.
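The "classical low-rank method" the abstract contrasts with is the Burer–Monteiro idea: replace the PSD variable X by a thin factorization X = R Rᵀ and optimize over R. The sketch below shows that idea on a tiny diagonally-constrained SDP, using plain projected gradient steps; it is our illustrative assumption, not HALLaR's hybrid AL scheme, and all names and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny instance of min <C, X> s.t. diag(X) = 1, X PSD,
# in the low-rank factorization X = R R^T (Burer-Monteiro style)
n, r = 8, 3
A = rng.standard_normal((n, n))
C = (A + A.T) / 2  # symmetric cost matrix

def normalize_rows(R):
    # Rescaling each row to unit norm enforces diag(R R^T) = 1 exactly
    return R / np.linalg.norm(R, axis=1, keepdims=True)

R = normalize_rows(rng.standard_normal((n, r)))
obj_init = np.trace(C @ R @ R.T)

step = 0.01
for _ in range(2000):
    grad = 2 * C @ R                      # gradient of <C, R R^T> in R
    R = normalize_rows(R - step * grad)   # descend, then restore feasibility

obj_final = np.trace(C @ R @ R.T)
X = R @ R.T  # feasible PSD matrix with unit diagonal
```

The factorization keeps only n·r numbers instead of n², which is what makes million-vertex instances conceivable; the talk's contribution is wrapping such subproblem solves inside an inexact AL method with complexity guarantees.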

15:30 — Sum of squares submodularity

Submodularity is a key property of set functions, appearing in many different areas such as game theory, machine learning, and optimization. In this talk, we consider set functions over $\{0,1\}^n$ with a multilinear extension of degree at most $k$. For these functions, it is known that testing whether they are submodular is hard for $k \geq 4$. We thus introduce a sufficient condition for submodularity based on sum of squares polynomials, which we refer to as sum of squares (sos) submodularity. We investigate the gap between submodularity and sos submodularity and show how to use semidefinite programming to optimize over this set. In particular, we use the proposed techniques to learn a submodular function from data, among other applications.
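For context on why a tractable sufficient condition is valuable: the direct definition of submodularity via second-order differences can be checked by brute force, but only in time exponential in n. A minimal sketch of that naive check (function names and the 4-element ground set are our choices):

```python
from itertools import combinations
from math import sqrt

def is_submodular(f, n):
    """Brute-force submodularity test for a set function f on subsets of
    {0, ..., n-1}: f is submodular iff for every set S and every pair of
    elements i != j outside S,
        f(S + i + j) - f(S + i) - f(S + j) + f(S) <= 0.
    Runs in time exponential in n."""
    ground = range(n)
    for size in range(n + 1):
        for S in combinations(ground, size):
            S = frozenset(S)
            rest = [e for e in ground if e not in S]
            for i, j in combinations(rest, 2):
                if f(S | {i, j}) - f(S | {i}) - f(S | {j}) + f(S) > 1e-12:
                    return False
    return True

# sqrt(|S|) is concave in |S|, hence submodular; |S|^2 is supermodular
sub = is_submodular(lambda S: sqrt(len(S)), 4)
notsub = is_submodular(lambda S: len(S) ** 2, 4)
```

The sos approach replaces this exponential enumeration with a single semidefinite feasibility problem certifying the inequalities all at once, at the price of being sufficient rather than necessary.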