1 — 08:30 — Lifting nonnegativity over general semialgebraic sets to nonnegativity over simple sets
In this talk, we propose a universal approach to derive new Positivstellensätze for general semialgebraic sets from ones developed for simpler sets, such as a box or a simplex. We illustrate the approach by first considering Handelman's Positivstellensatz over a box to construct non-SOS Positivstellensätze over any compact semialgebraic set; that is, Positivstellensätze where, instead of SOS polynomials, any class of polynomials containing the nonnegative constants, such as SONC or DSOS/SDSOS polynomials, can be used to certify nonnegativity. Secondly, by taking the simplex as the simple set, we derive a sparse Positivstellensatz over general compact semialgebraic sets that does not rely on any structural assumptions on the set. Finally, we show how these results extend to unbounded semialgebraic sets. Throughout the talk, we illustrate our results with relevant examples and numerical experiments.
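For reference, a brief reminder of the base certificate (this display is not part of the abstract): Handelman's Positivstellensatz over the box $[0,1]^n$ states that any polynomial $p$ strictly positive on the box can be written with nonnegative coefficients as
$$ p(x) \;=\; \sum_{|\alpha|+|\beta|\le d} c_{\alpha\beta} \prod_{i=1}^{n} x_i^{\alpha_i}(1-x_i)^{\beta_i}, \qquad c_{\alpha\beta} \ge 0, $$
for some finite degree $d$; the lifting described above then transfers certificates of this kind from the box to general compact semialgebraic sets.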
2 — 09:00 — A low-rank method for large-scale semidefinite programs
This talk discusses the theory of a new first-order method for solving large-scale semidefinite programs (SDPs) with bounded domain. It is an inexact augmented Lagrangian (AL) method in which the AL subproblems are solved by a hybrid low-rank method. In contrast to the classical low-rank method, the new method finds a near-optimal solution (with provable complexity bounds) of SDP instances satisfying strong duality. Computational results comparing the new method to state-of-the-art solvers on several large SDP instances show that it finds more accurate solutions in substantially less CPU time. For example, it can solve a maximum stable set SDP with a million vertices and 10 million edges to $10^{-10}$ accuracy in less than 21 minutes.
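As a minimal sketch of the general idea behind such methods (an illustration under simplifying assumptions, not the authors' hybrid algorithm; the toy instance, rank $r$, and penalty $\rho$ below are chosen arbitrarily), the following Python snippet runs an AL loop where each subproblem is minimized over a low-rank factorization $X = RR^\top$:

# Sketch of an augmented Lagrangian (AL) loop with a low-rank
# (Burer-Monteiro) factorization X = R R^T for the toy SDP
#   min <C,X>  s.t.  trace(X) = 1,  X >= 0,
# whose optimal value is the smallest eigenvalue of C.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, r, rho = 30, 5, 10.0            # matrix size, rank, penalty (illustrative)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
A = [np.eye(n)]                    # constraint matrices: <A_i, X> = b_i
b = np.array([1.0])

def residual(R):                   # res_i = <A_i, R R^T> - b_i
    X = R @ R.T
    return np.array([np.sum(Ai * X) for Ai in A]) - b

def aug_lag(vecR, y):              # L(R; y) = <C,X> - y.res + (rho/2)|res|^2
    R = vecR.reshape(n, r)
    res = residual(R)
    val = np.sum(C * (R @ R.T)) - y @ res + 0.5 * rho * (res @ res)
    S = C + sum((rho * res[i] - y[i]) * A[i] for i in range(len(A)))
    return val, (2 * S @ R).ravel()   # gradient: 2 (C + sum_i (...) A_i) R

y = np.zeros(len(A))
R = rng.standard_normal((n, r)) / np.sqrt(n * r)
for _ in range(30):
    out = minimize(aug_lag, R.ravel(), args=(y,), jac=True,
                   method="L-BFGS-B")   # inexact AL subproblem solve
    R = out.x.reshape(n, r)
    res = residual(R)
    y -= rho * res                      # multiplier update
    if np.linalg.norm(res) < 1e-8:
        break

print("AL value:", np.sum(C * (R @ R.T)),
      " lambda_min(C):", np.linalg.eigvalsh(C)[0])

On this toy instance the printout compares the computed objective against $\lambda_{\min}(C)$, the true optimal value; the low-rank update makes each subproblem tractable at large scale since only $n \times r$ variables are optimized.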
3 — 09:30 — Rectified relaxation: A trustworthy and interpretable deep learning approach to linear optimization with complementarity constraints
In this talk, we introduce a new deep learning framework, called rectified relaxation, for linear optimization with complementarity constraints (LOLCC). We first propose to solve LOLCC by solving a series of feasibility problems for systems defined by linear and complementarity constraints (SLCC). We then introduce a new universal relaxation theory, which shows that SLCC can be addressed by solving an equivalent bilinear optimization problem. We explore various properties of the bilinear model to derive sufficient and necessary conditions at a globally optimal solution of the bilinear model itself. Combining these theoretical insights with backpropagation, we design a new learning scheme, called rectified relaxation (ReCR). Global convergence of ReCR is established under mild conditions, and preliminary numerical results will be reported to demonstrate the efficacy of the new deep learning approach.
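As a generic illustration of how such an equivalence can be set up (the talk's universal relaxation theory and exact formulation may differ; the data $A$, $b$, $M$, $q$ are illustrative), an SLCC feasibility system
$$ Ax = b, \quad x \ge 0, \quad w = Mx + q \ge 0, \quad x^\top w = 0 $$
is feasible exactly when the bilinear program
$$ \min_{x,\,w} \; x^\top w \quad \text{s.t.} \quad Ax = b, \; w = Mx + q, \; x \ge 0, \; w \ge 0 $$
attains optimal value zero, since $x^\top w \ge 0$ whenever $x, w \ge 0$.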