14:00 — GloptiNets: Scalable Non-Convex Optimization with Certificates

We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus. Unlike traditional methods that rely on algebraic properties, our algorithm exploits the regularity of the target function, as encoded in the decay of its Fourier spectrum. By defining a tractable family of models, we can simultaneously obtain precise certificates and leverage the advanced and powerful computational techniques developed for optimizing neural networks. In this way, the scalability of our approach is naturally enhanced by parallel computing with GPUs. Applied to polynomials of moderate dimension but with thousands of coefficients, our approach outperforms state-of-the-art methods for optimization with certificates, such as those based on Lasserre's hierarchy, and addresses problems that are intractable for these competitors.
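
As a rough illustration of how spectral decay yields certificates (a minimal sketch only, not the GloptiNets model itself): for a real trigonometric polynomial f on the torus, min f >= fhat(0) - sum_{k != 0} |fhat(k)|, since the non-constant modes deviate from the mean by at most the sum of their moduli. The helper below is hypothetical and assumes the sampling grid fully resolves the spectrum.

    import numpy as np

    def crude_fourier_certificate(f_values):
        """Certified lower bound on min_x f(x) for f sampled on a uniform grid
        of the torus. Assumes f is a trigonometric polynomial fully resolved
        by the grid (no aliasing): min f >= fhat(0) - sum_{k!=0} |fhat(k)|."""
        fhat = np.fft.fft(f_values) / len(f_values)  # normalized Fourier coefficients
        mean = fhat[0].real                          # constant mode fhat(0)
        excursion = np.sum(np.abs(fhat[1:]))         # worst-case deviation from the mean
        return mean - excursion

    # Example: f(x) = 2 + cos(2 pi x) + 0.5 sin(4 pi x), true minimum ~ 0.70
    x = np.arange(64) / 64
    f = 2 + np.cos(2 * np.pi * x) + 0.5 * np.sin(4 * np.pi * x)
    print(crude_fourier_certificate(f))  # about 0.5, a valid if loose lower bound

The faster the Fourier coefficients decay, the tighter such bounds become; the models in the talk exploit this regularity in a far more refined way.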

14:30 — Convex relaxations for Gibbs states

A fundamental question in quantum information is to compute expectation values under the Gibbs state of a given local Hamiltonian. In this talk, I will present an algorithm for this problem based on hierarchies of convex relaxations over the positive semidefinite cone and the matrix relative entropy cone, and I will discuss its convergence guarantees. Based on joint work with Omar Fawzi and Samuel Scalet (arXiv:2311.18706).
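
For context, the quantity being approximated can be computed exactly for very small systems. The sketch below uses plain exact diagonalization (not the convex-relaxation hierarchy from the talk), with a made-up two-qubit Hamiltonian for illustration.

    import numpy as np
    from scipy.linalg import expm

    def gibbs_expectation(H, A, beta):
        """<A> = Tr(A rho) with rho = exp(-beta H) / Tr(exp(-beta H)).
        Dense matrix exponential: cost grows exponentially with the number
        of qubits, which is why scalable relaxations are needed."""
        rho = expm(-beta * H)
        rho /= np.trace(rho)
        return np.real(np.trace(A @ rho))

    # Toy two-qubit transverse-field Ising Hamiltonian (illustrative only)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))
    A = np.kron(Z, Z)  # nearest-neighbour ZZ correlator
    print(gibbs_expectation(H, A, beta=1.0))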

15:00 — Moment-SOS relaxations for variational problems

Moment-SOS relaxations are an established technique for computing converging sequences of lower bounds on the global minimum of finite-dimensional polynomial optimization problems. In this talk, I will present two recent extensions of moment-SOS relaxations to infinite-dimensional variational problems, where an integral functional is to be minimized over functions from a Sobolev space. The first extension optimizes so-called "null Lagrangian translations" and returns certified lower bounds on the global minimum of the variational problem. The second extension instead produces upper bounds by approximating minimizers of finite-element discretizations of the variational problem. Conditions ensuring that these upper and lower bounds converge to the desired global minimum will be discussed, and current gaps between theory and practice will be illustrated through examples.
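
The finite-dimensional building block behind these extensions is the classical SOS lower bound: maximize gamma such that f - gamma is a sum of squares, which is a semidefinite program. Below is a minimal sketch for a univariate quartic, assuming the cvxpy modelling library; the variational extensions in the talk go well beyond this.

    import cvxpy as cp

    # Lower bound on f(x) = x^4 - 3 x^2 + 1 over the real line:
    # maximize gamma s.t. f - gamma = [1, x, x^2] Q [1, x, x^2]^T with Q PSD.
    Q = cp.Variable((3, 3), symmetric=True)
    gamma = cp.Variable()
    constraints = [
        Q >> 0,                       # Q PSD  <=>  f - gamma is a sum of squares
        Q[2, 2] == 1,                 # match the x^4 coefficient
        2 * Q[1, 2] == 0,             # x^3
        2 * Q[0, 2] + Q[1, 1] == -3,  # x^2
        2 * Q[0, 1] == 0,             # x^1
        Q[0, 0] == 1 - gamma,         # constant term
    ]
    cp.Problem(cp.Maximize(gamma), constraints).solve()
    print(gamma.value)  # about -1.25, which is tight here: univariate
                        # nonnegative polynomials are sums of squares

Roughly speaking, the first extension replaces pointwise nonnegativity of f - gamma by SOS conditions on the integrand after adding a null Lagrangian, while keeping the semidefinite-programming structure.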

15:30 — Nonconvergence of some sum-of-squares bounds for global polynomial optimization

Consider the problem of computing the minimum of a polynomial $f$ on a closed set $X$. Given a measure $\mu$ supported on $X$, Lasserre proposed a decreasing sequence of upper bounds on this minimum, each of which can be computed by solving a semidefinite program. When $X$ is compact, these bounds converge to the minimum under minor assumptions on $\mu$. Later, Lasserre introduced a related but far more economical sequence of upper bounds, which relies on the push-forward measure of $\mu$ by $f$. While these new bounds are a priori weaker, they achieve similar asymptotic convergence rates on compact sets.
We show that no such free lunch exists in the non-compact setting. While convergence of the standard bounds is guaranteed when $X = \mathbb{R}^n$ and $\mu$ is the Gaussian distribution, we prove that the bounds relying on the push-forward measure fail to converge in that setting, already for polynomials of degree 6.
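
For concreteness, one standard way to write the two hierarchies (reconstructed from the measure-based bounds literature, not quoted from the talk), with $\Sigma[x]$ and $\Sigma[y]$ denoting sums of squares in the variables $x$ and in the single variable $y$:

$\mathrm{ub}_r(f) = \min \big\{ \int_X f \, \sigma \, d\mu \; : \; \sigma \in \Sigma[x], \; \deg \sigma \le 2r, \; \int_X \sigma \, d\mu = 1 \big\}$

$\mathrm{ub}_r^{\mathrm{pf}}(f) = \min \big\{ \int_X f \cdot q(f) \, d\mu \; : \; q \in \Sigma[y], \; \deg q \le 2r, \; \int_X q(f) \, d\mu = 1 \big\}$

Both are semidefinite programs over moment data of $\mu$; the second needs only the moments of the push-forward of $\mu$ by $f$, a univariate moment sequence, which is what makes it so economical. The result above shows that this economy comes at the cost of convergence on non-compact sets.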