1. 14:00 — Closing nonzero duality gaps by perturbation in SDP and its extension

A semidefinite program (SDP) is called singular if primal (P) and dual (D) are weakly feasible or weakly infeasible.
Singular SDPs may have nonzero duality gaps and are hard to solve.
In this talk, we analyze singular SDPs with finite nonzero duality gaps.
A possible approach to solving such SDPs by the interior-point method
is to perturb the primal and the dual to recover strong feasibility,
shifting the positive semidefinite cones of the primal and the
dual by $\eta I_P$ and $\varepsilon I_D$, respectively, where $I_P$ and $I_D$
are symmetric positive definite matrices and $\eta$ and $\varepsilon$ are positive numbers. Let us denote by $v(\varepsilon,\eta)$ the common optimal value of the perturbed primal and dual problems (if it exists). It is known that
$v(\varepsilon,\eta)$ is finite and well-defined for
$(\varepsilon,\eta)\geq 0$ with $(\varepsilon,\eta)\neq(0,0)$.
In particular, if (P) and (D) admit a nonzero duality gap,
$v(0,0)$ is not defined, and the behavior of $v(\varepsilon,\eta)$ in a neighborhood of $(0,0)$ was
not clear. Recently, the authors showed that $v(\varepsilon,\eta)$ has
a directional limit $v_a(\theta)$ when $(\varepsilon,\eta)$ approaches $(0,0)$ along a fixed
direction, where $\theta$ is the angle of approach, and that this limiting value
lies between the optimal values of (P) and (D).
In this talk, we show that if the singularity degree of (P) and (D) is one,
then $v_a$ is a bijective mapping from the domain $[0,\pi/2]$ to
$[v(D),v(P)]$, where $v(D)$ and $v(P)$ are the optimal values of (D) and (P),
respectively, completely closing the duality gap.
We also discuss possible extensions of some properties of
$v(\varepsilon,\eta)$ explained above to general conic linear programs.
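
To fix ideas, here is a minimal sketch of this kind of perturbation for a standard-form primal-dual pair; the sign conventions and the pairing of $\eta$ and $\varepsilon$ with (P) and (D) below are our assumptions and may differ from those used in the talk:
\[
\begin{aligned}
(\mathrm{P}_\eta):\ \ & \inf_{X}\ \langle C, X\rangle \quad \text{s.t. } \langle A_i, X\rangle = b_i\ (i=1,\dots,m),\ \ X + \eta I_P \succeq 0,\\
(\mathrm{D}_\varepsilon):\ \ & \sup_{y}\ b^\top y \quad \text{s.t. } C - \sum_{i=1}^m y_i A_i + \varepsilon I_D \succeq 0.
\end{aligned}
\]
In the singular setting above, both perturbed problems acquire interior feasible points whenever $\varepsilon,\eta>0$, so they can be handled by interior-point methods and share the common optimal value $v(\varepsilon,\eta)$; the directional limit $v_a(\theta)$ is then taken along rays such as $(\varepsilon,\eta)=t(\cos\theta,\sin\theta)$ with $t\downarrow 0$.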

2. 14:30 — Any-dimensional learning

Mappings arising in various application domains are defined for inputs of different sizes. Examples include standard norms defined for vectors and matrices of any size, graph parameters defined for graphs of any size, and physical quantities defined for systems of any number of particles. In contrast, standard supervised learning deployed to learn these mappings operates with training data of a fixed size, and outputs a mapping defined only for inputs of that same size. How can we learn mappings defined for inputs of infinitely many sizes from a finite dataset? How do we describe such mappings, and how do we parametrize them to enable such learning from data?

In this talk, we tackle these questions for mappings representable as optimal values or solutions of conic programs. Describing such mappings reduces to describing convex sets. These, in turn, are often "freely" described, meaning that their description makes instantiation in every dimension obvious. We show that such free descriptions of convex sets arise from two ingredients: group invariance and the recently identified phenomenon of representation stability; and embeddings and projections relating different-sized problem instances. We combine these ingredients to obtain parametrized families of infinitely instantiable convex sets. To extend a set learned from data in a fixed dimension to higher ones, we identify compatibility conditions relating sets in different dimensions that are satisfied in a variety of applications, and obtain parametrizations respecting these conditions. Our parametrizations can be obtained computationally.
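
As a toy illustration of such compatibility conditions (our own example, with assumed maps that need not match those in the talk), consider zero-padding embeddings and coordinate projections
\[
\iota_n:\mathbb{R}^{n}\to\mathbb{R}^{n+1},\ \ \iota_n(x)=(x,0),
\qquad
\pi_n:\mathbb{R}^{n+1}\to\mathbb{R}^{n},\ \ \pi_n(x_1,\dots,x_{n+1})=(x_1,\dots,x_n),
\]
together with a sequence of permutation-invariant convex sets $C_n\subseteq\mathbb{R}^n$ required to satisfy $\iota_n(C_n)\subseteq C_{n+1}$ and $\pi_n(C_{n+1})\subseteq C_n$. The $\ell_1$ balls $C_n=\{x\in\mathbb{R}^n:\|x\|_1\le 1\}$ satisfy both conditions, so a description fitted in one dimension instantiates consistently in every higher dimension; the compatibility conditions identified in the talk play an analogous role for the learned parametrized families.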

Finally, we take a statistical learning perspective on the problem and bound the error any estimator incurs when extended to higher dimensions.

3. 15:00 — Polynomial Lower Approximation for Two-Stage Stochastic Optimization

This talk discusses the challenging problem of finding globally optimal solutions for two-stage stochastic programs with continuous decision variables and nonconvex recourse functions. We introduce a two-phase approach. The first phase involves the construction of a polynomial lower bound for the recourse function through a linear optimization problem over a nonnegative polynomial cone. Given the complex structure of this cone, we employ semidefinite relaxations with quadratic modules to facilitate our computations. In the second phase, we solve a surrogate first-stage problem by substituting the original recourse function with the polynomial lower approximation obtained in the first phase. Our method is particularly advantageous for two reasons: it not only generates global lower bounds for the nonconvex stochastic program, aiding in the verification of global optimality for candidate solutions such as stationary solutions computed by other methods, but it also simplifies the computation of the expected value of the recourse function by using moments of the random vector. This makes our overall algorithm particularly suitable for cases where the random vector follows a continuous distribution or when dealing with a large number of scenarios. Numerical experiments are given to demonstrate the effectiveness of our proposed approach.
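
A rough schematic of the two phases, in our own notation and under the assumption of polynomial second-stage data (none of the symbols below are taken from the talk): write the recourse value as $Q(x,\xi)=\min_y\{f(x,y,\xi):(x,y,\xi)\in Z\}$ for a basic closed semialgebraic set $Z$, with first-stage decision $x$ and random vector $\xi$. Phase one computes a polynomial lower bound through a linear problem over a cone of nonnegative polynomials,
\[
\max_{q\in\mathbb{R}[x,\xi]_{\le d}}\ \int q\,\mathrm{d}W
\quad\text{s.t.}\quad f(x,y,\xi)-q(x,\xi)\ \ge\ 0\ \ \text{on } Z,
\]
which forces $q(x,\xi)\le Q(x,\xi)$; here $W$ is some reference measure, and the nonnegativity constraint is relaxed with quadratic-module (sum-of-squares) certificates, giving a semidefinite program. Phase two then solves the surrogate first-stage problem
\[
\min_{x}\ \ c(x)+\mathbb{E}_\xi\bigl[q(x,\xi)\bigr],
\]
whose objective requires only finitely many moments of $\xi$, since $\mathbb{E}_\xi[q(x,\xi)]$ is a polynomial in $x$ with coefficients determined by those moments.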

4. 15:30 — Cubic interpolation to certify nonnegativity

A number of problems can be framed as mathematical programs with the special constraint that a function is nonnegative. For many years, custom methods were devised for these so-called semi-infinite optimization problems. Recently, sum of squares programming was introduced to make many such constraints expressible via semidefinite programming. Unfortunately, sum of squares can only be applied directly when the constrained function is a polynomial. Furthermore, the scaling of the resulting semidefinite programs makes this approach impractical for high degree even in low dimension.

We propose a method for semi-infinite optimization problems harnessing the combined power of low-degree sum of squares and the ability of cubic splines to approximate smooth functions very well. We replace function nonnegativity constraints by nonnegativity constraints on the functions' cubic interpolants. The cubic nonnegativity constraints are second-order cone constraints -- even simpler than general semidefinite constraints. A simple local method that relies only on linear interpolation (and results in a linear program) requires finer discretizations (and overall longer computation times) than are needed when using cubic approximation. We present theoretical guarantees for our method, theoretical comparisons to the linear programming competitor, and applications demonstrating the empirical benefits of using cubic interpolation.
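
To indicate why cubic nonnegativity constraints are second-order cone representable, here is one standard certificate (the classical Markov-Lukács representation; the talk may use a different but equivalent formulation): a univariate cubic $s$ is nonnegative on an interval $[a,b]$ exactly when
\[
s(t)=(t-a)\,\sigma_1(t)+(b-t)\,\sigma_2(t),
\]
where $\sigma_1$ and $\sigma_2$ are sums of squares of linear polynomials. Each $\sigma_i$ is encoded by a $2\times 2$ positive semidefinite Gram matrix, and a constraint $\begin{pmatrix}\alpha & \beta\\ \beta & \gamma\end{pmatrix}\succeq 0$ is equivalent to $\alpha\ge 0$, $\gamma\ge 0$, $\alpha\gamma\ge\beta^2$, which is a rotated second-order cone constraint. Imposing this on each subinterval of the spline mesh turns nonnegativity of the cubic interpolant into a collection of second-order cone constraints.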