1 — 14:00 — Reformulation-Perspectification Technique for Global Optimization
We propose a new global optimization approach for solving nonconvex optimization problems in which the nonconvex components are sums of products of convex functions. A broad class of nonconvex problems can be written in this way, including concave minimization problems, difference-of-convex problems, and fractional optimization problems. Our approach combines two techniques: first, we introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the considered nonconvex continuous optimization problem. Second, we develop a spatial branch-and-bound scheme, leveraging RPT, to obtain a globally optimal solution. Numerical experiments on four different convex maximization problems, a quadratically constrained quadratic optimization problem, and a dike height optimization problem demonstrate the effectiveness of the proposed approach. In particular, our approach solves more instances of the considered problems to global optimality than BARON and SCIP. Moreover, for problem instances of larger dimension our approach outperforms both BARON and SCIP in computation time on most instances, whereas for smaller dimensions BARON is overall faster.
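The abstract does not spell out RPT itself, but the "perspectification" in its name points to perspective functions, a standard convexity-preserving construction: for convex f, the perspective g(x, t) = t f(x/t) is jointly convex for t > 0. A minimal background sketch (an assumed illustration of the underlying tool, not the authors' method), checking midpoint convexity numerically:

```python
import numpy as np

def perspective(f):
    """Perspective g(x, t) = t * f(x / t) of f; jointly convex in (x, t) for t > 0 when f is convex."""
    return lambda x, t: t * f(x / t)

f = lambda x: x ** 2      # convex base function
g = perspective(f)        # g(x, t) = x**2 / t

# Numerically check midpoint convexity of g at random points with t > 0.
rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2 = rng.normal(size=2)
    t1, t2 = rng.uniform(0.1, 2.0, size=2)
    mid = g((x1 + x2) / 2, (t1 + t2) / 2)
    assert mid <= (g(x1, t1) + g(x2, t2)) / 2 + 1e-12
print("midpoint convexity holds on all samples")
```

The sampled check is only a sanity test; joint convexity of the perspective is a classical result and is what makes perspective-based reformulations convex.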
2 — 14:30 — A Slightly Lifted Convex Relaxation for Nonconvex Quadratic Programming with Ball Constraints
Globally optimizing a nonconvex quadratic over the intersection of m
balls in n-dimensional space is known to be polynomial-time solvable
for fixed m. Moreover, when m=1, the standard semidefinite relaxation
is exact. When m=2, it has been shown recently that an exact relaxation
can be constructed using a disjunctive semidefinite formulation based
essentially on two copies of the m=1 case. However, no explicit,
tractable, exact convex representation is known for m greater than 2.
In this paper, we construct a new, polynomially sized semidefinite
relaxation for all m, which does not employ a disjunctive approach.
We show that our relaxation is exact for m=2. Then, for larger m, we
demonstrate empirically that it is fast and strong compared to existing
relaxations. The key idea of the relaxation is a simple lifting of
the original problem into dimension n+1. Extending this construction:
(i) we show that nonconvex quadratic programming over a ball and a
second-order cone, which enforces a linear bound on the norm of the
n-dimensional variable, has an exact semidefinite representation; and
(ii) we construct a new relaxation for quadratic programming over the
intersection of two ellipsoids, which globally solves all instances of a
benchmark collection from the literature.
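The abstract describes the key idea only as "a simple lifting of the original problem into dimension n+1" without giving details. The standard lifting of that dimension behind Shor-type semidefinite relaxations is homogenization, sketched below as background (the paper's specific lifting may differ): write q(x) = x'Qx + 2c'x + b as y'My with y = (x, 1).

```python
import numpy as np

def homogenize(Q, c, b):
    """Embed q(x) = x'Qx + 2 c'x + b as the quadratic form y'My, y = (x, 1)."""
    n = Q.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = Q
    M[:n, n] = c
    M[n, :n] = c
    M[n, n] = b
    return M

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
Q = (A + A.T) / 2              # symmetric, possibly indefinite
c = rng.normal(size=n)
b = rng.normal()
M = homogenize(Q, c, b)

x = rng.normal(size=n)
y = np.append(x, 1.0)          # the lifted (n+1)-dimensional variable
q = x @ Q @ x + 2 * c @ x + b
assert np.isclose(y @ M @ y, q)
print("homogenized form matches the original quadratic")
```

Relaxing yy' to a positive semidefinite matrix Y with Y[n, n] = 1 then yields a linear (semidefinite) problem in Y; this is the generic mechanism, hedged here as context rather than the paper's construction.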
3 — 15:00 — **Moved to Friday at 14:00 - FB821** Semidefinite representable reformulations for two variants of the trust-region subproblem
Motivated by encouraging numerical results in Eltved and Burer (2023), we consider two specific variants of the trust-region subproblem and provide exact semidefinite representable reformulations. The first is over the intersection of two balls; the second is over the intersection of a ball and a special second-order conic representable set. The reformulations are based on partitions of the feasible regions into sub-regions with known lifted convex hulls.
4 — 15:30 — Hidden convexity, optimization and algorithms on rotation matrices
This talk studies hidden convexity properties associated with constrained optimization problems over the set of rotation matrices SO(n). Such problems are nonconvex due to the constraint X in SO(n). Nonetheless, we show that certain linear images of SO(n) are convex, opening up the possibility for convex optimization algorithms with provable guarantees for these problems. Our main technical contributions show that any two-dimensional image of SO(n) is convex and that the projection of SO(n) onto its strict upper triangular entries is convex. These results allow us to construct exact convex reformulations for constrained optimization problems over SO(n) with a single constraint or with constraints defined by low-rank matrices. Both of these results are optimal in a formal sense.
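To make "two-dimensional image" concrete: fix two linear functionals X -> trace(L'X) on matrices; the talk's result says the planar set of their joint values over X in SO(n) is convex. A small sketch (an assumed illustration of the setup, not the talk's proof technique) that samples such an image using Haar-distributed rotations:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_rotation(n):
    """Sample a rotation matrix in SO(n) via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(n, n)))
    Q = Q * np.sign(np.diag(R))   # sign fix for a Haar-distributed orthogonal matrix
    if np.linalg.det(Q) < 0:      # flip one column to land in SO(n)
        Q[:, 0] = -Q[:, 0]
    return Q

n = 3
# Two fixed linear functionals X -> <L, X> = trace(L'X); per the talk,
# their joint image over SO(n) is a convex subset of the plane.
L1, L2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
points = []
for _ in range(500):
    X = random_rotation(n)
    points.append((np.sum(L1 * X), np.sum(L2 * X)))
points = np.array(points)         # samples from the planar image of SO(3)
print("sampled", len(points), "points from a 2-D linear image of SO(3)")
```

Plotting such samples shows the filled convex region the theorem guarantees; the convexity itself is the talk's contribution, not something the sampling establishes.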