1. 14:00 — Forward-Backward Extended DMD with an Asymptotic Stability Constraint

This presentation introduces a data-driven method to identify an asymptotically stable Koopman system from noisy data. The Koopman operator allows a nonlinear system to be modeled linearly from data. However, when the data are noisy, the identified Koopman system is biased and potentially unstable. The presented approach poses the forward- and backward-in-time Koopman operator approximation problem as a convex optimization problem with linear matrix inequality (LMI) constraints that enforce asymptotic stability and reduce the bias in the identified Koopman system. The presented method is validated against state-of-the-art methods on a simulated Duffing oscillator dataset and an experimental soft robot arm dataset.
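
To make the flavor of the approach concrete, the following is a minimal sketch of a convex, stability-constrained EDMD fit in Python with CVXPY. It is not the authors' forward-backward LMI formulation: the snapshot matrices Psi and Psi_plus are synthetic placeholders, and the stability constraint used here is a conservative spectral-norm bound (||U||_2 <= rho < 1 implies asymptotic stability), which the solver handles as a semidefinite constraint.

import numpy as np
import cvxpy as cp

# Synthetic lifted snapshot data (placeholders, not the talk's datasets).
rng = np.random.default_rng(0)
n, m = 5, 200                                   # lifted-state dimension, number of snapshots
Psi = rng.standard_normal((n, m))               # lifted states psi(x_k)
Psi_plus = 0.9 * Psi + 0.05 * rng.standard_normal((n, m))  # lifted states psi(x_{k+1})

# Least-squares EDMD fit with a conservative convex stability constraint.
U = cp.Variable((n, n))                         # approximate Koopman matrix
rho = 0.99                                      # stability margin on the spectral norm
problem = cp.Problem(
    cp.Minimize(cp.norm(Psi_plus - U @ Psi, "fro")),
    [cp.norm(U, 2) <= rho],                     # spectral norm < 1 implies Schur stability
)
problem.solve()
print("spectral radius:", max(abs(np.linalg.eigvals(U.value))))

A spectral-norm bound is more conservative than the Lyapunov-type LMI constraints discussed in the talk, but it illustrates how stability can be imposed without leaving the convex setting.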

2. 14:30 — Convex conic reformulation of geometric nonconvex conic optimization problems and exact solutions of QCQPs by SDP relaxations

A geometric nonconvex conic optimization problem was recently proposed as a unified framework for the convex conic reformulation of a class of quadratic optimization problems and polynomial optimization problems. The problem minimizes a linear function over the intersection of a nonconvex cone K, a convex subcone J of coK (the convex hull of K), and an affine hyperplane. Under the assumption co(K ∩ J) = J, the original nonconvex conic optimization problem was shown to be equivalently reformulated as a convex conic program by replacing the constraint set with the intersection of J and the affine hyperplane. In this talk, we further investigate some remaining issues, such as the key assumption co(K ∩ J) = J in the framework, and provide three sets of necessary and sufficient conditions for the assumption. As an application, we propose a new, wide class of quadratically constrained quadratic programs (QCQPs) with multiple nonconvex equality and inequality constraints that can be solved exactly by their semidefinite programming (SDP) relaxation. The proposed class of QCQPs can have any finite number of nonconvex constraints, extending previous results on exact SDP relaxations. The condition required for the exactness is quite general.
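
As a generic illustration of the type of exactness in question (not the specific class or conditions proposed in the talk), consider a QCQP and its standard Shor SDP relaxation, obtained by lifting X = x x^T and dropping the rank-one condition:

\begin{align*}
\text{(QCQP)}\quad & \min_{x \in \mathbb{R}^n}\; x^\top Q_0 x + 2 c_0^\top x
  \quad \text{s.t.}\quad x^\top Q_i x + 2 c_i^\top x + b_i \le 0,\quad i = 1,\dots,m,\\
\text{(SDP)}\quad & \min_{x,\,X}\; \langle Q_0, X\rangle + 2 c_0^\top x
  \quad \text{s.t.}\quad \langle Q_i, X\rangle + 2 c_i^\top x + b_i \le 0,\quad i = 1,\dots,m,\quad
  \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0.
\end{align*}

The relaxation is exact when it admits a rank-one optimal solution X = x x^\top, from which a global minimizer of the QCQP is recovered; the talk identifies conditions under which this holds even with multiple nonconvex equality and inequality constraints.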

3. 15:00 — On the tightness of SDP relaxation for quadratic assignment problem

Quadratic assignment is a fundamental problem in combinatorial optimization and finds numerous applications in operations research, computer vision, etc. However, finding the global minimizer of the quadratic assignment problem (QAP) is in general NP-hard. In this work, we study the semidefinite relaxation (SDR) of the QAP and investigate when the SDR recovers the global minimizer. In particular, we consider the case where the two input matrices satisfy a certain signal-plus-noise model, and show that when the noise is sufficiently small relative to the signal, the SDR is tight. It is worth noting that this sufficient condition is purely algebraic and does not depend on any statistical assumptions about the input data.
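
In generic terms (the talk's precise relaxation and signal-plus-noise model may differ), the QAP with input matrices A and B and a lifted semidefinite relaxation can be written as

\begin{align*}
\text{(QAP)}\quad & \min_{P \in \Pi_n}\ \operatorname{tr}\!\left(A P B P^\top\right),\\
\text{(SDR)}\quad & \min_{Y \succeq 0}\ \left\langle B^\top \otimes A,\ Y \right\rangle
  \quad \text{s.t. linear constraints encoding } Y = \operatorname{vec}(P)\operatorname{vec}(P)^\top,\ P \in \Pi_n,
\end{align*}

where \Pi_n is the set of n-by-n permutation matrices. Tightness means that the optimal Y of the SDR is exactly the rank-one lift of the globally optimal permutation, so the global minimizer can be read off from the relaxation.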

4. 15:30 — High-rank Solution of Sum-of-Squares Relaxations for Exact Matrix Completion

We study rank-one matrix completion problems, which estimate all the elements of a matrix from a partially given matrix. Although these problems can be formulated as quadratically constrained quadratic programs, their standard semidefinite relaxations are generally not tight. Recently, Lasserre's relaxation of order 2 has been shown to output the exact completion when the completion is unique and the objective function is the trace of the moment matrix. In this talk, we show that when a certain matrix is used as the coefficient matrix of the objective function instead of the trace, the exact completion can also be obtained from the dual of Lasserre's relaxation, called the sum-of-squares (SOS) relaxation. This theoretical result is based on the existence of an optimal solution whose Gram matrix in the SOS relaxation is of high rank, owing to the sparsity structure of the coefficient matrix. We also propose an algorithm for finding this coefficient matrix and illustrate its performance with computer experiments. Furthermore, additional results, including polynomial optimization problems with a hidden matrix completion problem, are discussed.
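
As an illustrative rendering in standard Lasserre notation (the exact formulation and coefficient matrix in the talk may differ), rank-one completion of observed entries M_{ij}, (i,j) in Omega, can be posed as finding vectors u and v with u_i v_j = M_{ij}, and the order-2 moment relaxation takes the form

\begin{align*}
\min_{y}\ & \langle C,\ M_2(y) \rangle\\
\text{s.t.}\ & M_2(y) \succeq 0,\quad y_0 = 1,\\
& L_y\big((u_i v_j - M_{ij})\, q\big) = 0 \quad \text{for all } (i,j) \in \Omega \text{ and monomials } q \text{ with } \deg q \le 2,
\end{align*}

where M_2(y) is the order-2 moment matrix in the variables (u, v), L_y is the Riesz functional, and C is the coefficient matrix of the objective (the identity, giving the trace, in the earlier result). The dual of this moment relaxation is the SOS relaxation discussed in the talk, and its Gram matrix is the object shown to admit a high-rank optimal solution.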