1. 16:20 — ** MOVED TO FRIDAY AM REMOTE SESSION ** A Low-Rank ADMM Splitting Approach for Semidefinite Programming

We introduce a new first-order method for solving general semidefinite programming problems, based on the alternating direction method of multipliers (ADMM) and a matrix-splitting technique. Compared with the Burer-Monteiro approach, our algorithm only involves much easier quadratically regularized subproblems in each iteration. For a linear objective, the subproblems are well-conditioned quadratic programs that can be solved efficiently by the standard conjugate gradient method. We show that the ADMM algorithm achieves sublinear or linear convergence to KKT solutions under different conditions. Building on this theoretical development, we present LoRADS, a new solver for linear SDP based on the Low-Rank ADMM Splitting approach. LoRADS incorporates several strategies that significantly increase its efficiency. First, it begins with a warm-start phase that uses the Burer-Monteiro approach. Moreover, motivated by low-rank SDP theory [So et al. 2008], LoRADS chooses an initial rank of logarithmic order and then increases the rank dynamically. Numerical experiments indicate that LoRADS exhibits promising performance on a variety of SDP problems. Notably, LoRADS solves a matrix completion problem with 15,694,167 constraints and a 40,000×40,000 matrix variable in 351 seconds.
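
The abstract's key computational step is the quadratically regularized subproblem obtained when one factor of the low-rank splitting is held fixed. Below is a minimal, hypothetical Python sketch of such a subproblem, assuming the standard linear SDP form min <C, X> s.t. <A_i, X> = b_i, X PSD, the splitting X = UV^T, and one plausible augmented-Lagrangian penalty; the exact LoRADS updates may differ, and the name `u_subproblem` is illustrative. With V fixed the objective is a convex quadratic in U, so stationarity gives a symmetric positive-definite linear system that conjugate gradients handles well; a full ADMM loop would alternate this update with a symmetric V-update and dual ascent steps.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg


def u_subproblem(C, A_list, b, lam, V, rho, U0):
    """Illustrative quadratically regularized U-subproblem of a low-rank
    ADMM splitting for  min <C, X>  s.t.  <A_i, X> = b_i,  X PSD,
    with X = U V^T and V held fixed.  Not the exact LoRADS update."""
    n, r = V.shape

    def apply_A(M):
        # Constraint map A(M) = (<A_1, M>, ..., <A_m, M>)
        return np.array([np.sum(Ai * M) for Ai in A_list])

    def hess_vec(u):
        # Action of the subproblem Hessian on vec(U)
        U = u.reshape(n, r)
        y = apply_A(U @ V.T)
        HU = rho * U                       # from the proximal term (rho/2)||U - V||^2
        for Ai, yi in zip(A_list, y):
            HU += rho * yi * (Ai @ V)      # from the penalty (rho/2)||A(UV^T) - b||^2
        return HU.ravel()

    # Right-hand side collects the linear terms of the augmented Lagrangian
    rhs = -C @ V + rho * V
    for Ai, li, bi in zip(A_list, lam, b):
        rhs += (rho * bi - li) * (Ai @ V)

    # Well-conditioned SPD system: solve with conjugate gradients
    H = LinearOperator((n * r, n * r), matvec=hess_vec)
    u_opt, _ = cg(H, rhs.ravel(), x0=U0.ravel(), maxiter=200)
    return u_opt.reshape(n, r)
```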

2. 16:50 — ** CANCELLED ** Scaling Bayesian optimization with Multi-agent Rollout

Efficiently solving black-box global optimization problems across domains remains challenging, especially at large scale. Bayesian optimization has achieved notable success as a surrogate-based black-box optimization technique, but it still struggles on large-scale, heterogeneous landscapes. Recent approaches have proposed non-myopic approximations and partitioning of the input domain into subregions in order to prioritize regions that capture important areas of the solution space. We propose a multi-agent rollout formulation of Bayesian optimization (MAroBO) that partitions the input domain among a finite set of agents for distributed sampling. In addition to selecting candidate samples from their respective subregions, these agents also influence one another in partitioning those subregions. Consequently, each agent optimizes a portion of the function, prioritizing candidate samples that do not undermine exploration in favor of single-step greedy exploitation. The efficacy of MAroBO is demonstrated on synthetic test functions.
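
As an illustration of the distributed-sampling idea, here is a minimal, hypothetical Python sketch: a 1-D domain is split into equal subregions, one per agent; each agent proposes the best expected-improvement candidate from its own subregion under a shared Gaussian-process surrogate, and a coordinator evaluates the overall best proposal. The rollout lookahead and the agents' mutual influence on the partitioning described in the abstract are not modeled; `marobo_like_step` and the toy objective are purely illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(mu, sigma, best_y):
    # EI for minimization under a Gaussian posterior
    sigma = np.maximum(sigma, 1e-12)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def marobo_like_step(f, X, y, bounds, n_agents=4, n_cand=256, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    edges = np.linspace(lo, hi, n_agents + 1)      # equal subregions, one per agent

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                                    # shared surrogate
    best_y = y.min()

    proposals = []
    for a in range(n_agents):                       # each agent samples its subregion
        cand = rng.uniform(edges[a], edges[a + 1], size=(n_cand, 1))
        mu, sigma = gp.predict(cand, return_std=True)
        ei = expected_improvement(mu, sigma, best_y)
        proposals.append((ei.max(), cand[ei.argmax()]))

    _, x_next = max(proposals, key=lambda p: p[0])  # coordinator picks the best proposal
    return np.vstack([X, x_next]), np.append(y, f(x_next))


# Usage on a toy 1-D objective
f = lambda x: np.sin(3 * x[0]) + 0.1 * x[0] ** 2
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))
y = np.array([f(x) for x in X])
for _ in range(10):
    X, y = marobo_like_step(f, X, y, bounds=(-3.0, 3.0), rng=rng)
print("best value found:", y.min())
```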