1. 08:30 — GPU-Accelerated Deterministic Global Optimization

Deterministic global optimization is a powerful tool with unique value in parameter estimation, model validation, and related tasks, yet the long run-time of the standard branch-and-bound (B&B) deterministic search algorithm limits its application to many practical problems of interest to engineers. One potential path to shorter run-times is the use of alternative computing hardware architectures such as graphics processing units (GPUs). Unlike CPUs, which can execute essentially any computing operation, GPUs are designed for much greater throughput of a more limited set of mathematical operations. This design enables significantly greater overall computing speed for parallelizable operations (in the GPU context, most often tensor algebra), provided the calculation volume is large enough to justify the memory transfer between hardware elements. Running B&B algorithms on GPU hardware would open the door to shorter run-times for everyday practitioners and to better utilization of supercomputers for larger global optimization problems, promising increased tractability for problems of practical importance.
Our research group has, to the best of our knowledge, pioneered the development of deterministic global optimization algorithms whose relaxation-based lower-bounding routines run on GPUs. In recent years, we introduced parallelized, GPU-compatible methods for calculating McCormick-based relaxations and their subgradients, which are integral components of B&B lower-bounding routines. We then developed what is, to our knowledge, the first GPU-based B&B algorithm to compute lower bounds from relaxation evaluations, offloading the entire lower-bounding step (typically the most computationally expensive step in B&B) to the faster GPU hardware. These previous advances established the foundation for the novel contributions of this work.
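As background, the bilinear McCormick envelope underlying such lower-bounding routines can be written in a few lines. This is a minimal scalar illustration of the standard textbook envelope, not the group's GPU implementation; the function and variable names are ours:

```python
def mccormick_bilinear(x, y, xL, xU, yL, yU):
    """Standard McCormick under-/over-estimators of w = x*y on [xL, xU] x [yL, yU].

    Evaluated at a point (x, y) in the box, 'under' is a convex
    underestimator of x*y and 'over' is a concave overestimator.
    """
    under = max(xL * y + x * yL - xL * yL, xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL, xL * y + x * yU - xL * yU)
    return under, over

# On [0,1]^2 at (0.5, 0.5), the envelope brackets the true product 0.25:
lo, hi = mccormick_bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
```

A B&B lower-bounding step minimizes such convex underestimators over each node's box; subgradients of `under` supply the linearizations used by LP-based bounding.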
In this talk, we will introduce our latest development: a custom linear programming (LP) solver designed to solve a large number of small LPs in parallel on a GPU, which enables the use of LP-based lower-bounding methods. The talk will cover the development of parallelized McCormick-based relaxations and relaxation subgradients, including their performance characteristics on GPUs (e.g., with respect to warp divergence), our implementation of the GPU-parallelized LP solver, and a numerical example comparing our algorithms to a state-of-the-art standard B&B solver. These developments represent a crucial first step toward faster, more efficient deterministic global optimization algorithms that exploit modern hardware's parallelization capabilities to obtain solutions that remain outside the reach of existing serial B&B approaches.
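The batching principle can be illustrated in NumPy. Only the principle is shown here: this sketch replaces the talk's GPU LP solver with a closed-form bilinear interval bound, and all names are hypothetical. The point is that one stacked tensor operation produces valid lower bounds for every B&B node at once, the same pattern the GPU LP solver applies to many small LPs:

```python
import numpy as np

def batched_bilinear_lb(xL, xU, yL, yU):
    """Valid lower bound of x*y on each box [xL_i, xU_i] x [yL_i, yU_i].

    The minimum of a bilinear function over a box is attained at a corner,
    so a single stacked tensor operation bounds all nodes simultaneously.
    """
    corners = np.stack([xL * yL, xL * yU, xU * yL, xU * yU])
    return corners.min(axis=0)

# Two B&B nodes: the box [-1,1] x [0,2] and the box [0,1] x [0,1].
lbs = batched_bilinear_lb(np.array([-1.0, 0.0]), np.array([1.0, 1.0]),
                          np.array([0.0, 0.0]), np.array([2.0, 1.0]))
```

On a GPU, the same tensor expression maps directly onto hardware-parallel kernels, which is what makes offloading the lower-bounding step attractive.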

2. 09:00 — Centrality of shortest paths: algorithms and complexity results

The degree centrality of a node, defined as the number of nodes adjacent to it, is often used as a measure of a node's importance to the structure of a network. This metric extends naturally to paths: the degree centrality of a path is the number of nodes adjacent to it. In this paper, we consider the problem of finding the most degree-central shortest path in an unweighted network. We propose a polynomial-time algorithm with a worst-case running time of $O(|E||V|^2\Delta(G))$, where $|V|$ is the number of vertices, $|E|$ is the number of edges, and $\Delta(G)$ is the maximum degree of the graph. We conduct a numerical study of our algorithm on synthetic and real-world networks and compare our results to the existing literature. In addition, we show that the problem becomes NP-hard on weighted graphs. We also extend the problem to other centrality measures and prove complexity results for them.
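For concreteness, the problem can be stated as executable code. This is a brute-force reference, not the paper's polynomial-time algorithm, and it takes a path's degree centrality as the number of off-path nodes adjacent to it, which is one common reading of the definition:

```python
from collections import deque

def bfs_dist(adj, s):
    """Unweighted shortest-path distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def all_shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths by walking the BFS distance DAG."""
    dist = bfs_dist(adj, s)
    paths = []
    def extend(path):
        u = path[-1]
        if u == t:
            paths.append(path)
            return
        for v in adj[u]:
            if dist.get(v) == dist[u] + 1:
                extend(path + [v])
    if t in dist:
        extend([s])
    return paths

def path_degree_centrality(adj, path):
    """Number of nodes adjacent to the path but not on it."""
    on_path = set(path)
    return len({v for u in path for v in adj[u]} - on_path)

def most_central_shortest_path(adj, s, t):
    return max(all_shortest_paths(adj, s, t),
               key=lambda p: path_degree_centrality(adj, p))

# Small example: two shortest 0-3 paths exist; [0,1,3] touches more neighbors.
adj = {0: [1, 2], 1: [0, 3, 4, 5], 2: [0, 3], 3: [1, 2], 4: [1], 5: [1]}
best = most_central_shortest_path(adj, 0, 3)
```

Enumerating all shortest paths is exponential in the worst case; the point of the paper's $O(|E||V|^2\Delta(G))$ algorithm is precisely to avoid this enumeration.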

3. 09:30 — Simple Randomized Rounding for Max-Min Eigenvalue Augmentation

We consider the max-min eigenvalue augmentation problem, which captures both the maximum algebraic connectivity augmentation and E-optimal design problems. The goal in maximum algebraic connectivity augmentation is to add $k$ edges to a graph to maximize its algebraic connectivity, while the goal in E-optimal design is to choose $k$ design points that minimize the variance of the corresponding ordinary least squares estimator in its direction of greatest variance. Two distinct research streams are dedicated to these problems, each developing approximation algorithms based on rounding optimal solutions of semidefinite programming (SDP) relaxations. Interestingly, both streams empirically evaluate "simple" randomized rounding methods but do not establish theoretical approximation guarantees for them. In this talk, we establish approximation guarantees for a simple randomized rounding method applied to the general max-min eigenvalue augmentation problem, and we investigate its empirical performance through computational experiments.
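A minimal NumPy sketch of one such "simple" scheme follows. This is an illustration under assumed conventions, not the talk's method or its guarantees: sample $k$ candidate edges with probabilities taken from hypothetical fractional SDP weights, then score the augmented graph by its algebraic connectivity (the second-smallest Laplacian eigenvalue):

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian of an n-node graph with unit edge weights."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return L

def algebraic_connectivity(n, edges):
    """Second-smallest Laplacian eigenvalue (eigvalsh sorts ascending)."""
    return np.linalg.eigvalsh(laplacian(n, edges))[1]

def simple_round(base_edges, cand_edges, probs, k, n, rng):
    """Sample k candidate edges with probabilities from fractional weights."""
    p = probs / probs.sum()
    idx = rng.choice(len(cand_edges), size=k, replace=False, p=p)
    chosen = [cand_edges[i] for i in idx]
    return chosen, algebraic_connectivity(n, base_edges + chosen)
```

Since adding edges adds a positive semidefinite term to the Laplacian, any rounded augmentation can only increase the algebraic connectivity relative to the base graph; the open question the talk addresses is how close such a rounding gets to the SDP optimum.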

4. 10:00 — Convexification of optimization problems involving the Euclidean norm

Several important open problems in chemical physics, mathematics, and operations research can be posed as nonconvex global optimization problems involving the Euclidean norm. For example, pairwise potential energy minimization, object packing and cutting, the kissing number problem, and planar facility location problems admit a representation of this form. We present new formulations and convexification strategies for problems involving the Euclidean norm and use them to solve several instances to global optimality for the first time.
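As a standard illustration of where the nonconvexity lives (these are general convexity facts, not necessarily the formulations presented in this talk): the constraint $\|x\| \le t$ is second-order-cone representable and hence convex, whereas separation constraints such as $\|x_i - x_j\| \ge d$, ubiquitous in packing and pairwise-potential models, are reverse convex. By convexity of the norm, for any $\bar{x} \neq 0$,
$$\|x_i - x_j\| \;\ge\; \frac{\bar{x}^{\top}(x_i - x_j)}{\|\bar{x}\|},$$
so the linear restriction $\bar{x}^{\top}(x_i - x_j) \ge d\,\|\bar{x}\|$ is sufficient for feasibility. Supporting-hyperplane inequalities of this kind are a common building block in convexification schemes for norm-constrained problems.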