1. 14:00 — Symplectic Extra-gradient Type Method for Solving General Non-monotone Inclusion Problem

  • Yi Zhang, Institute of Computational Mathematics and Scientific/Engineering Computing

Min-max optimization has attracted substantial interest given its critical role in training Generative Adversarial Networks and computing Nash equilibria. Because the first-order condition of a min-max problem can be written as an inclusion problem, the extragradient method, an efficient method for solving inclusion problems, has gained wide attention. Recently, accelerated versions of the extragradient method inspired by Halpern's iteration have been studied intensively. Notably, however, Halpern's iteration can suffer from computational inefficiency, since its iterative rule is a convex combination of the starting and current feasible points. In this talk, we present an overview of the fundamental theory behind a new acceleration technique, referred to as symplectic acceleration. Leveraging symplectic acceleration, we derive the symplectic extra-gradient method and its variants. Using a Lyapunov function, we show that the symplectic extra-gradient method and its variants achieve an $O(1/k^2)$ convergence rate in terms of the squared gradient norm. Furthermore, we refine the proof techniques of symplectic acceleration to establish a sharper convergence rate for symplectic extra-gradient type methods. For improved practical performance, we incorporate a line search technique into the symplectic extra-gradient framework and guarantee, from a theoretical standpoint, the convergence of the resulting method. Empirically, numerical experiments show the computational efficiency of the symplectic extra-gradient method combined with line search, which exhibits faster convergence in practice than existing extra-gradient type methods.
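As background, the classical extragradient iteration the talk builds on can be sketched as follows. This is a minimal NumPy sketch of the standard (unaccelerated) method, not the symplectic variant presented in the talk; the bilinear saddle-point example, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def extragradient(F, x0, step=0.3, iters=200):
    """Classical extragradient iteration for the inclusion problem 0 ∈ F(x):
    an extrapolation (prediction) step followed by a correction step.
    For a min-max problem min_x max_y f(x, y), take F = (grad_x f, -grad_y f)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_half = x - step * F(x)       # extrapolation step
        x = x - step * F(x_half)       # correction step using gradient at x_half
    return x

# Bilinear saddle point f(x, y) = x * y, where plain gradient descent-ascent
# diverges; the associated operator is F(x, y) = (y, -x), with solution (0, 0).
F = lambda z: np.array([z[1], -z[0]])
sol = extragradient(F, [1.0, 1.0])
```

On this bilinear example the iterates spiral into the saddle point at the origin, whereas plain gradient descent-ascent with the same step size spirals outward.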

2. 14:30 — Quantum Algorithms for Multi-objective Optimization

Quantum computing is a new computational paradigm that has the potential to disrupt certain disciplines, with (combinatorial) optimization frequently being mentioned as one of them. However, classical algorithms and heuristics can often achieve very good results quickly, leaving little room for further improvements.

Multi-objective optimization represents a class of optimization problems where the goal is to find the optimal trade-offs between multiple objective functions. Thus, the goal is usually not to find a single solution, but rather (a good approximation of) the Pareto frontier, i.e., the collection of all Pareto optimal solutions.
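To make the notion concrete, here is a minimal sketch of extracting the Pareto frontier from a set of sampled solutions, assuming minimization in every objective; the example cost vectors are illustrative:

```python
def pareto_frontier(points):
    """Return the Pareto optimal subset of cost vectors (tuples), assuming
    minimization: a point is kept iff no other point is at least as good
    in every objective and strictly better in at least one."""
    frontier = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# (3, 3) is dominated by (2, 2); the other three points are incomparable.
frontier = pareto_frontier([(1, 4), (2, 2), (4, 1), (3, 3)])
```

Sampling-based quantum optimizers would feed many candidate solutions into such a filter, so the quality of the approximation grows with the diversity of the samples.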

In this context, the sampling-based nature of many quantum optimization algorithms can be beneficial as it can quickly produce a variety of good solutions to approximate the Pareto frontier.

We demonstrate how quantum optimization can be applied efficiently to certain multi-objective combinatorial optimization problems and present promising first results indicating that multi-objective optimization could be a problem class particularly well suited to quantum computers.

3. 15:00 — Solving the semidefinite relaxation of QUBOs in matrix multiplication time, and faster with a quantum computer

Recent works on quantum algorithms for solving semidefinite optimization (SDO) problems have leveraged a quantum-mechanical interpretation of positive semidefinite matrices to develop methods that obtain quantum speedups with respect to the dimension n and the number of constraints m. While their dependence on other parameters suggests no overall speedup over classical methodologies, some quantum SDO solvers provide speedups in the low-precision regime. We exploit this fact to our advantage, and present an iterative refinement scheme for the Hamiltonian Updates algorithm of Brandao et al. (Quantum 6, 625 (2022)) to exponentially improve the dependence of their algorithm on the precision $\epsilon$, defined as the absolute gap between the primal and dual solutions. As a result, we obtain a classical algorithm to solve the semidefinite relaxation of Quadratic Unconstrained Binary Optimization problems (QUBOs) in matrix multiplication time. Provided access to a quantum read/classical write random access memory (QRAM), a quantum implementation of our algorithm exhibits an $O(ns + n^{1.5}\,\mathrm{polylog}(n, \|C\|_F, 1/\epsilon))$ running time, where $C$ is the cost matrix, $\|C\|_F$ is its Frobenius norm, and $s$ is its sparsity parameter (the maximum number of nonzero elements per row).
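The driving idea, iterative refinement, is classical: repeatedly call a fixed low-precision solver on the current residual, so the error shrinks by a constant factor per round and the total number of oracle calls scales polylogarithmically in 1/ε. A minimal sketch of this idea for linear systems (the noisy oracle below stands in for a generic low-precision solver; it is not the Hamiltonian Updates subroutine from the talk):

```python
import numpy as np

def low_precision_solve(A, b):
    """Stand-in for a fixed low-precision solver: the exact solution
    corrupted by noise of relative size ~1e-2 (the fixed 'low precision')."""
    x = np.linalg.solve(A, b)
    rng = np.random.default_rng(0)
    return x + 1e-2 * np.linalg.norm(x) * rng.standard_normal(x.shape)

def iterative_refinement(A, b, rounds=8):
    """Each round solves for the current residual with the same
    low-precision oracle; the error contracts by ~1e-2 per round,
    so accuracy improves exponentially in the number of rounds."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(rounds):
        r = b - A @ x                       # current residual
        x = x + low_precision_solve(A, r)   # low-precision correction
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = iterative_refinement(A, b)
```

Eight rounds of a 1e-2-accurate oracle already reach near machine precision, which is the exponential improvement in the ε-dependence that the refinement scheme delivers.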

4. 15:30 — Quantum Langevin Dynamics for Optimization

We initiate the study of using Quantum Langevin Dynamics (QLD) to solve optimization problems, particularly those with non-convex objective functions that present substantial obstacles for traditional gradient descent algorithms. Specifically, we examine the dynamics of a system coupled to an infinite heat bath. This interaction induces both random quantum noise and a deterministic damping effect, nudging the system towards a steady state that hovers near the global minimum of the objective function. We theoretically prove the convergence of QLD in convex landscapes, demonstrating that the average energy of the system can approach zero in the low-temperature limit, with an exponential decay rate correlated with the evolution time. Numerically, we first show the energy dissipation capability of QLD by retracing its origins to spontaneous emission. We then discuss the impact of each parameter in detail. Finally, based on observations from comparing QLD with the classical Fokker-Planck-Smoluchowski equation, we propose a time-dependent QLD in which the temperature and ℏ become time-dependent parameters; this variant can be theoretically proven to converge better than the time-independent case and also outperforms a series of state-of-the-art quantum and classical optimization algorithms in many non-convex landscapes.
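For intuition, the classical counterpart of this dynamics is overdamped Langevin dynamics: gradient descent with injected thermal noise, where the noise plays the role that quantum noise and damping from the heat bath play in QLD. A minimal classical sketch (the double-well objective, step size, and temperature below are illustrative assumptions, not parameters from the talk):

```python
import numpy as np

def langevin_descent(grad, x0, step=0.01, temperature=0.05, iters=5000, seed=0):
    """Classical overdamped Langevin dynamics: each step is a gradient
    descent step plus Gaussian thermal noise of variance 2 * step * T,
    which lets the iterate escape shallow local minima."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        noise = np.sqrt(2.0 * step * temperature) * rng.standard_normal(x.shape)
        x = x - step * grad(x) + noise
    return x

# Double-well objective f(x) = (x^2 - 1)^2 with global minima at x = +/-1;
# its gradient is f'(x) = 4x(x^2 - 1).
grad = lambda x: 4.0 * x * (x**2 - 1.0)
x = langevin_descent(grad, np.array([2.0]))
```

At low temperature the stationary distribution concentrates near the minima, so the final iterate hovers close to x = +1 or x = -1 rather than converging exactly, mirroring the steady state "hovering near the global minimum" described above.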