1 — 16:20 — The quantum central path algorithm
We propose a novel quantum algorithm for linear optimization problems based on a quantum-mechanical simulation of the central path. By combining our approach with iterative refinement techniques, we demonstrate an end-to-end asymptotic speedup over the best-performing classical algorithms in a certain sparsity regime. We then discuss how to generalize our techniques to more complex classes of convex optimization problems, including linearly constrained convex quadratic optimization and semidefinite optimization.
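For context, the central path referenced here is the standard object from interior-point theory: for a linear program $\min_x \{\, c^{\top} x : A x = b,\; x \ge 0 \,\}$ with dual variables $(y, s)$, it is the curve of solutions $(x(\mu), y(\mu), s(\mu))$, parametrized by $\mu > 0$, of

\[
A x(\mu) = b, \qquad A^{\top} y(\mu) + s(\mu) = c, \qquad x_i(\mu)\, s_i(\mu) = \mu \ \ (i = 1, \dots, n), \qquad x(\mu),\, s(\mu) > 0,
\]

which converges to an optimal primal-dual pair as $\mu \to 0^{+}$. This is the textbook definition only; the talk's specific quantum simulation of this path, and the role iterative refinement plays in it, are not spelled out here.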
2 — 16:50 — Quantum speedups for stochastic optimization
We consider the problem of minimizing a continuous function given quantum access to a stochastic gradient oracle. We provide two new methods for the special case of minimizing a Lipschitz convex function. Each method obtains a dimension-versus-accuracy trade-off that is provably unachievable classically, and we prove that one of the methods is asymptotically optimal in low-dimensional settings. Additionally, we provide quantum algorithms for computing a critical point of a smooth non-convex function at rates not known to be achievable classically. To obtain these results, we build upon the quantum multivariate mean estimation result of Cornelissen et al. (2022) and provide a general quantum variance-reduction technique of independent interest.
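For concreteness, a stochastic (sub)gradient oracle for a convex function $f : \mathbb{R}^d \to \mathbb{R}$ is usually formalized as a routine that, when queried at a point $x$, returns a random vector $g(x)$ satisfying

\[
\mathbb{E}[g(x)] \in \partial f(x), \qquad \|g(x)\| \le G \ \text{almost surely} \quad \text{(or } \mathbb{E}\|g(x) - \mathbb{E}[g(x)]\|^2 \le \sigma^2 \text{)},
\]

with quantum access meaning the oracle's randomness can be queried in superposition. This is the standard classical model, stated here only for orientation; the talk's precise quantum oracle and the resulting dimension-accuracy trade-offs are not reproduced.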
3 — 17:20 — Online learning of a panoply of quantum objects
In many quantum tasks, there is an unknown quantum object that one wishes to learn. In the online learning setting, one is tasked with adaptively refining a hypothesis to reproduce the measurement statistics of such an object. A common evaluation metric for such a strategy is its regret, or, roughly, the accumulated error in the hypothesis statistics. We prove a sublinear regret bound for learning over general subsets of positive semidefinite matrices via the regularized follow-the-leader (RFTL) algorithm and apply it to numerous settings. We also establish a variety of matrix-analysis results useful in quantum information theory, including a generalization of Pinsker's inequality to arbitrary positive semidefinite operators with possibly different traces.
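For reference, with loss functions $\ell_t$ revealed online and hypotheses $X_t$ drawn from a feasible set $\mathcal{K}$, the regret after $T$ rounds is

\[
R_T \;=\; \sum_{t=1}^{T} \ell_t(X_t) \;-\; \min_{X \in \mathcal{K}} \sum_{t=1}^{T} \ell_t(X),
\]

and a sublinear bound means $R_T = o(T)$. The Pinsker inequality being generalized reads, in its standard quantum form for density matrices $\rho, \sigma$,

\[
\tfrac{1}{2} \|\rho - \sigma\|_1^2 \;\le\; D(\rho \,\|\, \sigma), \qquad D(\rho \,\|\, \sigma) = \operatorname{Tr}\bigl[\rho (\ln \rho - \ln \sigma)\bigr].
\]

These are the standard definitions only; the talk's extension to positive semidefinite operators with differing traces is not stated here.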