1. 14:00 — An optimized composite gradient method for minimizing the gradient mapping

The gradient mapping norm is a strong and easily verifiable stopping criterion for first-order methods on composite problems. When the objective exhibits the quadratic growth property, the gradient mapping norm minimization problem can be solved by online, parameter-free, and adaptive first-order schemes with near-optimal worst-case rates. No such methods can be applied when quadratic growth is absent. In this work, we propose a quasi-online parameter-free method addressing the entire class of composite problems with state-of-the-art optimal worst-case rates. Preliminary simulation results suggest that our scheme is highly competitive in practice with existing approaches, including those that rely on regularization and restarting strategies.
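For readers unfamiliar with the stationarity measure used above, the sketch below illustrates the gradient mapping norm as a stopping criterion for a plain proximal gradient method on a composite objective $f(x) + \lambda\|x\|_1$. The problem data, step-size rule, and tolerance are illustrative assumptions, not the method proposed in the talk.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def gradient_mapping_norm(x, grad_f, prox_h, step):
    """|| (x - prox_{step*h}(x - step*grad_f(x))) / step ||, the usual
    stationarity measure for composite problems."""
    x_plus = prox_h(x - step * grad_f(x), step)
    return np.linalg.norm(x - x_plus) / step

# Illustration: proximal gradient on f(x) = 0.5*||Ax - b||^2 + lam*||x||_1,
# stopping once the gradient mapping norm falls below a tolerance.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, s: soft_threshold(v, s * lam)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part

x = np.zeros(100)
for _ in range(5000):
    if gradient_mapping_norm(x, grad_f, prox_h, step) <= 1e-6:
        break
    x = prox_h(x - step * grad_f(x), step)
```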

2. 14:30 — ** CANCELLED ** Regularized Stein Variational Gradient Flow

  • Ye He, Georgia Institute of Technology

The Stein Variational Gradient Descent (SVGD) algorithm is a deterministic particle method for sampling. However, a mean-field analysis reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow) only provides a constant-order approximation to the Wasserstein Gradient Flow corresponding to the KL-divergence minimization. In this work, we propose the Regularized Stein Variational Gradient Flow, which interpolates between the Stein Variational Gradient Flow and the Wasserstein Gradient Flow. We establish various theoretical properties of the Regularized Stein Variational Gradient Flow (and its time-discretization) including convergence to equilibrium, existence and uniqueness of weak solutions, and stability of the solutions. We provide preliminary numerical evidence of the improved performance offered by the regularization.
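As background, the following sketch implements the standard SVGD particle update (not the regularized gradient flow proposed in the talk), using an RBF kernel and a toy Gaussian target; the bandwidth, step size, and particle count are arbitrary illustrative choices.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """Return the RBF kernel matrix K[i, j] = k(x_i, x_j) and the gradients
    gradK[i, j] = d/dx_i k(x_i, x_j)."""
    diffs = X[:, None, :] - X[None, :, :]        # (n, n, d), diffs[i, j] = x_i - x_j
    sq = np.sum(diffs ** 2, axis=-1)
    K = np.exp(-sq / (2 * h ** 2))
    gradK = -diffs / h ** 2 * K[:, :, None]
    return K, gradK

def svgd_step(X, grad_log_p, eps=0.05, h=1.0):
    """One standard SVGD update:
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    n = X.shape[0]
    K, gradK = rbf_kernel(X, h)
    drift = K @ grad_log_p(X) / n            # pulls particles toward high density
    repulsion = gradK.sum(axis=0) / n        # pushes particles apart
    return X + eps * (drift + repulsion)

# Toy target: standard 2-D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(1)
X = 3.0 * rng.standard_normal((100, 2))
for _ in range(500):
    X = svgd_step(X, lambda Z: -Z)
```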

3. 15:00 — Tangent Subspace Descent via Discontinuous Subspace Selections on Fixed-Rank Manifolds

The tangent subspace descent method (TSD) extends the coordinate descent algorithm to manifold domains. The key insight underlying TSD is to draw an analogy between coordinate blocks in Euclidean space and tangent subspaces of a manifold. The core principle behind ensuring convergence of TSD for smooth functions is the appropriate choice of subspace at each iteration. Previously, it was shown that appropriate subspaces can always be chosen on the broad class of manifolds known as naturally reductive homogeneous spaces. In this talk, we provide the first instances of TSD for manifolds outside of this class. The main idea underlying these new instances is the use of discontinuous subspace selections. As a result of our developments, we derive new and efficient methods for large-scale optimization on the fixed-rank and fixed-rank positive semidefinite matrix manifolds.
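To make the coordinate-block analogy concrete, here is a heavily simplified sketch on the fixed-rank matrix manifold: the Euclidean gradient is restricted to one of two "tangent blocks" (directions that move only the column space or only the row space) before a truncated-SVD retraction. The alternating block choice and the toy low-rank approximation problem are illustrative assumptions, not the discontinuous subspace-selection rule from the talk.

```python
import numpy as np

def svd_retract(Y, r):
    """Map an arbitrary matrix back onto the rank-r manifold via truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def block_tangent_grad(X, G, r, block):
    """Restrict the Euclidean gradient G to one 'tangent block' at X:
    'left'  -> directions of the form W @ V^T (update the column space),
    'right' -> directions of the form U @ W   (update the row space).
    Both restrictions are genuine tangent vectors of the fixed-rank manifold."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    return G @ (V @ V.T) if block == "left" else (U @ U.T) @ G

# Toy use: alternate between the two blocks to minimize 0.5 * ||X - B||_F^2 over
# rank-r matrices (illustrative only; not the selection rule of the talk).
rng = np.random.default_rng(2)
B = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 20))   # rank-5 target
r, step = 5, 0.5
X = svd_retract(rng.standard_normal((30, 20)), r)
for k in range(200):
    G = X - B                                    # Euclidean gradient
    xi = block_tangent_grad(X, G, r, "left" if k % 2 == 0 else "right")
    X = svd_retract(X - step * xi, r)
```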

4. 15:30 — Randomized Riemannian Submanifold Subgradient Method for Optimization over Stiefel Manifold

In this talk, we present the randomized Riemannian submanifold subgradient method (RSSM), a lightweight "block-coordinate"-type algorithm for weakly convex optimization over the Stiefel manifold. We show that RSSM finds an $\epsilon$-nearly stationary point in $O(\epsilon^{-4})$ iterations. To the best of our knowledge, this is the first convergence guarantee for a coordinate-type algorithm tackling non-convex, non-smooth optimization over the Stiefel manifold.

This is joint work with Andy Yat-Ming Cheung, Jinxin Wang, and Man-Chung Yue.
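For context, the sketch below performs plain (full, non-randomized) Riemannian subgradient steps on the Stiefel manifold for a simple nonsmooth objective $f(X) = \|YX\|_1$, using the tangent-space projection and a QR retraction with a diminishing step size; the randomized submanifold ingredient of RSSM and the specific problems studied in the talk are not reproduced here.

```python
import numpy as np

def proj_tangent_stiefel(X, G):
    """Project an ambient (sub)gradient G onto the tangent space of the Stiefel
    manifold at X: G - X * sym(X^T G)."""
    S = X.T @ G
    return G - X @ (S + S.T) / 2

def qr_retract(Y):
    """QR retraction: take the orthonormal factor, with column signs fixed so
    that diag(R) > 0 and retracting a Stiefel point returns that same point."""
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

# Illustration: minimize the nonsmooth objective f(X) = ||Y X||_1 over the
# Stiefel manifold with diminishing-step Riemannian subgradient steps.
rng = np.random.default_rng(3)
Y = rng.standard_normal((200, 30))
X = np.linalg.qr(rng.standard_normal((30, 5)))[0]   # random Stiefel point
for k in range(1, 301):
    G = Y.T @ np.sign(Y @ X)                 # Euclidean subgradient of ||Y X||_1
    xi = proj_tangent_stiefel(X, G)
    X = qr_retract(X - (0.1 / np.sqrt(k)) * xi)
```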