1 — 08:30 — Away-step Frank-Wolfe algorithm for multiobjective optimization on polytopes
We discuss an away-step Frank-Wolfe algorithm for solving multiobjective optimization problems over polytopes. We prove that every limit point of the sequence generated by the algorithm is a weak Pareto optimal solution and, under additional assumptions, we show linear convergence of the whole sequence to a Pareto optimal solution. Each iteration requires solving a nonsmooth subproblem; we propose solution strategies for these subproblems on specific polytopes, as sketched below.
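As a rough illustration of the kind of subproblem involved, the following is a minimal sketch of one away-step direction computation on the unit simplex, whose vertices are the coordinate vectors $e_1,\dots,e_n$. The multiobjective toward-vertex is found by minimizing the worst-case linearization $\max_i \langle \nabla f_i(x), v - x\rangle$ over vertices; the selection rule between the Frank-Wolfe and away directions, and all function names, are illustrative assumptions, not the talk's exact procedure.

```python
import numpy as np

def awfw_direction(x, grads, active, weights):
    """Illustrative away-step FW direction on the unit simplex.

    grads:   list of gradients nabla f_i(x), one per objective.
    active:  indices of vertices carrying positive weight in x.
    weights: convex-combination weights of x over the vertices.
    """
    G = np.stack(grads)                    # (m, n): rows are nabla f_i(x)
    gx = G @ x                             # <g_i, x> for each objective i
    slope = (G - gx[:, None]).max(axis=0)  # max_i <g_i, e_j - x> per vertex j
    s = int(slope.argmin())                # toward-vertex (min-max oracle)
    away = (gx[:, None] - G).max(axis=0)   # max_i <g_i, x - e_j> per vertex j
    a = min(active, key=lambda j: away[j]) # best away vertex in the support
    n = len(x)
    if slope[s] <= away[a]:                # steeper worst-case descent wins
        return np.eye(n)[s] - x, 1.0       # FW step, max step size 1
    return x - np.eye(n)[a], weights[a] / (1.0 - weights[a])  # away step

# toy call: two objectives on the 3-simplex, x at the barycenter
x = np.full(3, 1 / 3)
grads = [np.array([1.0, 2.0, 0.5]), np.array([0.3, 1.5, 2.0])]
d, gmax = awfw_direction(x, grads, active=[0, 1, 2], weights=x)
```

On the simplex the linear minimization oracle reduces to a scan over the $n$ coordinate vertices; for other polytopes the same min-max subproblem requires a dedicated solver, which is the point of the proposed strategies.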
2 — 09:00 — A Near-optimal Method for Quadratic Constrained Composite Non-convex Non-smooth Problems
This paper studies first-order methods (FOMs) for solving \emph{composite non-convex non-smooth} optimization with linear constraints. Recently, lower complexity bounds for FOMs to find a (near) $\epsilon$-stationary point of this problem class were established; however, no algorithm attaining these bounds had been developed. This paper presents a primal-dual recovery method for the targeted problem. The oracle complexity of the proposed method for finding a (near) $\epsilon$-stationary point matches the lower bounds up to a logarithmic factor, so the method is essentially unimprovable. We illustrate the advantages of the proposed algorithm over existing methods in numerical experiments.
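The abstract does not spell out the method, but the general primal-dual template for linearly constrained composite problems can be sketched as an inexact augmented Lagrangian loop. Everything below is an assumption for illustration: the $\ell_1$ regularizer standing in for the non-smooth term, the proximal-gradient inner solver, and all parameter names; the talk's recovery mechanism and complexity analysis are not reproduced here.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_alm(grad_f, A, b, x0, lam=1.0, rho=10.0, eta=1e-3,
                inner_iters=200, outer_iters=50):
    """Generic inexact augmented Lagrangian sketch for
        min_x f(x) + lam * ||x||_1   s.t.   A x = b,
    with f smooth and possibly non-convex.  Inner loop: proximal
    gradient on the augmented Lagrangian; y is the dual estimate.
    """
    x, y = x0.copy(), np.zeros(A.shape[0])
    for _ in range(outer_iters):
        for _ in range(inner_iters):               # inexact inner solve
            g = grad_f(x) + A.T @ (y + rho * (A @ x - b))
            x = prox_l1(x - eta * g, eta * lam)
        y = y + rho * (A @ x - b)                  # dual ascent update
    return x, y
```

A near-$\epsilon$-stationary point in this setting is one where both the (prox-)gradient residual and the constraint violation $\|Ax - b\|$ are small; the dual iterate $y$ is what makes the certificate primal-dual.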
3 — 09:30 — Global and local convergence guarantees for bounded-rank optimization with a desingularization geometry
Low-rank optimization is difficult because the set of bounded-rank matrices is a non-smooth and non-convex algebraic variety. Existing paradigms include fixed-rank optimization and the LR parameterization. Each lacks either fast convergence guarantees near points of lower rank or the guarantee of accumulating only at critical points. We seek optimization algorithms that overcome these flaws.
Khrulkov and Oseledets (2018) proposed to smoothly parameterize the low-rank variety via a desingularization. This makes it possible to recast the optimization problem on a smooth manifold. Building on their ideas, we propose to endow this desingularization with a Riemannian metric so as to take advantage of existing Riemannian optimization algorithms. We study its geometry and show that this parameterization has multiple theoretical benefits: notably, it preserves the compactness of sublevel sets and Polyak--Łojasiewicz-type conditions. We deduce global and local convergence guarantees for a range of algorithms, even at points of non-maximal rank. Finally, we experimentally compare our algorithms with existing techniques on matrix completion problems.
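The desingularization geometry itself requires the machinery of the talk; as a point of reference, here is a minimal sketch of the LR-parameterization baseline mentioned above, applied to a matrix completion instance. The objective $g(L, R) = \tfrac12 \|\Pi_\Omega(LR^\top - M)\|_F^2$, the step size, and the toy data are all illustrative assumptions.

```python
import numpy as np

def lr_completion_step(L, R, M_obs, mask, step=0.01):
    """One gradient step on g(L, R) = 0.5 * ||mask * (L R^T - M_obs)||_F^2,
    the LR parameterization baseline for rank-r matrix completion."""
    E = mask * (L @ R.T - M_obs)   # residual on the observed entries
    gL, gR = E @ R, E.T @ L        # gradients w.r.t. L and R
    return L - step * gL, R - step * gR

# toy usage: fit a rank-2 matrix from 50% observed entries
rng = np.random.default_rng(0)
m, n, r = 30, 20, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = (rng.random((m, n)) < 0.5).astype(float)
L, R = rng.standard_normal((m, r)), rng.standard_normal((n, r))
for _ in range(1000):
    L, R = lr_completion_step(L, R, mask * M, mask)
```

The known weakness of this baseline near points of lower rank (the factors can degenerate while the gradient in $(L, R)$ vanishes) is precisely the failure mode the desingularization approach is designed to avoid.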