1. 08:30 — A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction

We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems. The proposed stochastic proximal point algorithm incorporates a variance reduction mechanism, and the implicit SPP updates are computed by applying an inexact semismooth Newton method to a subproblem that is typically small-scale. We establish a convergence theory that takes the inexactness of the SPP steps into account and that is consistent with existing convergence guarantees of (proximal) stochastic variance-reduced gradient methods. Numerical experiments show that the proposed algorithm compares favorably with other state-of-the-art methods and achieves higher robustness with respect to the step size selection.
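As a rough illustration (not the authors' exact algorithm), the sketch below implements one variance-reduced SPP step under the assumption that each component loss has the linear-predictor form f_i(x) = phi(a_i'x), here with a smooth logistic phi. Under this assumption the stationarity condition of the proximal subproblem collapses to a single scalar equation, so a plain Newton iteration suffices; the semismooth Newton solver of the talk targets the more general nonsmooth composite setting. All names (spp_vr_step, g_snap, etc.) and the SVRG-style correction are illustrative assumptions.

```python
import numpy as np

def dphi(t, y):                      # derivative of the logistic loss log(1 + exp(-y*t))
    return -y / (1.0 + np.exp(y * t))

def d2phi(t, y):                     # second derivative of the logistic loss
    s = 1.0 / (1.0 + np.exp(y * t))
    return (y * y) * s * (1.0 - s)

def spp_vr_step(x, x_snap, g_snap, a, y, alpha, newton_iters=20, tol=1e-12):
    """One variance-reduced SPP step for the sampled component f_i(x) = phi(a.x).

    Solves  min_x  phi(a.x) + v.x + ||x - x_k||^2 / (2*alpha),
    where v = g_snap - a*dphi(a.x_snap) is an SVRG-style correction built from
    a snapshot point x_snap and the full gradient g_snap at that snapshot.
    Setting t = a.x, stationarity reduces to the scalar equation
        t + alpha*||a||^2 * dphi(t) = a.(x - alpha*v),
    which is solved with a few Newton steps (the small-scale subproblem).
    """
    v = g_snap - a * dphi(a @ x_snap, y)
    c = a @ (x - alpha * v)
    q = alpha * (a @ a)
    t = a @ x                        # warm start at the current iterate
    for _ in range(newton_iters):
        r = t + q * dphi(t, y) - c
        if abs(r) < tol:
            break
        t -= r / (1.0 + q * d2phi(t, y))
    # recover the full update from the stationarity condition
    return x - alpha * v - alpha * a * dphi(t, y)
```

Whatever the loss, the inner solve involves only the scalar t (or a few coupled scalars for mini-batches), which is why the Newton subproblem stays small even when x is high-dimensional.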

2. 09:00 — Applying random projection techniques to large-scale nonconvex optimization problems

Random projection techniques based on the Johnson-Lindenstrauss lemma are used to randomly aggregate the constraints or variables of optimization problems while approximately preserving their optimal values, which leads to smaller-scale optimization problems. In this talk, we show a few applications of random matrix techniques for constructing random subspace algorithms that iteratively solve smaller-scale subproblems and discuss their convergence speed. This talk is based on joint works with Ryota Nozawa and Pierre-Louis Poirion.
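As a minimal numpy illustration of the aggregation idea (the talk's iterative subspace algorithms are more elaborate), a Gaussian sketching matrix T of the Johnson-Lindenstrauss type approximately preserves norms with high probability, so replacing m constraints A x = b with k random aggregations (T A) x = T b yields a much smaller system whose solutions approximately satisfy the original one. The dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20_000, 50, 200

# Johnson-Lindenstrauss sketch: k random Gaussian combinations of m coordinates.
T = rng.normal(scale=1.0 / np.sqrt(k), size=(k, m))

u = rng.normal(size=m)
print(np.linalg.norm(T @ u) / np.linalg.norm(u))   # close to 1 with high probability

# Aggregating the m constraints A x = b into k << m random combinations.
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n)
A_small, b_small = T @ A, T @ b                    # reduced system (T A) x = T b
```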

3. 09:30 — Random projections for quadratically constrained quadratic programming

We consider quadratically constrained quadratic programs (QCQPs) in a general form with both linear and quadratic constraints. We will show that, if the number of quadratic constraints m and the number of decision variables n are large enough, there exist two random matrices T (with k rows and m columns) and R (with r rows and n columns) that can be applied to the parameters and variables of the original problem to obtain another QCQP with fewer decision variables and quadratic constraints, which is easier and faster to solve.
We will provide theoretical results relating the projected program to the original one in terms of both feasibility and optimality, as well as a procedure that exploits a solution of the projected program to recover a solution of the original one.
Numerical experiments on several classes of QCQPs will be presented.
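The sketch below illustrates one plausible form of the construction (the talk's precise projection scheme and scalings may differ): variables are restricted to a random subspace x = R'y, and the m quadratic constraints are aggregated into k nonnegative random combinations, so that every aggregated constraint is implied by the original ones. The function name project_qcqp and the choice of uniform aggregation weights are illustrative assumptions.

```python
import numpy as np

def project_qcqp(Q, c, b, r, k, rng=np.random.default_rng(0)):
    """Build a smaller QCQP from the data of
        min x'Q[0]x + c[0]'x   s.t.   x'Q[i]x + c[i]'x <= b[i-1],  i = 1..m.
    Variables are restricted to x = R'y with R an r x n Gaussian matrix, and
    the m quadratic constraints are aggregated into k nonnegative random
    combinations (each aggregated constraint is implied by the originals).
    """
    m, n = len(Q) - 1, Q[0].shape[0]
    R = rng.normal(scale=1.0 / np.sqrt(r), size=(r, n))
    T = rng.uniform(size=(k, m))                  # nonnegative aggregation weights
    Qy = [R @ Qi @ R.T for Qi in Q]               # quadratic forms in the y-variables
    cy = [R @ ci for ci in c]
    Q_agg = [sum(T[j, i] * Qy[i + 1] for i in range(m)) for j in range(k)]
    c_agg = [sum(T[j, i] * cy[i + 1] for i in range(m)) for j in range(k)]
    b_agg = T @ np.asarray(b, dtype=float)
    return Qy[0], cy[0], Q_agg, c_agg, b_agg      # reduced QCQP in y (r vars, k constraints)
```

A candidate solution y* of the reduced problem is mapped back as x = R'y*; the talk's recovery procedure then turns such a point into a (near-)feasible solution of the original QCQP.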

4. 10:00 — Random subspace Newton method for unconstrained non-convex optimization

In this talk, we present a randomized subspace regularized Newton method for non-convex functions. We show that our method converges globally under appropriate assumptions and that its convergence rate is the same as that of the full regularized Newton method. Furthermore, under some additional assumptions, we obtain a local linear convergence rate and prove that this rate is the best one can hope for when using random subspaces, under local strong convexity assumptions. Finally, we prove that under rank-deficiency conditions one can obtain super-linear convergence in the subspace setting.
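As a minimal sketch of the core update (the talk's method additionally chooses the regularization parameter adaptively and embeds the step in a globally convergent scheme), one iteration samples a random subspace, solves a small regularized Newton system in it, and maps the step back to the full space. The function names and the fixed sigma below are illustrative assumptions.

```python
import numpy as np

def subspace_newton_step(grad, hess, x, s, sigma, rng):
    """One random-subspace regularized Newton step.

    Samples P in R^{s x n} (Gaussian), solves the s-dimensional system
        (P H P' + sigma*I) d = -P g,
    and returns x + P'd, so only an s x s system is factorized per iteration.
    """
    n = x.size
    P = rng.normal(scale=1.0 / np.sqrt(s), size=(s, n))
    g, H = grad(x), hess(x)
    Hs = P @ H @ P.T + sigma * np.eye(s)
    d = np.linalg.solve(Hs, -(P @ g))
    return x + P.T @ d

# Toy usage: a separable quartic in n = 200 variables, s = 20 subspace.
rng = np.random.default_rng(0)
f_grad = lambda x: 4 * x**3 + x
f_hess = lambda x: np.diag(12 * x**2 + 1)
x = rng.normal(size=200)
for _ in range(300):
    x = subspace_newton_step(f_grad, f_hess, x, s=20, sigma=1e-3, rng=rng)
print(np.linalg.norm(x))   # driven toward 0, the unique minimizer
```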