2 — 15:30 — Random Algorithms for Large-Scale Optimization: from expansive Markov operators to stochastic tomography in X-ray imaging
Randomization of algorithms is an established technique for dealing with large-scale optimization problems. The convergence analysis, however, is for the most part limited to convex problems and to almost sure convergence of the ergodic sequence or similar statements, so the conventional analytic strategies apply only to a subset of important problems. Viewing these algorithms as Markov chains on a space of probability measures allows one to go beyond almost sure convergence, but here the state of the art for the theory is not much better, being limited to Markov operators that are contractions or, until only recently, nonexpansive. In the convergence theory for deterministic algorithms, we have shown that calmness of set-valued mappings, balanced with metric subregularity at fixed points, suffices to guarantee local convergence, with rates, of any algorithm that can be written as a fixed point iteration. Lifting these properties to their stochastic analogs on spaces of probability measures yields a general, but conceptually simple, framework for the analysis of a broad class of stochastic algorithms, providing sufficient conditions for local convergence in distribution, with rates, in a wide variety of settings. To demonstrate the potential of this approach, we survey two new results enabled by our framework. The first concerns the computation of Fréchet means of phylogenetic trees in a Hadamard space. The second, on which we spend a little more time, concerns convergence in distribution of a classical stochastic gradient descent algorithm applied to a nonconvex stochastic tomography problem in X-ray imaging.
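As background, the two regularity notions invoked above have standard formulations (in the sense of Dontchev and Rockafellar); the notation below is generic and not taken from the talk. For a set-valued mapping Phi between metric spaces, metric subregularity bounds the distance to the solution set by the residual, and calmness of Phi is equivalent to metric subregularity of its inverse:

```latex
% Metric subregularity of \Phi at \bar{x} for \bar{y} \in \Phi(\bar{x}):
\exists\, \kappa \ge 0,\ \exists\, U \ni \bar{x}:\quad
d\bigl(x,\, \Phi^{-1}(\bar{y})\bigr) \;\le\; \kappa\, d\bigl(\bar{y},\, \Phi(x)\bigr)
\qquad \forall\, x \in U.

% Calmness of \Phi at \bar{x} for \bar{y} \in \Phi(\bar{x}):
\exists\, \kappa \ge 0,\ \exists\, U \ni \bar{x},\ V \ni \bar{y}:\quad
\Phi(x) \cap V \;\subseteq\; \Phi(\bar{x}) + \kappa\, \|x - \bar{x}\|\, \mathbb{B}
\qquad \forall\, x \in U.
```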
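To fix ideas for the second application, the following is a toy sketch of constant-step stochastic gradient descent on a nonconvex least-squares reconstruction with Beer-Lambert transmission data, a generic stand-in for X-ray tomography. It is not the model analyzed in the talk; the dimensions, the synthetic system matrix, and the step size are all illustrative assumptions.

```python
import numpy as np

# Toy sketch only: SGD on f(x) = (1/m) sum_i (exp(-a_i.x) - b_i)^2 / 2,
# a nonconvex tomography-flavored objective. Not the speakers' model.

rng = np.random.default_rng(0)
m, n = 500, 50                         # number of rays, number of voxels
A = rng.uniform(0.0, 1.0, (m, n)) / n  # synthetic ray/voxel intersection lengths
x_true = rng.uniform(0.0, 1.0, n)      # "true" attenuation image
b = np.exp(-A @ x_true)                # noiseless transmission measurements

def grad_i(x, i):
    """Gradient of the i-th summand f_i(x) = (exp(-a_i.x) - b_i)^2 / 2."""
    t = np.exp(-A[i] @ x)              # predicted transmission along ray i
    return -(t - b[i]) * t * A[i]

x = np.zeros(n)
step = 10.0                            # constant step, chosen ad hoc here
for _ in range(200_000):
    i = rng.integers(m)                # sample one ray uniformly at random
    x -= step * grad_i(x, i)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In this noiseless setting the true image interpolates all measurements, so even a constant step drives the iterates toward the solution; in general, the admissible step sizes and the resulting rates of convergence in distribution are exactly what regularity conditions of the kind above are meant to quantify.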