1 — 14:00 — Optimal L-p Risk Minimization
Convex risk measures play a foundational role in stochastic optimization. However, in contrast to risk-neutral models, their applications remain limited due to the lack of efficient solution methods. In particular, the mean L-p semi-deviation is a classic risk minimization model, but solving it is highly challenging due to the composition of concave and convex functions and the lack of uniform Lipschitz continuity. In this talk, we discuss recent progress on the design of efficient algorithms for L-p risk minimization, including a novel lifting reformulation to handle the concave-convex composition and a new stochastic approximation method to address the lack of Lipschitz continuity. We establish an upper bound on the sample complexity of this approach and show that the bound is not improvable for L-p risk minimization in general.
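For orientation, one standard way to write the mean upper L-p semideviation model (the talk's exact formulation may differ; p >= 1 and a trade-off coefficient c in [0,1] are assumed here) is

\[
\min_{x \in X} \; \mathbb{E}[F(x,\xi)] + c\,\Big(\mathbb{E}\big[\big(F(x,\xi) - \mathbb{E}[F(x,\xi)]\big)_+^{\,p}\big]\Big)^{1/p},
\]

where the outer p-th root composed with an inner expectation is the concave-convex structure mentioned above, and the root's unbounded slope near zero is the source of the non-Lipschitz behaviour.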
2 — 14:30 — Epi-Consistency of Multistage Stochastic Optimization Problems
We study the statistical consistency of sample average approximations of infinite-horizon multistage stochastic optimization problems. Existing results establish the consistency of minima using arguments that rely on the stage cost functions being bounded. When this does not hold, a different approach is required. To this end, we develop an existence and consistency result for the approximation of fixed-point problems in metric spaces which can be applied to infinite-horizon problems. Utilising the Attouch-Wets distance on a suitable space of approximating functions, together with results on the epi-convergence of expectation functions with varying measures and integrands, we show how epi-consistency can be assured when the stage cost functions are unbounded. The arguments are also applied to finite-horizon multistage stochastic optimization problems, and a number of examples are considered.
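As a rough illustration of the fixed-point structure (notation assumed here; the talk's setting may be more general), a discounted infinite-horizon value function V satisfies

\[
V(x) \;=\; \inf_{u \in U(x)} \mathbb{E}\big[\, c(x,u,\xi) + \gamma\, V\big(g(x,u,\xi)\big) \big], \qquad \gamma \in (0,1),
\]

and the sample average approximation replaces the expectation by an empirical average over drawn scenarios; epi-consistency then concerns the convergence of the approximate fixed points and their minimizers to those of the true problem.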
3 — 15:00 — Three-stage Stochastic Programming Is As Easy As Classical Stochastic Optimization
Multistage stochastic programming poses significant challenges due to the curse of dimensionality. In this talk, we study three-stage problems, which are often perceived as more challenging than classical stochastic optimization. Remarkably, we develop a randomized gradient-based algorithm that achieves the same complexity bound as classical stochastic optimization. This outcome challenges the conventional wisdom regarding the difficulty of three-stage problems, suggesting that prevailing beliefs may not always hold. The development of such a randomized gradient-based algorithm opens exciting directions for algorithmic design in multistage stochastic programming.
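In the usual nested form (notation assumed here rather than taken from the talk), a three-stage problem reads

\[
\min_{x_1 \in X_1} f_1(x_1) + \mathbb{E}_{\xi_2}\Big[ \min_{x_2 \in X_2(x_1,\xi_2)} f_2(x_2,\xi_2) + \mathbb{E}_{\xi_3 \mid \xi_2}\big[ \min_{x_3 \in X_3(x_2,\xi_3)} f_3(x_3,\xi_3) \big] \Big],
\]

so the second-stage objective is itself the optimal value of a stochastic program, which is what makes nested sampling appear substantially harder than the classical two-stage case.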
4 — 15:30 — Risk-Averse and Distributionally Robust Modeling of Multistage Stochastic Programming
It is known that there is a duality relation between the risk-averse and distributionally robust approaches to stochastic programming. Extending it from the static to the dynamic (multistage) setting is not straightforward and involves such basic concepts as conditional counterparts of risk-averse/distributionally robust functionals, dynamic equations, and time consistency of nested formulations of multistage stochastic programs. In this talk we present a point of view on these topics in the frameworks of Stochastic Programming, Stochastic Optimal Control, and Markov Decision Processes (MDPs).
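The static duality referred to is the standard dual representation of a coherent risk measure: for a suitable ambiguity set \mathfrak{A} of probability measures,

\[
\rho(Z) \;=\; \sup_{Q \in \mathfrak{A}} \mathbb{E}_Q[Z],
\]

so minimizing \rho of a loss is equivalent to a distributionally robust (worst-case expectation) problem over \mathfrak{A}; the talk concerns how this correspondence, and notions such as time consistency, carry over to conditional and nested multistage formulations.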