1 — 14:00 — Assessing solution quality in risk-averse stochastic programs
In stochastic programming algorithms, assessing whether a candidate solution is near-optimal is of vital importance. For risk-neutral stochastic programs, this is typically done by estimating the optimality gap with a Monte Carlo estimator. For risk-averse stochastic programs, however, such estimators can underestimate the true optimality gap because the Monte Carlo estimator of a risk measure is biased. In this presentation, we propose a Monte Carlo method that yields valid estimates of the optimality gap in risk-averse stochastic programs with a coherent risk measure. Our method avoids this bias by using two independent samples, each estimating a different component of the optimality gap. We address both the tractability of our approach and the quality of the resulting optimality gap estimates.
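A minimal sketch of the two-sample idea, using CVaR as the coherent risk measure; the cost function `cost` and the sampled-problem solver `solve_saa` are hypothetical placeholders, not the estimator proposed in the talk.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Simplified empirical CVaR_alpha: mean of (roughly) the worst (1 - alpha) fraction."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(int(np.floor((1.0 - alpha) * len(losses))), 1)   # tail size, at least one loss
    return losses[-k:].mean()

def gap_estimate(x_hat, sample_upper, sample_lower, cost, solve_saa, alpha=0.95):
    """Two-sample optimality-gap estimate for a risk-averse program.

    sample_upper estimates the candidate's risk (an upper bound on the optimal value);
    sample_lower feeds an independent sampled problem whose optimal value serves as
    the lower-bound estimate, so the two components do not share a sample.
    """
    upper = cvar([cost(x_hat, xi) for xi in sample_upper], alpha)
    x_star = solve_saa(sample_lower, alpha)                  # hypothetical SAA solver
    lower = cvar([cost(x_star, xi) for xi in sample_lower], alpha)
    return upper - lower
```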
2 — 14:30 — An Implementation of the Adaptive Partition Method for Stochastic Bilevel Problems
The Adaptive Partition Method (APM) has proven to be an efficient way to solve two-stage stochastic problems. The underlying idea is that many scenarios carry similar information and can therefore be aggregated. The method seeks a partition of the sample space and solves an aggregated master problem that leads to the same optimal solution as the original problem. This work focuses on extending the APM to stochastic bilevel problems. For a first implementation, we identify the problem structures that are most amenable to the method and then discuss how to handle problems in which this structure is absent. We complement our work with numerical experiments, which show that APM is a valid approach for these problems.
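For intuition, a rough Python sketch of the aggregate-and-refine loop behind APM for two-stage problems (not the bilevel extension of the talk); `solve_aggregated` and `second_stage_duals` are hypothetical helpers supplied by the user.

```python
import numpy as np

def adaptive_partition(scenarios, solve_aggregated, second_stage_duals, max_iter=50):
    """APM sketch: solve a master problem over cell-averaged scenarios, then split
    cells whose scenarios disagree in their second-stage dual information."""
    partition = [list(range(len(scenarios)))]          # start with a single cell
    for _ in range(max_iter):
        # aggregate: represent each cell by the average of its scenarios
        aggregated = [np.mean([scenarios[i] for i in cell], axis=0) for cell in partition]
        x = solve_aggregated(aggregated, weights=[len(cell) for cell in partition])

        # refine: within each cell, group scenarios by their (rounded) dual vectors
        refined = []
        for cell in partition:
            groups = {}
            for i in cell:
                key = tuple(np.round(second_stage_duals(x, scenarios[i]), 6))
                groups.setdefault(key, []).append(i)
            refined.extend(groups.values())

        if len(refined) == len(partition):             # no cell was split: done
            return x, partition
        partition = refined
    return x, partition
```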
3 — 15:00 — Decision-focused predictions via pessimistic bilevel optimization: a computational study
Dealing with uncertainty in optimization parameters is an important and longstanding challenge. Typically, uncertain parameters are first predicted as accurately as possible, and then a deterministic optimization problem is solved with those predictions. However, the decisions produced by this so-called predict-then-optimize procedure can be highly sensitive to the uncertain parameters. In this work, we contribute to recent efforts on decision-focused predictions, i.e., predictive models built with the goal of minimizing a regret measure on the decisions taken with their predictions.
We formulate exact expected regret minimization as a pessimistic bilevel optimization model. Then, using duality arguments, we reformulate it as a non-convex quadratic optimization problem. Finally, we present several computational techniques to achieve tractability. We report extensive computational results on shortest-path instances with uncertain cost vectors. Our results indicate that our approach can improve training performance over the approach of Elmachtoub and Grigas (2022), a state-of-the-art method for decision-focused learning.
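To make the regret measure concrete on this problem class, a small illustrative sketch of decision regret for a shortest-path instance; it assumes a `networkx.DiGraph` whose arcs are listed in `edges`, and it is not the bilevel training procedure described above.

```python
import numpy as np
import networkx as nx

def shortest_path_vector(G, edges, costs, source, target):
    """0/1 incidence vector of a shortest source-target path under the given arc costs."""
    for (u, v), c in zip(edges, costs):
        G[u][v]["weight"] = float(c)
    path = nx.shortest_path(G, source, target, weight="weight")
    path_arcs = set(zip(path[:-1], path[1:]))
    return np.array([1.0 if e in path_arcs else 0.0 for e in edges])

def decision_regret(G, edges, c_pred, c_true, source, target):
    """Extra true cost incurred by deciding with predicted costs c_pred instead of c_true."""
    x_pred = shortest_path_vector(G, edges, c_pred, source, target)
    x_true = shortest_path_vector(G, edges, c_true, source, target)
    return float(c_true @ x_pred - c_true @ x_true)    # nonnegative by optimality of x_true
```

Decision-focused training then chooses the predictive model's parameters to make this regret small on average, rather than to make c_pred close to c_true.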
4 — 15:30 — Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization
We consider the class of two-stage stochastic programs with uncertainty only in the right-hand side. This class encompasses many practical problems, especially inventory models. We show that, under certain conditions, there exists an optimal scenario, in the sense that solving the problem with that single scenario yields the same optimal solution as the original problem. In a data-driven context, this result means that pointwise forecasts of the uncertain variables can be optimal, and we show that such an optimal forecast can, in principle, be obtained by a decision-focused learning approach.
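As a toy illustration of the optimal-scenario idea (not the construction from the talk), consider a newsvendor model, a two-stage program whose only uncertainty is the right-hand-side demand: ordering against the single demand scenario equal to the critical quantile reproduces the optimal stochastic decision.

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.gamma(shape=4.0, scale=25.0, size=100_000)   # demand scenarios (RHS only)
c, p = 1.0, 3.0                                           # unit order cost and selling price

def expected_cost(q):
    """Order q, then recourse: sell min(q, demand); expected net cost."""
    return np.mean(c * q - p * np.minimum(q, demand))

# Stochastic solution: the classical critical-quantile order quantity.
q_stochastic = np.quantile(demand, (p - c) / p)

# Grid search over order quantities confirms the quantile rule (up to grid error).
grid = np.linspace(demand.min(), demand.max(), 2000)
q_grid = grid[np.argmin([expected_cost(q) for q in grid])]

# "Optimal scenario": the deterministic newsvendor with demand fixed at d_star
# simply orders d_star, so this single scenario recovers the stochastic decision.
d_star = q_stochastic
q_deterministic = d_star

print(q_stochastic, q_grid, q_deterministic)
```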