1 — 14:00 — Multi-Stage Stochastic Programming for Integrated Hurricane Evacuation and Logistics Planning
We study an integrated hurricane relief logistics and evacuation planning problem, combining hurricane evacuation and relief item pre-positioning operations that are typically treated separately. We propose a fully adaptive multi-stage stochastic programming (MSSP) model and solution approaches based on two-stage stochastic programming (2SSP). Using historical forecast errors modeled with a first-order autoregressive (AR(1)) process, we generate hurricane scenarios and approximate the hurricane process as a Markov chain, where each Markovian state is characterized by the hurricane's location and intensity attributes. We conduct comprehensive numerical experiments based on case studies motivated by Hurricane Florence and Hurricane Ian. The computational results demonstrate the value of the fully adaptive policies given by the MSSP model over the static ones given by the 2SSP model in terms of out-of-sample performance. Through an extensive sensitivity analysis, we offer insights into how the value of fully adaptive policies relative to static ones varies with key problem parameters.
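The scenario-generation step described above — simulating AR(1) forecast errors, discretizing them into states, and estimating a Markov chain empirically — can be sketched as follows. This is a minimal illustration, not the talk's actual model: the coefficient, noise level, horizon, and bin grid are all invented, and the real states combine location and intensity attributes rather than a single scalar error.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) forecast-error model: e_t = rho * e_{t-1} + eps_t, eps_t ~ N(0, sigma^2).
# (rho, sigma, T, and the bin edges below are illustrative, not from the talk.)
rho, sigma, T, n_scenarios = 0.8, 1.0, 6, 10_000

errors = np.zeros((n_scenarios, T))
for t in range(1, T):
    errors[:, t] = rho * errors[:, t - 1] + rng.normal(0.0, sigma, n_scenarios)

# Discretize each stage's error into bins to define Markovian states,
# then estimate the transition probabilities empirically from the scenarios.
bins = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])
states = np.digitize(errors, bins) - 1          # state index per scenario, per stage
n_states = len(bins) - 1

P = np.zeros((n_states, n_states))
for t in range(T - 1):
    for s, s_next in zip(states[:, t], states[:, t + 1]):
        P[s, s_next] += 1
P /= P.sum(axis=1, keepdims=True)               # row-normalize: rows sum to 1
```

The resulting row-stochastic matrix `P` is the Markov-chain approximation of the underlying forecast-error process; in the actual application each state would additionally carry the hurricane's position and intensity.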
2 — 14:30 — Multistage Chance-Constrained Programming
In this talk, we study multistage chance-constrained programming. We consider two possibilities for the chance constraint: (i) enforced statically, only at the end of the planning horizon, and (ii) enforced dynamically over time. For both cases, we provide decomposition-based computational approaches. We present preliminary computational results on a stylized inventory problem. Moreover, we discuss the value of a multistage policy compared to a policy obtained from a two-stage multiperiod model.
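Schematically, the two enforcement modes can be contrasted as follows. The notation is illustrative rather than taken from the talk: $x_t$ is the stage-$t$ decision, $\xi_{[t]}$ the history of randomness through stage $t$, and $g_t$ a generic stage-$t$ constraint function.

```latex
% (i) Static: a single joint chance constraint over the whole horizon
\mathbb{P}\bigl( g_t(x_t, \xi_{[t]}) \le 0,\ \ t = 1, \dots, T \bigr) \ge 1 - \epsilon

% (ii) Dynamic: a conditional chance constraint enforced at every stage
\mathbb{P}\bigl( g_t(x_t, \xi_{[t]}) \le 0 \,\big|\, \xi_{[t-1]} \bigr) \ge 1 - \epsilon_t,
\qquad t = 1, \dots, T
```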
3 — 15:00 — Interpretable Vaccine Administration and Inventory Replenishment Policies via Smooth-in-Expectation Decision Rules
The effective management of vaccine vials and inventory is imperative for ensuring widespread immunization coverage. We aim to address the challenges associated with this problem, including the need for interpretable policies. We propose a Markov decision process model, which offers flexibility to accommodate various operational aspects such as patient queues, clinic early closure considerations, and the requirement to serve every patient until exhausting vaccine inventory. For developing interpretable policies, we employ smooth-in-expectation decision rules, a recently proposed approach for multistage stochastic programs with mixed-integer recourse decisions and a large number of stages (e.g., hundreds). Leveraging these decision rules, we formulate and optimize the vaccine administration policies via a problem-specific flowchart design, alongside vial ordering decisions, facilitating interpretability and adaptability in real-world healthcare settings. We implement a batch stochastic gradient descent algorithm to solve the optimization problem, initialized with the solution of a two-stage stochastic programming model that we propose as an approximation to the multistage problem. Through extensive numerical experiments, we demonstrate the efficiency of the proposed approach and highlight the efficacy of various policies, including those considering patient queues.
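The batch stochastic gradient descent idea can be illustrated on a toy problem. This is emphatically not the talk's flowchart decision rule: it is a single-parameter newsvendor-style stand-in (all cost and demand parameters invented) that shows why a cost that is only piecewise linear per sample path can still be smooth in expectation, so that averaged pathwise subgradients drive a plain SGD loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: order-up-to level theta for a newsvendor-style vial problem.
# Holding and shortage costs are illustrative, not from the talk.
holding, shortage = 1.0, 4.0

def batch_cost_grad(theta, batch_size=256):
    """Monte Carlo estimate of expected cost and its gradient at theta.

    Each sample cost h*(theta-D)^+ + p*(D-theta)^+ is piecewise linear,
    but the expectation over Poisson demand D is smooth in theta, so the
    batch-averaged pathwise subgradient is a valid stochastic gradient.
    """
    d = rng.poisson(20.0, batch_size)
    cost = holding * np.maximum(theta - d, 0) + shortage * np.maximum(d - theta, 0)
    grad = np.where(theta >= d, holding, -shortage)   # pathwise subgradient
    return cost.mean(), grad.mean()

theta, lr = 10.0, 0.5
for _ in range(500):
    _, g = batch_cost_grad(theta)
    theta -= lr * g

# theta should approach the newsvendor optimum: the p/(p+h) = 0.8 quantile
# of Poisson(20) demand, roughly 23-24.
```

In the talk's setting, the same batch-SGD loop would update the parameters of the flowchart-structured decision rules rather than a single scalar.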
4 — 15:30 — Solving Multistage Equilibrium Problems: The Krusell-Smith Model
In this study, we consider multistage problems involving multiple agents, commonly recognized as stochastic dynamic games. Tackling such problems poses a formidable challenge, particularly in real-world scenarios where many agents are at play. We present a general formulation of the problem and focus on an incomplete-market, heterogeneous-agent model with aggregate uncertainty, known as the Krusell-Smith model. We first show that the collusive solution differs from the Nash equilibrium in a simple three-period case.
We begin our numerical exploration by discussing various strategies traditionally employed to derive equilibrium solutions for this complex problem, starting with the moment approach initially proposed by Krusell and Smith in 1998.
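For reference, the moment approach summarizes the cross-sectional distribution of capital by its first moment $K$ and posits a log-linear forecasting rule whose coefficients depend on the aggregate shock $z$, in the standard Krusell-Smith (1998) form:

```latex
\log K' = a_0(z) + a_1(z) \log K
```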
We then introduce a novel strategy that distinguishes itself by not assuming a predefined functional form for the agents' capital dynamics. Instead, our approach constructs a Markov chain at each iteration, based on the states visited.
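A schematic version of this visited-states construction is sketched below. It is a simplification under stated assumptions: the aggregate state is collapsed to a single scalar path (generated here by an invented AR(1)-style law of motion purely to have data), whereas the actual algorithm would record states visited during the simulation of the heterogeneous-agent economy at each iteration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the simulated path of the aggregate state (e.g., mean capital);
# the coefficients are illustrative, not from the model.
T = 5_000
k = np.empty(T)
k[0] = 1.0
for t in range(T - 1):
    k[t + 1] = 0.2 + 0.8 * k[t] + 0.05 * rng.standard_normal()

# Grid only the *visited* range (no predefined functional form): use empirical
# quantiles of the path as cell edges, then count transitions between cells.
n_cells = 8
edges = np.quantile(k, np.linspace(0.0, 1.0, n_cells + 1))
cells = np.clip(np.searchsorted(edges, k, side="right") - 1, 0, n_cells - 1)

P = np.zeros((n_cells, n_cells))
for s, s_next in zip(cells[:-1], cells[1:]):
    P[s, s_next] += 1
P /= P.sum(axis=1, keepdims=True)   # empirical transition matrix over visited cells
```

Because the grid is rebuilt from the visited states, the chain adapts to wherever the simulated economy actually travels, rather than to an a priori parametric law of motion.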
Furthermore, we conduct a comparative analysis of our solutions against those obtained through conventional approaches, providing insights into the relative strengths and weaknesses of each methodology.