1. 14:00 — Robust Optimization Under Controllable Uncertainty

Practical applications of optimization with uncertain data often offer the possibility of reducing the uncertainty at a given query cost, e.g., by conducting measurements or surveys, or by paying a third party in advance to limit the deviations. To model this type of application, we introduce the concept of optimization problems under controllable uncertainty (OCU). For an OCU, we assume the uncertain cost parameters to lie in bounded, closed intervals. The optimizer can shrink each of these intervals around a certain value called the hedging point, possibly reducing it to a single point. Depending on whether the hedging points are known in advance or not, different types of OCU arise. Moreover, the models may differ with respect to when the narrowing down, the underlying optimization, and/or the revelation of the true data take place.
In the talk, we discuss two example problem settings in more detail, one with known and one with unknown hedging points, where we handle the remaining uncertainty by the paradigm of robust optimization. For both cases, we give conditions under which a single-level reformulation is possible. Throughout the talk, we use shortest-path problems as the underlying optimization problem to illustrate the specifics of, and phenomena arising in, OCU.
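To make the OCU idea concrete, the following is a minimal sketch, not taken from the talk: a toy robust shortest-path instance with interval edge costs and known hedging points, where paying an edge's query cost fixes that edge at its hedging point and all remaining edges take their worst-case upper bound. All instance data and the brute-force query selection are illustrative assumptions.

```python
from itertools import combinations

# Hypothetical OCU instance: each edge cost lies in [lo, hi]; paying the
# query cost shrinks the interval to the (known) hedging point.
edges = {  # (u, v): (lo, hi, hedging_point, query_cost)
    ("s", "a"): (1.0, 4.0, 2.0, 0.5),
    ("a", "t"): (1.0, 5.0, 3.0, 0.5),
    ("s", "t"): (3.0, 6.0, 6.0, 0.5),
}
paths = [[("s", "a"), ("a", "t")], [("s", "t")]]
budget = 1.0  # total query budget

def robust_cost(path, queried):
    # Worst case: un-queried edges take their upper bound,
    # queried edges are fixed at their hedging point.
    return sum(edges[e][2] if e in queried else edges[e][1] for e in path)

best = None
for k in range(len(edges) + 1):
    for queried in combinations(edges, k):
        if sum(edges[e][3] for e in queried) > budget:
            continue  # query selection exceeds the budget
        cost = min(robust_cost(p, set(queried)) for p in paths)
        if best is None or cost < best[0]:
            best = (cost, queried)

print(best)  # → (5.0, (('s', 'a'), ('a', 't')))
```

Without any queries the robust optimum is the direct edge (worst case 6.0); spending the budget on both edges of the two-hop path lowers the guaranteed cost to 5.0, which is the kind of query-versus-robustness trade-off the OCU model captures.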

2. 14:30 — Clustering-based scenario reduction for distributionally robust optimization with approximation guarantees

Stochastic and (distributionally) robust optimization problems often become computationally challenging as the number of scenarios increases. Scenario reduction is therefore a key technique for reducing the size of uncertain optimization programs. We introduce a dimension reduction method for distributionally robust optimization based on clustering the scenario set. We prove quantifiable approximation bounds by appropriately projecting the original ambiguity set onto the reduced set of scenarios. Our methodology applies to both continuous and discrete random variables. Numerical experiments on mixed-integer benchmark instances show significant reductions in solution time at a small approximation error.
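The clustering step can be illustrated with a small sketch, which is not the talk's method and omits the projection of the ambiguity set: one-dimensional scenarios with probabilities are grouped by a simple k-means-style iteration, and each cluster is replaced by its probability-weighted centroid carrying the cluster's aggregated probability mass. All data and the assumption k >= 2 are illustrative.

```python
def reduce_scenarios(scenarios, probs, k, iters=25):
    """Reduce weighted 1-D scenarios to k representatives (sketch, k >= 2)."""
    srt = sorted(scenarios)
    # Initialize centers with evenly spaced order statistics.
    centers = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s, p in zip(scenarios, probs):
            j = min(range(k), key=lambda j: abs(s - centers[j]))
            groups[j].append((s, p))
        for j, g in enumerate(groups):
            mass = sum(p for _, p in g)
            if mass > 0:  # keep an empty cluster's center unchanged
                centers[j] = sum(s * p for s, p in g) / mass
    masses = [sum(p for _, p in g) for g in groups]
    return centers, masses

# Two well-separated groups of equally likely cost realizations.
scenarios = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
probs = [1 / 6] * 6
centers, masses = reduce_scenarios(scenarios, probs, k=2)
print(centers, masses)
```

The reduced problem then ranges over the k representatives instead of all scenarios; the talk's contribution is the guarantee that, with the ambiguity set projected accordingly, the induced approximation error is quantifiably bounded.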

3. 15:00 — ** CANCELLED ** Distributionally Robust Optimization Approaches for Resilience and Low Carbon Enhancement of Distribution Power Systems

Resilience and low-carbon considerations are key factors in assessing the performance of distribution power systems. In this study, we investigate the integration of microgrids into the main grid; microgrids can effectively handle power-disrupting incidents through their islanding capabilities and facilitate the integration of renewable energy sources. Our focus lies in defining resilience and proposing a multi-stage distributionally robust optimization model that encompasses the stages of preparation, response, and restoration. Additionally, we extend the model to include the cost of carbon emissions in the objective. We formulate the model as a mixed-integer linear program, ensuring that the chance constraints hold under the worst-case distribution within a novel ambiguity set that combines the Wasserstein distance with first-order moment information. Numerical experiments on the IEEE 34-bus and 118-bus test systems demonstrate the significant improvements achieved by incorporating microgrids.
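As background for the Wasserstein part of such an ambiguity set (the talk's set additionally uses first-order moment information, which this sketch omits): for an L-Lipschitz loss, the worst-case expectation over a type-1 Wasserstein ball of radius eps around the empirical distribution is bounded by the empirical mean plus L times eps, a standard DRO result. The instance data below are purely illustrative.

```python
def wasserstein_dro_bound(samples, loss, lipschitz, eps):
    # Upper bound on sup_{Q: W1(Q, P_hat) <= eps} E_Q[loss] for an
    # L-Lipschitz loss: empirical mean of the loss plus L * eps.
    empirical_mean = sum(loss(x) for x in samples) / len(samples)
    return empirical_mean + lipschitz * eps

# Toy demand samples and a linear (hence 2-Lipschitz) cost.
demand = [10.0, 12.0, 9.0, 11.0]
bound = wasserstein_dro_bound(demand, lambda d: 2.0 * d, lipschitz=2.0, eps=0.5)
print(bound)  # → 22.0 for this toy instance
```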

4. 15:30 — Streamlining Emergency Response: A K-Adaptable Model and a Column-and-Constraint-Generation Algorithm

Emergency response refers to the systematic response to an unexpected, disruptive occurrence such as a natural disaster. The response aims to mitigate the consequences of the occurrence by providing the affected region with the necessary supplies. A critical factor for a successful response is its timely execution, but the unpredictable nature of disasters often prevents quick reactionary measures. Preallocating the supplies before the disaster takes place allows for a faster response but requires more overall resources, because the time and place of the disaster are not yet known. This gives rise to a trade-off between how quickly a response plan is executed and how precisely it targets the affected areas. To capture the dynamics of this trade-off, we develop a K-adaptable robust model, which allows a maximum of K second-stage decisions, i.e., response plans. This mitigates tractability issues and allows the decision-maker to seamlessly navigate between the readiness of a proactive yet rigid response and the accuracy of a reactive yet highly adjustable one. We consider three approaches to solve the K-adaptable model: approximately, via a partition-and-bound method, and exactly, via both a branch-and-bound method and a static robust reformulation combined with a column-and-constraint generation algorithm. In a computational study, we compare and contrast the different solution approaches and assess their potential.
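The K-adaptability idea behind the model can be sketched on a toy instance (hypothetical numbers, brute-force enumeration rather than the talk's algorithms): commit to K response plans up front; for each disaster scenario the best of the K plans is executed, and the worst-case scenario cost is minimized.

```python
from itertools import combinations

# cost[p][s]: cost of executing response plan p under scenario s
# (illustrative values; small costs mean the plan targets that scenario well).
cost = [
    [4, 9],   # plan 0: tailored to scenario 0
    [8, 3],   # plan 1: tailored to scenario 1
    [6, 6],   # plan 2: a balanced, rigid plan
]

def k_adaptable(cost, K):
    # Enumerate all K-subsets of plans; the adversary picks the worst
    # scenario, and within each scenario the cheapest chosen plan is used.
    best = None
    for plans in combinations(range(len(cost)), K):
        worst = max(min(cost[p][s] for p in plans)
                    for s in range(len(cost[0])))
        if best is None or worst < best[0]:
            best = (worst, plans)
    return best

print(k_adaptable(cost, 1))  # → (6, (2,)): one plan must hedge both scenarios
print(k_adaptable(cost, 2))  # → (4, (0, 1)): two plans cover each scenario
```

Raising K from 1 to 2 lowers the guaranteed cost from 6 to 4, illustrating the proactive-versus-adjustable trade-off the abstract describes; the partition-and-bound, branch-and-bound, and column-and-constraint generation approaches replace this enumeration at scale.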