1. 14:00 — Budget-Driven Multi-Period Hub Location: A Robust Time Series Approach

  • Jie Hu, University of California San Diego
  • Zhi Chen, The Chinese University of Hong Kong
  • Shuming Wang, University of Chinese Academy of Sciences

We study the multi-period hub location problem with uncertain periodic demands. In particular, we construct a nested ambiguity set that characterizes the uncertain periodic demands via a general multivariate time series model, and, to ensure stable periodic cost flows, we propose constraining each expected periodic cost within a budget while maximizing the robustness level (i.e., the size) of the ambiguity set. Statistically, the nested ambiguity set ensures that the model's solution enjoys finite-sample performance guarantees under certain regularity conditions on the underlying process of the stochastic demand. Computationally, the uncapacitated model can be solved efficiently via a bisection search algorithm. Numerical experiments demonstrate the attractiveness and competitiveness of our proposed modeling and solution approaches.
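The key computational observation is that the worst-case expected cost in each period is nondecreasing in the ambiguity set's radius, so the largest budget-feasible radius can be located by bisection. Below is a minimal Python sketch of that search; the monotone cost oracle, its data, and the function names are all hypothetical stand-ins for the actual hub-location subproblem.

```python
import numpy as np

def worst_case_cost(theta, period):
    """Placeholder oracle: worst-case expected cost of period `period`
    under an ambiguity set of radius `theta`.  Assumed nondecreasing in
    theta; in the paper this would be a hub-location subproblem."""
    base = np.array([80.0, 95.0, 70.0])[period]
    return base * (1.0 + theta)  # toy monotone surrogate

def max_robustness_level(budgets, theta_hi=10.0, tol=1e-6):
    """Bisection on the ambiguity-set radius theta: find the largest
    theta for which every period's worst-case expected cost stays
    within its budget.  Relies only on monotonicity in theta."""
    feasible = lambda th: all(
        worst_case_cost(th, t) <= b for t, b in enumerate(budgets)
    )
    lo, hi = 0.0, theta_hi
    if not feasible(lo):
        raise ValueError("budgets infeasible even with a singleton ambiguity set")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

print(max_robustness_level(budgets=[100.0, 120.0, 90.0]))  # -> 0.25
```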

2. 14:30 — Network Flow Models for Robust Binary Optimization with Selective Adaptability

Adaptive robust optimization problems have received significant attention in recent years, but remain notoriously difficult to solve when recourse decisions are discrete. In this paper, we propose new reformulation techniques for adaptive robust binary optimization problems (ARBO) with objective uncertainty. Our main contribution is a collection of exact and approximate network flow reformulations for the ARBO problem, which we develop by building upon ideas from the decision diagram literature.
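As a toy illustration of the flow-over-decision-diagram idea (not the paper's reformulations), consider a robust knapsack with box objective uncertainty: the inner worst case is then separable across arcs, so the robust problem reduces to a longest-path computation over the layered decision diagram whose nodes track (item index, remaining capacity). All data below are hypothetical.

```python
from functools import lru_cache

# Robust knapsack with profit uncertainty in a box [lo, hi]:
# maximize the worst-case profit  max_x min_{p in [lo, hi]} p @ x
# subject to w @ x <= C.  With box uncertainty the inner minimum is
# attained at lo for every selected item, so each "take item i" arc
# carries cost lo[i] and the problem is a longest path over the
# decision diagram with states (item index, remaining capacity).

w  = [3, 4, 2, 5]          # weights (hypothetical data)
lo = [4.0, 5.0, 1.5, 6.0]  # pessimistic profits
C  = 7

@lru_cache(maxsize=None)
def longest_path(i, cap):
    """Best worst-case completion value from diagram node (i, cap)."""
    if i == len(w):
        return 0.0
    skip = longest_path(i + 1, cap)        # x_i = 0 arc, cost 0
    if w[i] <= cap:                        # x_i = 1 arc, cost lo[i]
        return max(skip, lo[i] + longest_path(i + 1, cap - w[i]))
    return skip

print(longest_path(0, C))  # -> 9.0 (items 0 and 1)
```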

3. 15:00 — Robust Optimization with Moment-Dispersion Ambiguity

  • Li Chen, The University of Sydney
  • Chenyi Fu, Northwestern Polytechnical University
  • Fan Si, National University of Singapore
  • Melvyn Sim, National University of Singapore
  • Peng Xiong, National University of Singapore

Robust optimization presents a compelling methodology for optimization under uncertainty, providing a practical, ambiguity-averse evaluation of risk when the probability distribution is encapsulated by an ambiguity set. We introduce the moment-dispersion ambiguity set, an improvement on the moment-based set that enables separate characterization of a random variable's central location, dispersion, and support. To describe dispersion, we define the dispersion characteristic function, which captures complex attributes such as sub-Gaussian and asymmetric dispersion, and its associated dispersion characteristic set, which serves as the input format for representing dispersion ambiguity in algebraic modeling tools. We devise a process for constructing and integrating ambiguity sets, showcasing their modeling flexibility. In particular, we introduce the independence propensity hyper-parameter to foster the creation of joint ambiguity sets for multiple random variables, enhancing our model's real-world applicability and facilitating the characterization of varying degrees of inter-dependence without requiring a correlation matrix. For ambiguous risk assessment over moment-dispersion ambiguity sets, we develop safe tractable approximations for assessing entropic risks associated with affine and convex piecewise affine cost functions, accommodating varying risk tolerances. Lastly, we substantiate our approach with two numerical case studies involving appointment scheduling and portfolio optimization. By adjusting the independence propensity hyper-parameter, we illustrate how our model can deliver robust yet less conservative solutions than existing moment-based robust optimization models and sample-based approaches, which is particularly useful when only marginal information is accessible to the decision-maker.
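To make the "location + dispersion + support" flavor concrete, here is a crude discretized sketch (not the paper's tractable reformulation): the worst-case expectation of a convex piecewise-affine cost is computed by a linear program over probability masses on a grid, with a fixed mean playing the role of central location and a mean-absolute-deviation cap playing the role of dispersion. The support interval, mean, and dispersion bound are assumed values chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Worst-case expectation of the piecewise-affine cost f(x) = max(x - 1, 0)
# over distributions on a grid of [-2, 3] with fixed mean mu and mean
# absolute deviation at most d -- a discretized stand-in for a
# "location + dispersion + support" ambiguity set.

xs = np.linspace(-2.0, 3.0, 501)           # discretized support
f  = np.maximum(xs - 1.0, 0.0)             # convex piecewise-affine cost
mu, d = 0.0, 0.8                           # assumed location / dispersion

# Decision variable: probability mass p over the grid.
# Maximizing f @ p is minimizing (-f) @ p.
A_eq = np.vstack([np.ones_like(xs), xs])   # sum p = 1,  E[X] = mu
b_eq = np.array([1.0, mu])
A_ub = np.abs(xs - mu)[None, :]            # E|X - mu| <= d
b_ub = np.array([d])

res = linprog(-f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0.0, None), method="highs")
print("worst-case E[f(X)] ~", -res.fun)    # ~ 4/15 here
```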

4. 15:30 — Wasserstein Regularization for 0-1 Loss

  • Zhen Yang, The University of Texas at Austin
  • Rui Gao, The University of Texas at Austin

Wasserstein distributionally robust optimization (DRO) seeks robust solutions by safeguarding against data variations within a specified Wasserstein ball. While its regularization effects and generalization capabilities have been extensively explored for continuous losses, directly applying these insights to the 0-1 loss presents significant challenges. This study investigates the issue within the context of linear classification, establishing a connection between Wasserstein DRO with the 0-1 loss and a novel regularization framework in which the regularization term is a polynomial function of the Wasserstein ball's radius and the data density near the decision boundary. We propose a radius selection rule that guarantees finite-sample performance. Our findings highlight a fundamental distinction in radius selection between 0-1 and continuous losses: a smaller radius, contrary to the conventional root-n rule, is often sufficient in practical scenarios. Numerical experiments validate the effectiveness of our proposed radius selection rule.
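For intuition on why the 0-1 loss behaves differently, note that under a type-1 Wasserstein ball with Euclidean ground cost (assuming, as in this sketch, that only features are perturbed and labels are held fixed), the adversary flips a correctly classified point by transporting it across the decision boundary at a cost equal to its geometric margin, so the worst case is a greedy sweep over the smallest margins. The following Python sketch computes that worst-case 0-1 loss on synthetic data; the setup and data are hypothetical.

```python
import numpy as np

def robust_01_loss(X, y, w, b, eps):
    """Worst-case 0-1 loss of the linear classifier sign(w @ x + b) over
    a type-1 Wasserstein ball of radius eps around the empirical
    distribution (features perturbed, labels fixed).  Flipping point i
    moves it by its geometric margin at transport cost margin_i / n, so
    the adversary greedily flips the cheapest points first, possibly
    flipping a fraction of the last one."""
    n = len(y)
    scores = y * (X @ w + b)
    wrong = np.sum(scores <= 0)                     # already misclassified
    margins = np.sort(scores[scores > 0] / np.linalg.norm(w))
    cum = np.cumsum(margins)
    budget = eps * n
    flips = np.searchsorted(cum, budget, side="right")
    frac = 0.0
    if flips < len(margins):                        # partial flip of next point
        spent = cum[flips - 1] if flips > 0 else 0.0
        frac = (budget - spent) / margins[flips]
    return (wrong + flips + frac) / n

# toy usage with hypothetical synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
w, b = np.array([1.0, 0.0]), 0.0
for eps in (0.0, 0.05, 0.2):
    print(eps, robust_01_loss(X, y, w, b, eps))
```

The sweep makes the paper's point visible: the robust loss grows with the mass of data sitting within distance eps of the boundary, which is why the radius should track the local density there rather than a generic root-n schedule.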