1. 14:00 — Robust End-to-End Learning under Endogenous Uncertainty

The decisions one takes can often affect the outcomes observed. How can we learn to make good decisions when no ground-truth counterfactual observations are available? We propose a robust end-to-end learning approach to the contextual stochastic optimization problem under decision-dependent uncertainty. We tackle this problem by constructing uncertainty sets over the space of ML models and present efficient algorithms to solve the resulting inherently non-convex optimization problems. We computationally test the proposed approach on multi-item pricing and assortment problems in which demand is affected by cross-item complementary and substitution effects, and show that it outperforms traditional methods by more than 20%.
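
To make the construction concrete, the following is a minimal numerical sketch of the robust idea for a single-item pricing problem: an uncertainty set over the parameters of a hypothetical linear demand model is sampled, and a price is chosen against the worst model in the set. The demand model, the set radius, and the grid search are illustrative assumptions, not the talk's formulation.

    # A minimal numerical sketch (illustrative, not the talk's method) of robust
    # pricing against an uncertainty set over demand-model parameters. The linear
    # demand model, the set radius, and the grid search are all assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def demand(price, theta):
        # Hypothetical linear price-response model: demand = a - b * price.
        a, b = theta
        return np.maximum(a - b * price, 0.0)

    # Nominal parameters, assumed to have been fitted from historical data.
    theta_hat = np.array([100.0, 4.0])

    # Uncertainty set over models: a finite sample of parameters drawn from a
    # box around the nominal fit (the radius is an illustrative choice).
    radius = np.array([10.0, 1.0])
    model_set = theta_hat + radius * rng.uniform(-1.0, 1.0, size=(50, 2))

    def worst_case_revenue(price):
        # Inner problem: the least favourable revenue over the model set.
        return min(price * demand(price, th) for th in model_set)

    # Outer problem: a price maximizing worst-case revenue; grid search stands
    # in for the non-convex algorithms described in the talk.
    prices = np.linspace(1.0, 25.0, 200)
    best_price = max(prices, key=worst_case_revenue)
    print(f"robust price: {best_price:.2f}")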

2. 14:30 — Robust Actionable Prescriptive Analytics

We propose a new robust actionable prescriptive analytics framework that leverages past data and side information to minimize a risk-based objective function under distributional ambiguity. Our framework aims to find a policy that directly transforms the side information into implementable decisions.
Specifically, we focus on developing actionable response policies that offer the benefits of interpretability and implementability. To address the potential issue of overfitting to empirical data, we adopt a data-driven robust satisficing approach that effectively handles uncertainty. We tackle the computational challenge for linear optimization models with recourse by developing a new tractable safe approximation for robust constraints, accommodating bilinear uncertainty and general norm-based uncertainty sets. Additionally, we introduce a biaffine recourse adaptation to enhance the quality of the approximation. Furthermore, we present a localized robust satisficing model that efficiently solves combinatorial optimization problems with tree-based static policies. Finally, we demonstrate the practical application of our framework through two simulation case studies: risk-minimizing portfolio optimization using past returns as side information, and an interpretable policy for allocating taxis to different demand regions in response to weather information.
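
As a rough illustration of what an actionable policy can look like in the portfolio setting, the sketch below maps side information to long-only portfolio weights through a linear score and a softmax, and fits it on synthetic past data. The L1 penalty on the policy matrix is a simple overfitting hedge standing in for the robust satisficing machinery of the talk; all data and parameter choices are hypothetical.

    # An illustrative sketch (not the paper's formulation) of an actionable
    # policy for portfolio allocation: side information x is mapped to long-only
    # weights via a linear score and a softmax. The L1 penalty on the policy
    # matrix is a simple overfitting hedge standing in for the robust
    # satisficing machinery; all data below are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_assets, n_features, n_obs = 5, 3, 500

    # Synthetic past data: side information and subsequent asset returns.
    X = rng.normal(size=(n_obs, n_features))
    beta = rng.normal(size=(n_features, n_assets))
    returns = 0.01 * X @ beta + 0.02 * rng.normal(size=(n_obs, n_assets))

    def policy(W, x):
        # Actionable policy: softmax of a linear score gives portfolio weights.
        s = x @ W
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def objective(W, lam=0.1):
        # Empirical mean negative return plus a norm penalty (robustness proxy).
        weights = policy(W, X)
        portfolio_returns = (weights * returns).sum(axis=1)
        return -portfolio_returns.mean() + lam * np.abs(W).sum()

    # Crude random search in place of the tractable reformulations in the talk.
    best_W, best_val = None, np.inf
    for _ in range(2000):
        W = rng.normal(scale=0.5, size=(n_features, n_assets))
        val = objective(W)
        if val < best_val:
            best_W, best_val = W, val
    print(f"in-sample objective: {best_val:.4f}")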

3. 15:00 — Integrated Conditional Estimation-Optimization

Many real-world optimization problems involve uncertain parameters whose probability distributions can be estimated using contextual feature information. In contrast to the standard approach of first estimating the distribution of the uncertain parameters and then optimizing the objective based on that estimate, we propose an integrated conditional estimation-optimization (ICEO) framework that estimates the underlying conditional distribution of the random parameter while accounting for the structure of the optimization problem. We directly model the relationship between the conditional distribution of the random parameter and the contextual features, and then estimate the probabilistic model with an objective that aligns with the downstream optimization problem. We show that our ICEO approach is asymptotically consistent under moderate regularity conditions and further provide finite-sample performance guarantees in the form of generalization bounds. Computationally, performing estimation with the ICEO approach is a non-convex and often non-differentiable optimization problem. We propose a general methodology for approximating the potentially non-differentiable mapping from the estimated conditional distribution to the optimal decision by a differentiable function, which greatly improves the performance of gradient-based algorithms applied to the non-convex problem. We also provide a polynomial optimization solution approach for the semi-algebraic case. Numerical experiments demonstrate the empirical success of our approach in a range of situations, including limited data samples and model mismatch.
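
The smoothing idea can be illustrated on a toy newsvendor problem with a finite candidate decision set: the non-differentiable argmin of expected cost under the estimated conditional distribution is replaced by a softmin-weighted average of candidate decisions, which is differentiable in the distribution. The decision set, cost parameters, and temperature below are assumptions for illustration only, not the paper's construction.

    # A toy sketch (assumptions only, not the paper's implementation) of the
    # smoothing idea on a newsvendor problem with a finite decision set: the
    # non-differentiable argmin of expected cost is replaced by a softmin-
    # weighted average of candidate decisions.
    import numpy as np

    candidates = np.linspace(0.0, 20.0, 41)   # finite decision set (assumed)
    scenarios = np.linspace(0.0, 20.0, 41)    # support of the demand (assumed)

    def newsvendor_cost(z, d, c_over=1.0, c_under=3.0):
        # Overage plus underage cost for order quantity z and demand d.
        return c_over * np.maximum(z - d, 0.0) + c_under * np.maximum(d - z, 0.0)

    COST = newsvendor_cost(candidates[:, None], scenarios[None, :])  # |Z| x |D|

    def soft_decision(p, temperature=0.5):
        # Differentiable surrogate of argmin_z E_p[cost(z, D)].
        expected_cost = COST @ p
        w = np.exp(-(expected_cost - expected_cost.min()) / temperature)
        w /= w.sum()
        return w @ candidates

    # Example: an estimated conditional distribution concentrated around 12.
    p = np.exp(-0.5 * ((scenarios - 12.0) / 2.0) ** 2)
    p /= p.sum()
    print(f"smoothed decision:    {soft_decision(p):.2f}")
    print(f"hard argmin decision: {candidates[np.argmin(COST @ p)]:.2f}")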

4. 15:30 — Conformal Contextual Optimization with a Smart Predict-then-Optimize Method

In contextual optimization, a decision-maker uses a machine learning (ML) model to predict uncertain parameters of a downstream optimization model based on contextual features. We study an extension of contextual stochastic linear optimization (CSLO) that, in contrast to most of the existing literature, involves inequality constraints depending on the uncertain parameters predicted by the ML model. Building on previous work that develops the “Smart Predict-then-Optimize (SPO)” loss and its tractable SPO+ surrogate loss for the case of a known deterministic feasible region, and on work that develops robust variants of contextual optimization using conformal prediction methods, we propose a “Conformal Smart Predict-then-Optimize (CSPO)” approach for addressing uncertainty in the constraints. Specifically, we first propose the CSPO loss – a direct extension of the SPO loss – which measures the decision error, or regret, induced by following a robust predict-then-optimize approach that uses a conformal prediction method to produce an uncertainty set. We then propose a convex surrogate, the CSPO+ loss – a direct extension of the SPO+ loss – to tractably train a prediction model in our CSLO setting. To train the model effectively, we employ a data filtering procedure to address infeasibility whenever the predicted uncertainty set fails to cover the true parameter values, and we use importance sampling to correct for the distribution shift induced by the filtering step. Theoretically, we establish statistical consistency of the CSPO+ loss relative to the CSPO loss and provide finite-sample convergence guarantees for the CSPO loss with importance sampling under mild assumptions. Experimentally, we demonstrate strong performance of the CSPO+ loss on several CSLO problem classes.
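
A minimal sketch of the conformal ingredients, under assumed models and synthetic data: a split-conformal radius is computed from calibration residuals, each prediction is inflated into a ball-shaped uncertainty set, and training points whose true parameter falls outside their set are filtered out. The prediction model and the filtering rule are illustrative assumptions, and the importance-sampling correction described in the talk is omitted here.

    # A minimal sketch (illustrative assumptions, not the CSPO implementation)
    # of the conformal ingredients: a split-conformal radius from calibration
    # residuals, a ball-shaped uncertainty set around each prediction, and
    # filtering of training points whose true parameter is not covered. The
    # importance-sampling correction is omitted.
    import numpy as np

    rng = np.random.default_rng(2)
    n_cal, n_train, dim = 200, 300, 4

    def predict(x):
        # Hypothetical base prediction model for the uncertain parameter.
        return 2.0 * x

    # Calibration data: features and realized parameters.
    x_cal = rng.normal(size=(n_cal, dim))
    c_cal = predict(x_cal) + 0.3 * rng.normal(size=(n_cal, dim))

    # Split-conformal radius: the (1 - alpha) quantile of residual norms.
    alpha = 0.1
    residuals = np.linalg.norm(c_cal - predict(x_cal), axis=1)
    k = int(np.ceil((n_cal + 1) * (1 - alpha))) - 1
    radius = np.sort(residuals)[min(k, n_cal - 1)]

    # Filter training points not covered by their predicted uncertainty set.
    x_train = rng.normal(size=(n_train, dim))
    c_train = predict(x_train) + 0.3 * rng.normal(size=(n_train, dim))
    covered = np.linalg.norm(c_train - predict(x_train), axis=1) <= radius
    print(f"conformal radius: {radius:.3f}; kept {covered.sum()}/{n_train} points")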