1 — 14:00 — Service Oriented Considerate Routing: Data, Predictions and Robust Decisions
In this research, we improve service-oriented routing by addressing the nuanced challenge of punctuality: couriers' ability to ensure on-time deliveries. We analyze a comprehensive real-world dataset from a cold-chain logistics firm. Our empirical investigation indicates that travel distance alone is inadequate for accurate delivery-time prediction; couriers' fixed effects and workload emerge as key covariates that improve prediction performance. Distinguishing our work from the existing literature, we integrate couriers' workload and location familiarity into our Service Oriented Routing (SOR) model to enhance delivery-time predictions. We introduce the Courier Assigned Location Mismatch (CALM) metric as a less intrusive way to incorporate couriers' location familiarity into their delivery efficiency. We then propose the novel Service Oriented Considerate Routing (SOCR) model: by minimizing the CALM metric, couriers are assigned routes within familiar territories to the extent possible under the total routing-distance constraint. Such considerate routing strategies could reduce the stress couriers face when delivering in unfamiliar areas. Additionally, we connect the SOCR model to a robust satisficing approach, which safeguards timely deliveries by mitigating the effects of predictive inaccuracies and potential model misspecification. To solve the SOCR model, we apply Benders decomposition for exact solutions and Tabu Search as a heuristic, demonstrating their effectiveness and superior out-of-sample performance. Notably, our heuristic solutions significantly outperform exact solutions of classical vehicle routing problems with deadlines, resulting in substantial improvements in timely-delivery performance.
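To make the heuristic idea concrete, here is a minimal Tabu Search sketch for the assignment flavor of the problem described above. It is not the authors' SOCR formulation: the function name, the flat courier-to-stop assignment (routing order is ignored), and the familiarity penalty standing in for the CALM metric are all simplifying assumptions for illustration.

```python
import random

def tabu_search_assignment(dist, familiarity, lam=1.0, iters=200, tabu_len=10, seed=0):
    """Assign each stop to a courier, minimizing travel distance plus a
    CALM-style penalty for stops outside a courier's familiar territory.
    dist[c][s]: travel distance if courier c serves stop s.
    familiarity[c][s]: 1 if stop s lies in courier c's familiar area, else 0.
    lam: weight trading off distance against the familiarity mismatch.
    """
    rng = random.Random(seed)
    n_couriers, n_stops = len(dist), len(dist[0])

    def cost(assign):
        return sum(dist[assign[s]][s] + lam * (1 - familiarity[assign[s]][s])
                   for s in range(n_stops))

    assign = [rng.randrange(n_couriers) for _ in range(n_stops)]
    best, best_cost = assign[:], cost(assign)
    tabu = []  # recently reversed (stop, courier) moves, temporarily forbidden
    for _ in range(iters):
        candidates = []
        for s in range(n_stops):
            for c in range(n_couriers):
                if c != assign[s] and (s, c) not in tabu:
                    trial = assign[:]
                    trial[s] = c
                    candidates.append((cost(trial), s, c))
        if not candidates:
            break
        c_cost, s, c = min(candidates)      # best non-tabu move, even if worsening
        tabu.append((s, assign[s]))         # forbid undoing this move right away
        if len(tabu) > tabu_len:
            tabu.pop(0)
        assign[s] = c
        if c_cost < best_cost:
            best, best_cost = assign[:], c_cost
    return best, best_cost
```

Accepting the best non-tabu move even when it worsens the objective is what lets the search escape local minima; the tabu list prevents it from immediately cycling back.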
2 — 14:30 — Directional Gradients and Surrogate Losses in the Predict-then-Optimize Framework
We propose a novel family of decision-aware surrogate losses, based on directional gradients, for the predict-then-optimize framework. These losses directly approximate the downstream decision loss and can be optimized using off-the-shelf gradient-based methods. Importantly, unlike existing surrogate losses, the approximation error of our PG losses vanishes as the number of samples grows, so optimizing our surrogate loss yields a best-in-class policy asymptotically, even in misspecified settings. This is the first such result for misspecified settings, and we provide numerical evidence that our PG losses substantively outperform existing proposals when the underlying model is misspecified and the noise is not centrally symmetric. Insofar as misspecification is commonplace in practice, especially when we might prefer a simpler, more interpretable model, PG losses offer a novel, theoretically justified method for computationally tractable decision-aware learning.
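The core difficulty the abstract addresses is that the downstream decision loss is piecewise constant in the model parameters, so its ordinary gradient is zero almost everywhere. A minimal sketch of the directional-gradient idea, using central finite differences with a perturbation step to smooth over the discontinuities: the `decision` oracle, the data layout, and the step size `h` are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def decision(c_hat):
    # Linear optimization oracle over unit-vector actions:
    # pick the single action with the lowest predicted cost.
    x = np.zeros_like(c_hat)
    x[np.argmin(c_hat)] = 1.0
    return x

def decision_loss(theta, Zs, cs):
    # Average realized cost of acting on predictions c_hat = Z @ theta.
    return float(np.mean([c @ decision(Z @ theta) for Z, c in zip(Zs, cs)]))

def pg_gradient(theta, Zs, cs, h=0.25):
    """Central finite differences of the (piecewise-constant) decision loss.
    The perturbation h plays the role of the directional-gradient step:
    it probes the loss on both sides of the decision boundaries."""
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = h
        g[j] = (decision_loss(theta + e, Zs, cs)
                - decision_loss(theta - e, Zs, cs)) / (2 * h)
    return g
```

At a parameter value where autodiff would return zero, the perturbed evaluations straddle a decision boundary and recover a usable descent direction.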
3 — 15:00 — Decoupling Learning and Decision-Making: Breaking the $O(\sqrt{T})$ Barrier in Online Resource Allocation with First-Order Methods
Online linear programming plays an important role in both revenue management and resource allocation, and recent research has focused on developing efficient first-order online learning algorithms. Despite their empirical success, first-order methods typically achieve regret no better than $\mathcal{O}(\sqrt{T})$, which is suboptimal compared to the $\mathcal{O}(\log T)$ bound guaranteed by state-of-the-art linear programming (LP)-based online algorithms. This paper establishes several important facts about online linear programming that unveil the challenge for first-order-method-based online algorithms to achieve regret beyond $\mathcal{O}(\sqrt{T})$. To address this challenge, we introduce a new algorithmic framework that decouples learning from decision-making. More importantly, we show for the first time that first-order methods can attain regret $\mathcal{O}(T^{1/3})$ under this new framework. Lastly, we conduct numerical experiments to validate our theoretical findings.
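For context, the classic first-order baseline that this line of work improves on couples learning and decision-making in a single loop: each arrival is accepted if its reward beats the current dual (shadow) price, and that same decision drives the price update. A minimal single-resource sketch (function name and parameterization are illustrative assumptions):

```python
def online_allocation(rewards, costs, budget, eta=0.1):
    """Classic first-order (dual subgradient) baseline for online resource
    allocation: accept request t iff its reward exceeds the current shadow
    price of the resource it consumes, then take a subgradient step on the
    dual price. This coupled scheme is the O(sqrt(T))-regret approach.
    rewards[t]: reward of request t; costs[t]: resource it consumes;
    budget: total resource available over the horizon.
    """
    T = len(rewards)
    rho = budget / T          # target per-period consumption
    p = 0.0                   # dual price (shadow price of the resource)
    total, used = 0.0, 0.0
    decisions = []
    for r, a in zip(rewards, costs):
        x = 1 if (r > p * a and used + a <= budget) else 0
        total += r * x
        used += a * x
        p = max(0.0, p + eta * (a * x - rho))   # subgradient step on the dual
        decisions.append(x)
    return total, decisions
```

Because the price is learned from the very decisions it produces, early mispriced periods feed back into later decisions; the decoupled framework in the talk separates the stream used to learn prices from the stream on which decisions are made.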
4 — 15:30 — Optimizer's Information Criterion: Dissecting and Removing Bias in Data-Driven Optimization
We develop a general bias-correction approach, built on what we call the Optimizer's Information Criterion (OIC), that directly approximates the first-order bias and does not require solving any additional optimization problems. OIC helps evaluate objective performance in data-driven optimization, which crucially involves not only model fitting but also its interplay with the downstream optimization. As such, OIC can be used for decision selection rather than just model selection. We apply our approach to a range of data-driven optimization formulations comprising empirical and parametric models, their regularized counterparts, and contextual optimization. Finally, we provide numerical validation of the superior performance of our approach on synthetic and real-world datasets.
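The bias being corrected is the optimizer's curse: the in-sample objective of a data-driven decision is an optimistically biased estimate of its true performance, because optimization exploits sampling noise. The toy Monte Carlo below only demonstrates that bias on a pick-the-cheapest-arm problem; it is not the OIC formula, which approximates and removes this first-order bias analytically rather than by simulation. All names are hypothetical.

```python
import random
import statistics

def insample_vs_true(mu, sigma, n, trials=2000, seed=1):
    """Optimizer's curse demo: choose the arm with the lowest sample-mean
    cost, then compare the in-sample objective (sample mean of the chosen
    arm) against its true expected cost.
    mu[k]: true mean cost of arm k; sigma: common noise level;
    n: samples per arm; trials: Monte Carlo repetitions.
    """
    rng = random.Random(seed)
    in_sample, true_cost = [], []
    for _ in range(trials):
        means = [sum(rng.gauss(m, sigma) for _ in range(n)) / n for m in mu]
        k = min(range(len(mu)), key=lambda i: means[i])  # data-driven decision
        in_sample.append(means[k])   # optimized in-sample objective
        true_cost.append(mu[k])      # true cost of the chosen decision
    return statistics.mean(in_sample), statistics.mean(true_cost)
```

With several statistically identical arms, the minimum of the sample means sits systematically below the true cost of whichever arm is chosen; a first-order correction of exactly this gap, without re-solving the problem, is what OIC provides.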