1 — 08:30 — Platform Design for the First Mile of Commodity Supply Chains
We propose a data-driven platform that provides traceability to the first mile of agricultural supply chains by coordinating the transactions of farmers and intermediaries. We model unique aspects of the supply chain, including pre-existing informal relationships between farmers and intermediaries, and we develop algorithms to solve real-world instances. We test the results on data from the palm oil supply chain and show the platform’s potential to reduce costs and increase farmers’ welfare.
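To make the coordination task concrete (the talk does not spell out its model, so everything below, including the sets F and I, the costs c_{fi}, and the capacities K_i, is an illustrative assumption), a minimal matching formulation between farmers f and intermediaries i might read:

\[
\min_{x \ge 0} \; \sum_{f \in F} \sum_{i \in I} c_{fi}\, x_{fi}
\quad \text{s.t.} \quad
\sum_{i \in I} x_{fi} = s_f \;\; \forall f \in F, \qquad
\sum_{f \in F} x_{fi} \le K_i \;\; \forall i \in I,
\]

where x_{fi} is the quantity farmer f sells through intermediary i, s_f is f's supply, c_{fi} a transaction cost, and K_i the intermediary's capacity; the pre-existing informal relationships mentioned in the abstract could enter as constraints or cost adjustments on selected pairs (f, i). This is a sketch of the flavor of the problem, not the talk's model.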
2 — 09:00 — An MILP-Based Solution Scheme for Factored and Robust Factored Markov Decision Processes
Factored Markov decision processes (MDPs) are a prominent paradigm within the artificial intelligence community for modeling and solving large-scale MDPs whose rewards and dynamics decompose into smaller, loosely interacting components. Through the use of dynamic Bayesian networks and context-specific independence, factored MDPs can achieve an exponential reduction in the state space of an MDP and thus scale to problem sizes that are beyond the reach of classical MDP algorithms. However, factored MDPs are typically solved using custom-designed algorithms that can require meticulous implementations and considerable fine-tuning. In this paper, we propose a mathematical programming approach to solving factored MDPs. In contrast to existing solution schemes, our approach leverages off-the-shelf solvers, which allows for streamlined implementation and maintenance; it effectively capitalizes on the factored structure present in both state and action spaces; and it readily extends to the largely unexplored class of robust factored MDPs, whose transition kernels are only known to reside in a pre-specified ambiguity set. Our numerical experiments demonstrate the potential of our approach.
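For orientation only (the talk's MILP is not reproduced here), the classical approximate linear program for a factored MDP with basis functions h_i, each depending on only a few state variables, reads:

\[
\min_{w} \; \sum_{x} \alpha(x) \sum_{i} w_i h_i(x)
\quad \text{s.t.} \quad
\sum_{i} w_i h_i(x) \;\ge\; R(x,a) + \gamma \sum_{x'} P(x' \mid x, a) \sum_{i} w_i h_i(x')
\quad \forall x,\, a,
\]

where \alpha is a state-relevance distribution and \gamma the discount factor. Because each h_i touches only a small subset of state variables, the exponentially many constraints can be compiled into a compact mixed-integer program and handed to an off-the-shelf solver, and a robust variant would replace P with a worst-case kernel from the ambiguity set. This is the standard formulation from the factored-MDP literature, offered as an assumption about the flavor of the approach rather than the talk's exact model.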
3 — 09:30 — Wasserstein Adversarial Logistic Regression with Synthetic Data
Empirical risk minimization often fails to provide robustness against adversarial attacks in test data, causing poor out-of-sample performance. Adversarial training (AT) has thus emerged as the de facto standard for hedging against such attacks. Although robust against adversarial attacks, AT does not address robustness against distributional ambiguity and is prone to overfitting. To address this, we study adversarial training of logistic regression (LR) over a Wasserstein ambiguity set around the empirical distribution. Furthermore, we develop a framework to effectively leverage synthetic data in distributionally robust AT to reduce the conservatism of distributional robustness. Focusing on the resulting distributionally robust adversarial LR with synthetic data, we analyze the complexity and properties of the underlying optimization problem, develop tractable approximation algorithms, and demonstrate that our method consistently outperforms benchmark models on standard real-world datasets.
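In symbols, and reading directly off the abstract (the attack radius \alpha and the Wasserstein radius \varepsilon are generic placeholders, not the talk's notation), distributionally robust adversarial training of LR takes the form:

\[
\min_{\beta} \; \sup_{\mathbb{Q} \,:\, W(\mathbb{Q}, \hat{\mathbb{P}}_N) \le \varepsilon} \; \mathbb{E}_{\mathbb{Q}} \Big[ \max_{\|\delta\| \le \alpha} \ell_\beta(x + \delta,\, y) \Big],
\]

where \ell_\beta(x, y) = \log(1 + \exp(-y\, \beta^\top x)) is the logistic loss, the inner maximization models the adversarial attack, and the outer supremum ranges over a Wasserstein ball of radius \varepsilon around the empirical distribution \hat{\mathbb{P}}_N. The synthetic-data component can be thought of as shaping this ambiguity set; the precise mechanism is the talk's contribution and is not reproduced here.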
4 — 10:00 — End-to-end Conditional Robust Optimization
The field of Contextual Optimization (CO) integrates machine learning and optimization to solve decision-making problems under uncertainty. Recently, a risk-sensitive variant of CO, known as Conditional Robust Optimization (CRO), has emerged that combines uncertainty quantification with robust optimization to promote safety and reliability in high-stakes applications. Exploiting modern differentiable optimization methods, we propose a novel end-to-end approach to training a CRO model that accounts for both the empirical risk of the prescribed decisions and the quality of conditional coverage of the contextual uncertainty set that supports them. While guarantees of success for the latter objective are impossible to obtain from the point of view of conformal prediction theory, high-quality conditional coverage is achieved empirically by employing a logistic regression differentiable layer within the calculation of coverage quality in our training loss. We show that the proposed training algorithms produce decisions that outperform traditional estimate-then-optimize approaches.
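As a schematic of the training objective (the weight \lambda and the coverage loss L_{\mathrm{cov}} are placeholders rather than the talk's exact notation), end-to-end CRO trains the parameters \theta of the contextual uncertainty set U_\theta(x) through both the downstream decision and a conditional-coverage term:

\[
\min_{\theta} \; \mathbb{E}_{(x, \xi)} \Big[ c\big(z^{\star}_{\theta}(x),\, \xi\big) \;+\; \lambda\, L_{\mathrm{cov}}\big(U_{\theta}(x),\, \xi\big) \Big],
\qquad
z^{\star}_{\theta}(x) \in \arg\min_{z} \max_{\xi' \in U_{\theta}(x)} c(z, \xi'),
\]

where the first term is the empirical risk of the prescribed robust decision and the second rewards uncertainty sets that cover the realized \xi conditionally on x. Differentiating through the inner robust problem, and, per the abstract, through a logistic-regression layer inside the coverage term, is what makes the pipeline end-to-end; this is a sketch under the stated assumptions, not the talk's exact loss.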