08:30 — Optimal counterfactual explanations for k-Nearest Neighbors using Mathematical Optimization and Constraint Programming

Within the topic of explainable AI, counterfactual explanations for classifiers have received significant recent attention. We study counterfactual explanations that explain why a data point received an undesirable classification by providing the closest data point that would have received a desirable one. Even for one of the simplest and most popular classification models, k-nearest neighbors (k-NN), computing such an optimal counterfactual explanation remains computationally challenging. In this work, we present techniques that significantly improve the computational times needed to find optimal counterfactual explanations for k-NN.
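
To make the problem concrete, here is a minimal, hedged sketch of the task the talk formalizes: find the closest point that a trained k-NN model classifies as desirable. The brute-force line search below is only an illustrative baseline, not the talk's optimal MO/CP method; all data and names are assumptions.

```python
# Naive baseline sketch (NOT the talk's MO/CP formulation): given a
# trained k-NN classifier, walk along segments from the query toward
# training points of the desired class and keep the closest point that
# flips the prediction. Data and names here are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # 1 = desirable class
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

def naive_counterfactual(x0, desired=1, steps=50):
    best, best_dist = None, np.inf
    for xt in X[y == desired]:
        for t in np.linspace(0.0, 1.0, steps):
            x = (1 - t) * x0 + t * xt            # move toward the target
            if knn.predict(x.reshape(1, -1))[0] == desired:
                d = np.linalg.norm(x - x0)
                if d < best_dist:
                    best, best_dist = x, d
                break                            # first flip on this segment
    return best, best_dist

x0 = np.array([-1.5, -1.5])                      # undesirably classified query
print(naive_counterfactual(x0))
```

An exact method replaces this sampling with a formulation whose constraints encode the k-NN decision rule directly, which is where the mathematical optimization and constraint programming machinery comes in.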

09:00 — MORBDD: Multiobjective Restricted Binary Decision Diagrams by Learning to Sparsify

In multicriteria decision-making, a user seeks a set of non-dominated solutions to a (constrained) multiobjective optimization problem, the so-called Pareto frontier. In this work, we seek to bring a state-of-the-art method for exact multiobjective integer linear programming into the heuristic realm. We focus on binary decision diagrams (BDDs), an approach that first constructs a graph representing all feasible solutions to the problem and then traverses it to extract the Pareto frontier. Because the Pareto frontier may be exponentially large, enumerating it over the BDD can be time-consuming. We explore how restricted BDDs, which have already been shown to be effective as heuristics for single-objective problems, can be adapted to multiobjective optimization through the use of machine learning (ML). MORBDD, our ML-based BDD sparsifier, first trains a binary classifier to eliminate BDD nodes that are unlikely to contribute to Pareto solutions, then post-processes the sparse BDD to ensure its connectivity via optimization. Experimental results on multiobjective knapsack problems show that MORBDD is highly effective at producing very small restricted BDDs with excellent approximation quality, outperforming width-limited restricted BDDs and the well-known evolutionary algorithm NSGA-II.
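
The sparsify-then-extract idea can be illustrated on a tiny bi-objective knapsack. The sketch below builds the layered diagram forward, prunes each layer with a stand-in heuristic score in place of MORBDD's trained classifier, and enumerates the resulting (approximate) nondominated set; the connectivity-repair post-processing of the actual method is omitted. All data and the pruning rule are illustrative assumptions.

```python
# Restricted decision diagram for a bi-objective knapsack: each layer maps
# a state (used capacity) to the nondominated profit vectors reaching it.
# The restriction step keeps only a few nodes per layer; MORBDD would use
# a trained classifier to choose them, here a stand-in heuristic is used.
w  = [3, 4, 2, 5]            # item weights (illustrative)
p1 = [4, 5, 3, 6]            # objective-1 profits
p2 = [2, 6, 4, 3]            # objective-2 profits
C  = 8                       # knapsack capacity

def dominated(a, b):         # does b dominate a?
    return b[0] >= a[0] and b[1] >= a[1] and b != a

def nondominated(labels):
    return {a for a in labels if not any(dominated(a, b) for b in labels)}

layers = [{0: {(0, 0)}}]     # root: no capacity used, zero profits
for j in range(len(w)):
    nxt = {}
    for cap, labels in layers[-1].items():
        for take in (0, 1):  # skip or take item j
            c2 = cap + take * w[j]
            if c2 > C:
                continue
            moved = {(a + take * p1[j], b + take * p2[j]) for (a, b) in labels}
            nxt.setdefault(c2, set()).update(moved)
    # Restriction: keep the 3 "widest" states (stand-in for the classifier).
    keep = sorted(nxt, key=lambda s: len(nxt[s]), reverse=True)[:3]
    layers.append({s: nondominated(nxt[s]) for s in keep})

frontier = nondominated(set().union(*layers[-1].values()))
print(sorted(frontier))      # under-approximation of the true Pareto frontier
```

Because nodes are dropped, the extracted frontier is an under-approximation; the quality of the restriction hinges entirely on how well the node-scoring function predicts which states feed Pareto solutions.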

09:30 — Learning Lagrangian Multipliers for the Travelling Salesman Problem

Lagrangian relaxation is a versatile mathematical technique employed to relax constraints in an optimization problem, enabling the generation of dual bounds to prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure, aiming to generate accurate Lagrangian multipliers efficiently.
We apply this technique to the well-known Held-Karp Lagrangian relaxation for the traveling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to employ them as a warm start for generating Held-Karp relaxation bounds. These bounds are subsequently used to enhance the filtering carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate proofs of optimality. We conduct experiments across various distributions of the metric traveling salesman problem, on instances with up to 200 cities. The results show that our approach can improve the filtering level of the weighted circuit global constraint, halve the optimality gap on instances left unsolved at the timeout, and reduce execution time on solved instances by 10%.
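
For context, the sketch below implements the classical Held-Karp 1-tree relaxation with plain subgradient updates of the multipliers u; the talk's contribution is to predict good multipliers with a GNN and use them as a warm start for exactly this kind of loop. Instance data, step size, and iteration count are illustrative assumptions.

```python
# Held-Karp 1-tree lower bound with subgradient ascent on the multipliers.
# A learned warm start would replace the all-zeros initialization of u.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((10, 2))
n = len(pts)
c = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # metric cost matrix

def one_tree_bound(u):
    """L(u): MST on nodes 1..n-1 under modified costs c[i,j] + u[i] + u[j],
    plus the two cheapest modified edges at node 0, minus 2*sum(u)."""
    G = nx.Graph()
    for i, j in itertools.combinations(range(1, n), 2):
        G.add_edge(i, j, weight=c[i, j] + u[i] + u[j])
    T = nx.minimum_spanning_tree(G)
    deg = np.zeros(n)
    cost = 0.0
    for i, j, d in T.edges(data=True):
        cost += d["weight"]
        deg[i] += 1
        deg[j] += 1
    for j in sorted(range(1, n), key=lambda j: c[0, j] + u[0] + u[j])[:2]:
        cost += c[0, j] + u[0] + u[j]
        deg[j] += 1
    deg[0] = 2
    return cost - 2 * u.sum(), deg

u = np.zeros(n)                    # a GNN prediction would warm-start this
for it in range(100):
    lb, deg = one_tree_bound(u)
    u += 0.01 * (deg - 2)          # ascent step on degree violations
print(f"Held-Karp 1-tree lower bound ~ {lb:.3f}")
```

A good warm start matters because every subgradient iteration solves a spanning-tree problem; starting near the optimal multipliers means strong bounds, and therefore strong filtering, are available almost immediately.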

10:00 — Constraint Learning to Define Trust Regions in Optimization Over Pre-Trained Predictive Models

There is a recent proliferation of research on the integration of machine learning and optimization. One expansive area within this research stream is optimization over pre-trained predictive models, which uses pre-trained predictive models as surrogates for uncertain or highly complex objective functions. In this setting, features of the predictive models become decision variables in the optimization problem. Despite a recent surge of publications in this area, only a few papers note the importance of incorporating trust-region considerations into this decision-making pipeline, i.e., enforcing that solutions stay similar to the data used to train the predictive models. Without such constraints, the predictive model's evaluations at solutions obtained from optimization cannot be trusted, and the solutions themselves may be impractical. In this paper, we provide an overview of the approaches in the literature for constructing a trust region and propose three alternative approaches. Our numerical evaluation highlights that trust-region constraints learned through our newly proposed approaches compare favorably with previously suggested approaches, in terms of both solution quality and computational time.
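
The effect of a trust region is easy to demonstrate. The sketch below is not one of the paper's proposed approaches: it trains a surrogate on data from a box, then optimizes over a wider box while rejecting candidates farther than a distance delta from any training point, so the surrogate is never queried where it has no support. All names, data, and the distance-based rule are illustrative assumptions.

```python
# Illustrative trust-region filter (not a method from the paper): optimize
# over a pre-trained regressor, but only accept candidate solutions that
# lie within `delta` of some training point.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2                  # "unknown" true objective
model = RandomForestRegressor(random_state=0).fit(X, y)
nn = NearestNeighbors(n_neighbors=1).fit(X)

def trusted_minimum(n_samples=20000, delta=0.2):
    cand = rng.uniform(-3, 3, size=(n_samples, 2))  # wider than the data!
    dist, _ = nn.kneighbors(cand)
    trusted = cand[dist[:, 0] <= delta]             # trust-region filter
    preds = model.predict(trusted)
    return trusted[preds.argmin()], preds.min()

x_star, f_star = trusted_minimum()
print(x_star, f_star)
```

Without the filter, the optimizer happily exploits the surrogate's arbitrary extrapolation outside the training box; the constraint-learning approaches compared in the paper differ in how they carve out this trusted region, not in whether one is needed.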